dask 2021.10.0

einsum(subscripts, *operands, out=None, dtype=None, order='K', casting='safe', optimize=False)

Some inconsistencies with the Dask version may exist.

Evaluates the Einstein summation convention on the operands.

Using the Einstein summation convention, many common multi-dimensional, linear algebraic array operations can be represented in a simple fashion. In implicit mode einsum computes these values.

In explicit mode, einsum provides further flexibility to compute other array operations that might not be considered classical Einstein summation operations, by disabling, or forcing summation over specified subscript labels.

See the notes and examples for clarification.
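Since this page documents the Dask version of einsum, here is a minimal sketch of calling it through dask.array (assuming dask is installed; on these small inputs the result matches the equivalent NumPy call):

```python
import numpy as np
import dask.array as da

# Small chunked Dask arrays; da.einsum builds a lazy task graph.
x = da.from_array(np.arange(25).reshape(5, 5), chunks=(5, 5))
y = da.from_array(np.arange(5), chunks=(5,))

# Matrix-vector product, evaluated only when .compute() is called.
result = da.einsum('ij,j->i', x, y).compute()
```

The chunk sizes here are chosen trivially (one chunk per array); real workloads would use smaller chunks to exploit parallelism.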

Notes


The Einstein summation convention can be used to compute many multi-dimensional, linear algebraic array operations. einsum provides a succinct way of representing these.

A non-exhaustive list of these operations, which can be computed by einsum, is shown below along with examples:

The subscripts string is a comma-separated list of subscript labels, where each label refers to a dimension of the corresponding operand. Whenever a label is repeated it is summed, so np.einsum('i,i', a, b) is equivalent to np.inner(a, b). If a label appears only once, it is not summed, so np.einsum('i', a) produces a view of a with no changes. A further example np.einsum('ij,jk', a, b) describes traditional matrix multiplication and is equivalent to np.matmul(a, b). Repeated subscript labels in one operand take the diagonal. For example, np.einsum('ii', a) is equivalent to np.trace(a).

In implicit mode, the chosen subscripts are important since the axes of the output are reordered alphabetically. This means that np.einsum('ij', a) doesn't affect a 2D array, while np.einsum('ji', a) takes its transpose. Additionally, np.einsum('ij,jk', a, b) returns a matrix multiplication, while, np.einsum('ij,jh', a, b) returns the transpose of the multiplication since subscript 'h' precedes subscript 'i'.
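The alphabetical reordering in implicit mode can be checked with a small sketch (hypothetical operands, separate from the a, b, c used in the Examples section):

```python
import numpy as np

# Hypothetical small operands for illustration.
a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Implicit mode sorts the surviving labels alphabetically: 'h' < 'i',
# so 'ij,jh' returns the transpose of the matrix product...
assert np.array_equal(np.einsum('ij,jh', a, b), (a @ b).T)

# ...while 'ij,jk' keeps 'i' before 'k' and returns the product itself.
assert np.array_equal(np.einsum('ij,jk', a, b), a @ b)
```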

In explicit mode the output can be directly controlled by specifying output subscript labels. This requires the identifier '->' as well as the list of output subscript labels. This feature increases the flexibility of the function since summing can be disabled or forced when required. The call np.einsum('i->', a) is like np.sum(a, axis=-1), and np.einsum('ii->i', a) is like np.diag(a). The difference is that einsum does not allow broadcasting by default. Additionally np.einsum('ij,jh->ih', a, b) directly specifies the order of the output subscript labels and therefore returns matrix multiplication, unlike the example above in implicit mode.

To enable and control broadcasting, use an ellipsis. Default NumPy-style broadcasting is done by adding an ellipsis to the left of each term, like np.einsum('...ii->...i', a) . To take the trace along the first and last axes, you can do np.einsum('i...i', a) , or to do a matrix-matrix product with the left-most indices instead of rightmost, one can do np.einsum('ij...,jk...->ik...', a, b) .
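The 'i...i' trace form does not appear in the Examples section, so here is a quick sketch (hypothetical array x, checked against np.trace with explicit axes):

```python
import numpy as np

# Hypothetical array whose first and last axes have equal length.
x = np.arange(36).reshape(3, 4, 3)

# 'i...i' sums the diagonal taken over the first and last axes;
# the ellipsis (middle) axes survive in the output.
t = np.einsum('i...i', x)
assert t.shape == (4,)
assert np.array_equal(t, np.trace(x, axis1=0, axis2=2))
```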

When there is only one operand, no axes are summed, and no output parameter is provided, a view into the operand is returned instead of a new array. Thus, taking the diagonal as np.einsum('ii->i', a) produces a view (changed in version 1.10.0).

einsum also provides an alternative way to provide the subscripts and operands as einsum(op0, sublist0, op1, sublist1, ..., [sublistout]) . If the output shape is not provided in this format einsum will be calculated in implicit mode, otherwise it will be performed explicitly. The examples below have corresponding einsum calls with the two parameter methods.

Changed in version 1.10.0

Views returned from einsum are now writeable whenever the input array is writeable. For example, np.einsum('ijk...->kji...', a) will now have the same effect as np.swapaxes(a, 0, 2) and np.einsum('ii->i', a) will return a writeable view of the diagonal of a 2D array.

New in version 1.12.0

Added the optimize argument which will optimize the contraction order of an einsum expression. For a contraction with three or more operands this can greatly increase the computational efficiency at the cost of a larger memory footprint during computation.

Typically a 'greedy' algorithm is applied which empirical tests have shown returns the optimal path in the majority of cases. In some cases 'optimal' will return the superlative path through a more expensive, exhaustive search. For iterative calculations it may be advisable to calculate the optimal path once and reuse that path by supplying it as an argument. An example is given below.

See numpy.einsum_path for more details.

Parameters

subscripts : str

Specifies the subscripts for summation as a comma-separated list of subscript labels. An implicit (classical Einstein summation) calculation is performed unless the explicit indicator '->' is included as well as subscript labels of the precise output form.

operands : list of array_like

These are the arrays for the operation.

out : ndarray, optional (Not supported in Dask)

If provided, the calculation is done into this array.

dtype : {data-type, None}, optional

If provided, forces the calculation to use the data type specified. Note that you may have to also give a more liberal casting parameter to allow the conversions. Default is None.

order : {'C', 'F', 'A', 'K'}, optional

Controls the memory layout of the output. 'C' means it should be C contiguous. 'F' means it should be Fortran contiguous, 'A' means it should be 'F' if the inputs are all 'F', 'C' otherwise. 'K' means it should be as close to the layout of the inputs as is possible, including arbitrarily permuted axes. Default is 'K'.

casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional

Controls what kind of data casting may occur. Setting this to 'unsafe' is not recommended, as it can adversely affect accumulations.

  • 'no' means the data types should not be cast at all.

  • 'equiv' means only byte-order changes are allowed.

  • 'safe' means only casts which can preserve values are allowed.

  • 'same_kind' means only safe casts or casts within a kind, like float64 to float32, are allowed.

  • 'unsafe' means any data conversions may be done.

Default is 'safe'.

optimize : {False, True, 'greedy', 'optimal'}, optional (Not supported in Dask)

Controls if intermediate optimization should occur. No optimization will occur if False and True will default to the 'greedy' algorithm. Also accepts an explicit contraction list from the np.einsum_path function. See np.einsum_path for more details. Defaults to False.

Returns

output : ndarray

The calculation based on the Einstein summation convention.

This docstring was copied from numpy.einsum.

See Also

dot
einops

A similar verbose interface is provided by the einops (https://github.com/arogozhnikov/einops) package to cover additional operations: transpose, reshape/flatten, repeat/tile, squeeze/unsqueeze and reductions.

einsum_path
inner
linalg.multi_dot
opt_einsum

opt_einsum (https://optimized-einsum.readthedocs.io/en/stable/) optimizes contraction order for einsum-like expressions in a backend-agnostic manner.

outer
tensordot

Examples

This example is valid syntax, but we were not able to check execution
>>> a = np.arange(25).reshape(5,5)  # doctest: +SKIP
>>> b = np.arange(5)  # doctest: +SKIP
>>> c = np.arange(6).reshape(2,3)  # doctest: +SKIP

Trace of a matrix:

>>> np.einsum('ii', a)  # doctest: +SKIP
60
>>> np.einsum(a, [0,0])  # doctest: +SKIP
60
>>> np.trace(a)  # doctest: +SKIP
60

Extract the diagonal (requires explicit form):

>>> np.einsum('ii->i', a)  # doctest: +SKIP
array([ 0,  6, 12, 18, 24])
>>> np.einsum(a, [0,0], [0])  # doctest: +SKIP
array([ 0,  6, 12, 18, 24])
>>> np.diag(a)  # doctest: +SKIP
array([ 0,  6, 12, 18, 24])

Sum over an axis (requires explicit form):

>>> np.einsum('ij->i', a)  # doctest: +SKIP
array([ 10,  35,  60,  85, 110])
>>> np.einsum(a, [0,1], [0])  # doctest: +SKIP
array([ 10,  35,  60,  85, 110])
>>> np.sum(a, axis=1)  # doctest: +SKIP
array([ 10,  35,  60,  85, 110])

For higher dimensional arrays summing a single axis can be done with ellipsis:

>>> np.einsum('...j->...', a)  # doctest: +SKIP
array([ 10,  35,  60,  85, 110])
>>> np.einsum(a, [Ellipsis,1], [Ellipsis])  # doctest: +SKIP
array([ 10,  35,  60,  85, 110])

Compute a matrix transpose, or reorder any number of axes:

>>> np.einsum('ji', c)  # doctest: +SKIP
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.einsum('ij->ji', c)  # doctest: +SKIP
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.einsum(c, [1,0])  # doctest: +SKIP
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.transpose(c)  # doctest: +SKIP
array([[0, 3],
       [1, 4],
       [2, 5]])

Vector inner products:

>>> np.einsum('i,i', b, b)  # doctest: +SKIP
30
>>> np.einsum(b, [0], b, [0])  # doctest: +SKIP
30
>>> np.inner(b,b)  # doctest: +SKIP
30

Matrix vector multiplication:

>>> np.einsum('ij,j', a, b)  # doctest: +SKIP
array([ 30,  80, 130, 180, 230])
>>> np.einsum(a, [0,1], b, [1])  # doctest: +SKIP
array([ 30,  80, 130, 180, 230])
>>> np.dot(a, b)  # doctest: +SKIP
array([ 30,  80, 130, 180, 230])
>>> np.einsum('...j,j', a, b)  # doctest: +SKIP
array([ 30,  80, 130, 180, 230])

Broadcasting and scalar multiplication:

>>> np.einsum('..., ...', 3, c)  # doctest: +SKIP
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.einsum(',ij', 3, c)  # doctest: +SKIP
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.einsum(3, [Ellipsis], c, [Ellipsis])  # doctest: +SKIP
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.multiply(3, c)  # doctest: +SKIP
array([[ 0,  3,  6],
       [ 9, 12, 15]])

Vector outer product:

>>> np.einsum('i,j', np.arange(2)+1, b)  # doctest: +SKIP
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>> np.einsum(np.arange(2)+1, [0], b, [1])  # doctest: +SKIP
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>> np.outer(np.arange(2)+1, b)  # doctest: +SKIP
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])

Tensor contraction:

>>> a = np.arange(60.).reshape(3,4,5)  # doctest: +SKIP
>>> b = np.arange(24.).reshape(4,3,2)  # doctest: +SKIP
>>> np.einsum('ijk,jil->kl', a, b)  # doctest: +SKIP
array([[4400., 4730.],
       [4532., 4874.],
       [4664., 5018.],
       [4796., 5162.],
       [4928., 5306.]])
>>> np.einsum(a, [0,1,2], b, [1,0,3], [2,3])  # doctest: +SKIP
array([[4400., 4730.],
       [4532., 4874.],
       [4664., 5018.],
       [4796., 5162.],
       [4928., 5306.]])
>>> np.tensordot(a,b, axes=([1,0],[0,1]))  # doctest: +SKIP
array([[4400., 4730.],
       [4532., 4874.],
       [4664., 5018.],
       [4796., 5162.],
       [4928., 5306.]])

Writeable returned arrays (since version 1.10.0):

>>> a = np.zeros((3, 3))  # doctest: +SKIP
>>> np.einsum('ii->i', a)[:] = 1  # doctest: +SKIP
>>> a  # doctest: +SKIP
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

Example of ellipsis use:

>>> a = np.arange(6).reshape((3,2))  # doctest: +SKIP
>>> b = np.arange(12).reshape((4,3))  # doctest: +SKIP
>>> np.einsum('ki,jk->ij', a, b)  # doctest: +SKIP
array([[10, 28, 46, 64],
       [13, 40, 67, 94]])
>>> np.einsum('ki,...k->i...', a, b)  # doctest: +SKIP
array([[10, 28, 46, 64],
       [13, 40, 67, 94]])
>>> np.einsum('k...,jk', a, b)  # doctest: +SKIP
array([[10, 28, 46, 64],
       [13, 40, 67, 94]])

Chained array operations. For more complicated contractions, speed ups might be achieved by repeatedly computing a 'greedy' path or pre-computing the 'optimal' path and repeatedly applying it, using an einsum_path insertion (since version 1.12.0). Performance improvements can be particularly significant with larger arrays:

>>> a = np.ones(64).reshape(2,4,8)  # doctest: +SKIP

Basic einsum : ~1520ms (benchmarked on 3.1GHz Intel i5.)

>>> for iteration in range(500):  # doctest: +SKIP
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a)

Sub-optimal einsum (due to repeated path calculation time): ~330ms

>>> for iteration in range(500):  # doctest: +SKIP
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='optimal')

Greedy einsum (faster optimal path approximation): ~160ms

>>> for iteration in range(500):  # doctest: +SKIP
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='greedy')

Optimal einsum (best usage pattern in some use cases): ~110ms

>>> path = np.einsum_path('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='optimal')[0]  # doctest: +SKIP
>>> for iteration in range(500):  # doctest: +SKIP
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=path)

Back References

The following pages refer to this document either explicitly or contain code examples using this.

dask.array.einsumfuncs.einsum



File: /dask/array/einsumfuncs.py#196
type: <class 'function'>
Commit: