distributed 2021.10.0

scatter(self, data, workers=None, broadcast=False, direct=None, hash=True, timeout='__no_default__', asynchronous=None)

Scatter data into distributed memory.

This moves data from the local client process into the workers of the distributed scheduler. Note that it is often better to submit jobs to your workers to have them load the data rather than loading data locally and then scattering it out to them.
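
As a rough illustration of that advice (not part of the original docstring; load_csv is a hypothetical loading function and the scheduler address is assumed):

>>> from dask.distributed import Client
>>> c = Client('127.0.0.1:8787')               # doctest: +SKIP
>>> # Loading locally and then scattering moves the data twice:
>>> df = load_csv('data.csv')                  # doctest: +SKIP
>>> df_future = c.scatter(df)                  # doctest: +SKIP
>>> # Often better: have a worker load the data itself:
>>> df_future = c.submit(load_csv, 'data.csv') # doctest: +SKIP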

Parameters

data : list, dict, or object

Data to scatter out to workers. Output type matches input type.

workers : list of tuples (optional)

Optionally constrain locations of data. Specify workers as hostname/port pairs, e.g. ('127.0.0.1', 8787).

broadcast : bool (defaults to False)

Whether to send each data element to all workers. By default we round-robin based on the number of cores.

direct : bool (defaults to automatically check)

Whether to connect directly to the workers, or to ask the scheduler to serve as an intermediary. This can also be set when creating the Client.

hash : bool (optional)

Whether to hash data to determine the key. If False, a random key is used instead (see the sketch after this parameter list).
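
As an unverified sketch of the hash parameter's effect (assuming a connected client c as in the examples below): with the default hash=True, identical data is assigned the same deterministic, content-based key, while hash=False generates a fresh random key on each call.

>>> a = c.scatter(1)              # doctest: +SKIP
>>> b = c.scatter(1)              # doctest: +SKIP
>>> a.key == b.key                # doctest: +SKIP
True
>>> d = c.scatter(1, hash=False)  # doctest: +SKIP
>>> d.key == a.key                # doctest: +SKIP
False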

Returns

List, dict, iterator, or queue of futures matching the type of input.

See Also

Client.gather

Gather data back to local process
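
A brief sketch of the round trip with Client.gather (assuming a connected client c; output not verified here):

>>> futures = c.scatter([1, 2, 3])  # doctest: +SKIP
>>> c.gather(futures)               # doctest: +SKIP
[1, 2, 3]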

Examples

>>> c = Client('127.0.0.1:8787')  # doctest: +SKIP
>>> c.scatter(1)  # doctest: +SKIP
<Future: status: finished, key: c0a8a20f903a4915b94db8de3ea63195>
>>> c.scatter([1, 2, 3])  # doctest: +SKIP
[<Future: status: finished, key: c0a8a20f903a4915b94db8de3ea63195>,
 <Future: status: finished, key: 58e78e1b34eb49a68c65b54815d1b158>,
 <Future: status: finished, key: d3395e15f605bc35ab1bac6341a285e2>]
>>> c.scatter({'x': 1, 'y': 2, 'z': 3})  # doctest: +SKIP
{'x': <Future: status: finished, key: x>,
 'y': <Future: status: finished, key: y>,
 'z': <Future: status: finished, key: z>}

Constrain location of data to subset of workers

>>> c.scatter([1, 2, 3], workers=[('hostname', 8788)])   # doctest: +SKIP

Broadcast data to all workers

>>> [future] = c.scatter([element], broadcast=True)  # doctest: +SKIP

Send scattered data to parallelized function using client futures interface

>>> data = c.scatter(data, broadcast=True)  # doctest: +SKIP
>>> res = [c.submit(func, data, i) for i in range(100)]  # doctest: +SKIP

Back References

The following pages refer to this document explicitly or contain code examples that use it.

distributed.client.Client.gather
distributed.client.Client.scatter



File: /distributed/client.py#2087
type: <class 'function'>
Commit: