dask 2021.10.0

persist(*args, **kwargs)

Persist multiple Dask collections into memory.

This turns lazy Dask collections into Dask collections with the same metadata, but with their results now fully computed or actively computing in the background.

For example, a lazy dask.array built up from many lazy calls will now be a dask.array of the same shape, dtype, chunks, etc., but with all of those previously lazy tasks either computed in memory as many small numpy.array objects (in the single-machine case) or running asynchronously in the background on a cluster (in the distributed case).
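
For instance, a minimal sketch of the single-machine case (this code is illustrative, assuming only dask.array and NumPy are installed):

>>> import dask.array as da
>>> x = da.ones((1000, 1000), chunks=(100, 100))   # lazy collection
>>> y = (x + x.T).sum(axis=0)                      # still lazy; no work done yet
>>> y = y.persist()    # same shape, dtype and chunks, but chunks now in memory
>>> y.sum().compute()  # subsequent computations reuse the persisted chunks
2000000.0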

This function behaves differently if a dask.distributed.Client exists and is connected to a distributed scheduler. In that case, it returns as soon as the task graph has been submitted to the cluster, before the computations have completed; the computations then continue asynchronously in the background. With the single-machine scheduler, this function blocks until the computations have finished.
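
As an illustration, a hedged sketch of the distributed case (assumes dask.distributed is installed; persist returns immediately and the work proceeds in the background):

>>> import dask.array as da  # doctest: +SKIP
>>> from dask.distributed import Client, wait  # doctest: +SKIP
>>> client = Client()  # start a local cluster  # doctest: +SKIP
>>> x = da.random.random((1000, 1000), chunks=(100, 100))  # doctest: +SKIP
>>> x = x.persist()  # returns at once; chunks compute in the background  # doctest: +SKIP
>>> wait(x)  # optionally block until every chunk has finished  # doctest: +SKIP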

When using Dask on a single machine, you should ensure that the dataset fits entirely in memory.

Parameters

*args : Dask collections
scheduler : string, optional

Which scheduler to use, such as "threads", "synchronous" or "processes". If not provided, the default is to check the global settings first, and then fall back to the collection defaults. (A short sketch combining these keywords follows the parameter list.)

traverse : bool, optional

By default, Dask traverses built-in Python collections looking for Dask objects passed to persist. For large collections this can be expensive. If none of the arguments contain any Dask objects, set traverse=False to avoid doing this traversal.

optimize_graph : bool, optional

If True (the default), the graph is optimized before computation; otherwise the graph is run as is, which can be useful for debugging.

**kwargs :

Extra keywords to forward to the scheduler function.
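
For example, a sketch combining these keywords (the values here are illustrative, not recommendations):

>>> import dask
>>> import dask.bag as db
>>> b = db.from_sequence(range(10), npartitions=2).map(lambda i: i ** 2)
>>> (b,) = dask.persist(b, scheduler="threads", traverse=False,
...                     optimize_graph=True)
>>> b.sum().compute()
285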

Returns

New dask collections backed by in-memory data


Examples

>>> df = dd.read_csv('/path/to/*.csv')  # doctest: +SKIP
>>> df = df[df.name == 'Alice']  # doctest: +SKIP
>>> df['in-debt'] = df.balance < 0  # doctest: +SKIP
>>> df = df.persist()  # triggers computation  # doctest: +SKIP
>>> df.value().min()  # future computations are now fast  # doctest: +SKIP
-10
>>> df.value().max()  # doctest: +SKIP
100
>>> from dask import persist  # use persist function on multiple collections
>>> a, b = persist(a, b)  # doctest: +SKIP
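
Because the snippets above are skipped doctests, here is a self-contained sketch that runs end to end (the data is made up for illustration):

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({"name": ["Alice", "Bob", "Alice"],
...                     "balance": [-10, 5, 100]})
>>> df = dd.from_pandas(pdf, npartitions=2)
>>> df = df[df.name == "Alice"]
>>> df["in-debt"] = df.balance < 0
>>> df = df.persist()  # single-machine scheduler: blocks until finished
>>> int(df.balance.min().compute())
-10
>>> int(df.balance.max().compute())
100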

Back References

The following pages refer to this document explicitly or contain code examples that use it:

dask.base.persist, dask.base.DaskMethodsMixin.persist



File: /dask/base.py#682
type: <class 'function'>