compute(self, **kwargs)
Compute this dask collection.
This turns a lazy Dask collection into its in-memory equivalent. For example, a Dask array turns into a NumPy array and a Dask DataFrame turns into a pandas DataFrame. The entire dataset must fit into memory before calling this operation.
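A minimal sketch of the basic pattern, assuming dask.array and NumPy are available; the shape, chunk sizes, and values below are illustrative only:

>>> import dask.array as da
>>> x = da.ones((1000, 1000), chunks=(250, 250))   # lazy: only a task graph so far
>>> total = x.sum()                                # still lazy, nothing computed yet
>>> print(total.compute())                         # execute the graph, get a NumPy scalar
1000000.0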
scheduler : string, optional
    Which scheduler to use, such as "threads", "synchronous", or "processes". If not provided, the default is to check the global settings first, and then fall back to the collection defaults.
optimize_graph : bool, optional
    If True (default), the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.
kwargs
    Extra keyword arguments to forward to the scheduler function.
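A sketch of passing these keywords through compute(), again assuming dask.array is available; "threads" and "synchronous" are two of the standard single-machine scheduler names:

>>> import dask.array as da
>>> x = da.random.random((2000, 2000), chunks=(500, 500))      # lazy random array
>>> y = (x + x.T).mean()                                       # lazy reduction
>>> result = y.compute(scheduler="threads")                    # run on the threaded scheduler
>>> debug = y.compute(scheduler="synchronous", optimize_graph=False)  # single-threaded, unoptimized graph, handy for debugging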
The following pages refer to this document explicitly or contain code examples that use it.
dask.array.routines.histogram
dask.array.reductions.topk
dask.array.random.RandomState
dask.array.core.map_blocks
dask.array.routines.histogram2d
dask.array.blockwise.blockwise
dask.array.core.from_func
dask.array.tiledb_io.from_tiledb
dask.array.core.from_delayed
dask.array.routines.histogramdd
dask.array.core.Array.map_overlap
dask.array.reductions.argtopk
dask.array.overlap.map_overlap
dask.base.optimize