publish_dataset(self, *args, **kwargs)
Publish named datasets to scheduler.

This stores a named reference to a dask collection or list of futures on the scheduler. These references are available to other Clients, which can download the collection or futures with ``get_dataset``.

Datasets are not immediately computed. You may wish to call ``Client.persist`` prior to publishing a dataset.

Parameters
----------
kwargs : dict
    named collections to publish on the scheduler
override : bool, optional
    if true, override any already present dataset with the same name
Examples
--------
Publishing client:

>>> df = dd.read_csv('s3://...')  # doctest: +SKIP
>>> df = c.persist(df)  # doctest: +SKIP
>>> c.publish_dataset(my_dataset=df)  # doctest: +SKIP

Alternative invocation:

>>> c.publish_dataset(df, name='my_dataset')  # doctest: +SKIP

Receiving client:

>>> c.list_datasets()  # doctest: +SKIP
['my_dataset']
>>> df2 = c.get_dataset('my_dataset')  # doctest: +SKIP
See Also
--------
distributed.client.Client.get_dataset
distributed.client.Client.unpublish_dataset
distributed.client.Client.list_datasets
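To make the publish/get flow concrete without requiring a running scheduler, the following is a minimal sketch of the pattern that ``publish_dataset``, ``list_datasets``, and ``get_dataset`` expose: a name-to-object registry with an ``override`` guard. The ``DatasetRegistry`` class and its internals are illustrative stand-ins, not Dask's actual scheduler implementation.

```python
class DatasetRegistry:
    """Toy stand-in for the scheduler's named-dataset store (illustrative only)."""

    def __init__(self):
        self._datasets = {}

    def publish_dataset(self, *args, override=False, **kwargs):
        # Mirror the two invocation styles:
        #   publish_dataset(my_dataset=df)        -> keyword form
        #   publish_dataset(df, name='my_dataset') -> positional form
        named = dict(kwargs)
        name = named.pop("name", None)
        if args:
            if name is None:
                raise ValueError("positional datasets need a name= keyword")
            if len(args) != 1:
                raise ValueError("publish one positional dataset at a time")
            named[name] = args[0]
        for key, value in named.items():
            # Refuse to clobber an existing name unless override=True,
            # matching the documented `override` parameter.
            if key in self._datasets and not override:
                raise KeyError(f"dataset {key!r} already exists")
            self._datasets[key] = value

    def list_datasets(self):
        return sorted(self._datasets)

    def get_dataset(self, name):
        return self._datasets[name]


registry = DatasetRegistry()
registry.publish_dataset(my_dataset=[1, 2, 3])   # keyword form
registry.publish_dataset([4, 5], name="other")   # positional form
print(registry.list_datasets())                  # ['my_dataset', 'other']
print(registry.get_dataset("my_dataset"))        # [1, 2, 3]
```

In the real API the stored value is a reference to a collection or list of futures held by the scheduler, so publishing is cheap; the data itself stays on the workers until a receiving client materializes it.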