distributed 2021.10.0

workers_to_close(self, comm=None, memory_ratio: 'int | float | None' = None, n: 'int | None' = None, key: 'Callable[[WorkerState], Hashable] | None' = None, minimum: 'int | None' = None, target: 'int | None' = None, attribute: str = 'address') -> 'list[str]'

Find workers that we can close with low cost.

This returns a list of workers that are good candidates to retire. These workers are not running anything and are storing relatively little data compared to their peers. If all workers are idle, we still maintain enough workers to hold all of our data in RAM, with a comfortable buffer.

This is for use with systems like distributed.deploy.adaptive.
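A minimal sketch of how this method is reached through adaptive scaling, assuming a LocalCluster; when scaling down, Adaptive consults the scheduler's workers_to_close to pick cheap workers to retire:

    from dask.distributed import Client, LocalCluster

    cluster = LocalCluster(n_workers=4)
    # Adaptive scaling asks the scheduler which workers are cheap to retire
    cluster.adapt(minimum=1, maximum=4)
    client = Client(cluster)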

Parameters

memory_ratio : Number

Amount of extra space we want to have for our stored data. Defaults to 2, meaning that we want to have twice as much memory as we currently have data. (A simplified sketch of this rule follows the parameter list.)

n : int

Number of workers to close

minimum : int

Minimum number of workers to keep around

key : Callable(WorkerState)

An optional callable mapping a WorkerState object to a group affiliation. Groups will be closed together. This is useful when workers must be closed collectively, such as by hostname.

target : int

Target number of workers to have after we close

attribute : str

The attribute of the WorkerState object to return, like "address" or "name". Defaults to "address".
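As a rough illustration of the memory_ratio rule above, here is a simplified sketch, not the scheduler's actual implementation; worker states are reduced to hypothetical (address, memory_limit_bytes, stored_bytes) tuples. Workers storing the least data are considered first, and closing stops as soon as the remaining memory limits would no longer cover memory_ratio times the total stored data:

    def sketch_workers_to_close(workers, memory_ratio=2):
        """workers: iterable of (address, memory_limit_bytes, stored_bytes)."""
        total_stored = sum(stored for _, _, stored in workers)
        remaining_limit = sum(limit for _, limit, _ in workers)
        to_close = []
        # Consider the workers storing the least data first
        for address, limit, stored in sorted(workers, key=lambda w: w[2]):
            # Keep enough memory to hold memory_ratio times our data
            if remaining_limit - limit < memory_ratio * total_stored:
                break
            remaining_limit -= limit
            to_close.append(address)
        return to_close

The real method also skips workers that are running tasks and honors n, target, minimum, and key; this sketch only shows the memory-buffer condition.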

Returns

to_close: list of worker addresses that are OK to close


See Also

Scheduler.retire_workers

Examples

>>> scheduler.workers_to_close()
['tcp://192.168.0.1:1234', 'tcp://192.168.0.2:1234']

Group workers by hostname prior to closing

>>> scheduler.workers_to_close(key=lambda ws: ws.host)
['tcp://192.168.0.1:1234', 'tcp://192.168.0.1:4567']

Remove two workers

>>> scheduler.workers_to_close(n=2)

Keep enough workers to have twice as much memory as we need.

>>> scheduler.workers_to_close(memory_ratio=2)
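The resulting addresses are typically passed on to Scheduler.retire_workers, which is an async method; a hedged, unchecked sketch assuming an async context with access to the scheduler:

>>> await scheduler.retire_workers(workers=scheduler.workers_to_close(n=2))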

Back References

The following pages refer to this document either explicitly or contain code examples that use it.

distributed.deploy.adaptive.Adaptive.workers_to_close
distributed.scheduler.Scheduler.retire_workers



File: /distributed/scheduler.py#6528