distributed 2021.10.0

Parameters

This class controls our adaptive scaling behavior. It is intended to be used as a superclass or mixin. It expects the following state and methods:

State

plan: set

A set of workers that we think should exist. Here and below worker is just a token, often an address or name string

requested: set

A set of workers that the cluster class has successfully requested from the resource manager. We expect the resource manager to work to make these workers exist.

observed: set

A set of workers that have successfully checked in with the scheduler

These sets are not necessarily equivalent. Often plan and requested will be very similar (requesting is usually fast) but there may be a large delay between requested and observed (often resource managers don't give us what we want).
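The divergence between these three sets can be illustrated with plain Python sets. This is a hypothetical illustration (the worker names are made up, not part of the distributed API):

```python
# Hypothetical illustration: how the three worker sets can diverge
# during a scale-up, before the resource manager delivers everything.
plan = {"worker-1", "worker-2", "worker-3"}       # workers we think should exist
requested = {"worker-1", "worker-2", "worker-3"}  # asked of the resource manager
observed = {"worker-1"}                           # checked in with the scheduler

# plan and requested usually match closely (requesting is fast)
pending_request = plan - requested      # -> set()
# but the resource manager may lag in delivering workers
pending_arrival = requested - observed  # -> {"worker-2", "worker-3"}
print(sorted(pending_arrival))
```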

Functions

target

workers_to_close

scale_up

scale_down

Parameters

minimum: int

The minimum number of allowed workers

maximum: int | inf

The maximum number of allowed workers

wait_count: int

The number of scale-down requests we should receive before actually scaling down

interval: str

The amount of time between checks, like "1s"
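
The interplay of these parameters can be sketched in plain Python. The clamp helper and the debounce loop below are hypothetical illustrations of the described behavior, not the library's actual logic:

```python
# Hypothetical sketch: minimum/maximum clamp the worker count, and
# wait_count debounces scale-down so we only act after several
# consecutive scale-down recommendations (one per `interval` check).
minimum, maximum = 1, 10
wait_count = 3

def clamp(target: int) -> int:
    """Keep the requested worker count within [minimum, maximum]."""
    return max(minimum, min(target, maximum))

consecutive_down = 0
acted = []
for recommended in [5, 4, 4, 4, 4]:  # targets below the current size of 6
    consecutive_down += 1
    if consecutive_down >= wait_count:
        # scale-down is applied only from the third consecutive check on
        acted.append(clamp(recommended))

print(acted)
```

This debouncing avoids thrashing: a single noisy dip in load does not immediately retire workers.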

The core logic for adaptive deployments, with none of the cluster details

Examples

See :



File: /distributed/deploy/adaptive_core.py#24