distributed 2021.10.0

A maximum sized pool of Comm objects.

This provides a connect method that mirrors the normal distributed.connect method, but adds connection sharing and tracks connection limits.

This object provides an rpc-like interface:

>>> rpc = ConnectionPool(limit=512)
>>> scheduler = rpc('127.0.0.1:8786')
>>> workers = [rpc(address) for address in addresses]

>>> info = yield scheduler.identity()

It creates enough comms to satisfy concurrent connections to any particular address:

>>> a, b = yield [scheduler.who_has(), scheduler.has_what()]
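
Putting the snippets above together, a minimal self-contained sketch might look as follows. It assumes a scheduler is already listening at 127.0.0.1:8786, uses await and asyncio.gather in place of the older yield style shown in the doctests, and assumes the pool is usable directly after construction and closed with an awaited pool.close(); the surrounding main coroutine is illustrative only, not part of the API.

import asyncio

from distributed.core import ConnectionPool


async def main():
    # One shared pool; at most 512 comms are kept open at any time.
    pool = ConnectionPool(limit=512)

    # rpc-style proxy bound to a particular address (assumed reachable).
    scheduler = pool("127.0.0.1:8786")

    # A single request travels over a pooled comm.
    info = await scheduler.identity()

    # Concurrent requests to the same address: the pool opens as many comms
    # as needed to serve them in parallel, then keeps them around for reuse.
    who_has, has_what = await asyncio.gather(
        scheduler.who_has(), scheduler.has_what()
    )

    await pool.close()
    return info, who_has, has_what


if __name__ == "__main__":
    asyncio.run(main())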

It reuses existing comms so that we don't have to continuously reconnect.

It also maintains a comm limit to avoid "too many open file handles" issues. Whenever this maximum is reached we clear out all idle comms. If that doesn't do the trick then we wait until one of the occupied comms closes.
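
As a rough illustration of that recycling behaviour, the hedged sketch below funnels many concurrent requests through a pool with a deliberately small limit; the worker addresses are hypothetical and assumed to be running servers. The pool reuses idle comms where possible, clears idle ones when the limit is reached, and only makes a request wait when every open comm is busy.

import asyncio

from distributed.core import ConnectionPool


async def ping_all(addresses, requests_per_address=10):
    # A small limit forces the pool to recycle comms aggressively.
    pool = ConnectionPool(limit=4)
    try:
        # Many concurrent identity() calls share at most 4 open comms:
        # idle comms are reused or cleared, and the remaining calls wait
        # for an occupied comm to be released.
        return await asyncio.gather(
            *(
                pool(addr).identity()
                for addr in addresses
                for _ in range(requests_per_address)
            )
        )
    finally:
        await pool.close()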

Parameters

limit : int
    The number of open comms to maintain at once

deserialize : bool
    Whether or not to deserialize data by default or pass it through
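
For completeness, a construction sketch using both parameters: deserialize=False leaves message payloads in their serialized form for the caller to handle, while limit bounds how many comms may be open at once. The parameter names are as documented above; the chosen values are arbitrary.

from distributed.core import ConnectionPool

# Keep at most 256 comms open and pass message payloads through
# without deserializing them.
pool = ConnectionPool(limit=256, deserialize=False)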

Local connectivity graph

Interactive graph of the nodes that refer to (or are referred from) the current node, capped at 50 nodes. Edges from Self to other nodes are omitted (otherwise every node would be connected to the central "self" node). Nodes are colored by the library they belong to and scaled by the number of references pointing to them.


File: /distributed/core.py#886
type: <class 'type'>