groupby(self, by=None, axis: 'Axis' = 0, level: 'Level | None' = None, as_index: 'bool' = True, sort: 'bool' = True, group_keys: 'bool' = True, squeeze: 'bool | lib.NoDefault' = <no_default>, observed: 'bool' = False, dropna: 'bool' = True) -> 'DataFrameGroupBy'
Group DataFrame using a mapper or by a Series of columns.

A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.
See the user guide (https://pandas.pydata.org/pandas-docs/stable/groupby.html) for more detailed usage and examples, including splitting an object into groups, iterating through groups, selecting a group, aggregation, and more.
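As a quick orientation, here is a minimal sketch of those steps (the small ``df`` used here is illustrative and mirrors the Examples further below):

>>> import pandas as pd
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...                    'Max Speed': [380., 370., 24., 26.]})
>>> grouped = df.groupby('Animal')          # split into groups
>>> for name, group in grouped:             # iterate through groups
...     print(name, len(group))
Falcon 2
Parrot 2
>>> grouped.get_group('Parrot')             # select a single group
   Animal  Max Speed
2  Parrot       24.0
3  Parrot       26.0
>>> grouped['Max Speed'].mean()             # aggregate per group
Animal
Falcon    375.0
Parrot     25.0
Name: Max Speed, dtype: float64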
Parameters

by : mapping, function, label, or list of labels
    Used to determine the groups for the groupby. If ``by`` is a function, it is called on each value of the object's index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series' values are first aligned; see the ``.align()`` method). If a list or ndarray of length equal to the selected axis is passed (see the groupby user guide: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#splitting-an-object-into-groups), the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in ``self``. Notice that a tuple is interpreted as a (single) key.
axis : {0 or 'index', 1 or 'columns'}, default 0
    Split along rows (0) or columns (1).
level : int, level name, or sequence of such, default None
    If the axis is a MultiIndex (hierarchical), group by a particular level or levels.
as_index : bool, default True
    For aggregated output, return an object with group labels as the index. Only relevant for DataFrame input. ``as_index=False`` is effectively "SQL-style" grouped output (illustrated in the sketch following this parameter list).
sort : bool, default True
    Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.
group_keys : bool, default True
    When calling ``apply``, add group keys to the index to identify pieces.
squeeze : bool, default False
    Reduce the dimensionality of the return type if possible, otherwise return a consistent type.
observed : bool, default False
    This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.
dropna : bool, default True
    If True, and if group keys contain NA values, NA values together with the row/column will be dropped. If False, NA values will also be treated as the key in groups.
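A minimal sketch of how ``as_index``, ``sort``, and ``observed`` affect the result; the small frames ``df``, ``df2``, and ``df3`` below are illustrative, not part of the library's own examples:

>>> import pandas as pd
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...                    'Max Speed': [380., 370., 24., 26.]})
>>> df.groupby('Animal', as_index=False).mean()    # "SQL-style": group labels stay a column
   Animal  Max Speed
0  Falcon      375.0
1  Parrot       25.0
>>> df2 = pd.DataFrame({'key': ['b', 'a', 'b', 'a'], 'val': [1, 2, 3, 4]})
>>> df2.groupby('key').sum()                       # group keys sorted (default)
     val
key
a      6
b      4
>>> df2.groupby('key', sort=False).sum()           # keys kept in order of first appearance
     val
key
b      4
a      6
>>> cat = pd.Categorical(['falcon', 'falcon'], categories=['falcon', 'parrot'])
>>> df3 = pd.DataFrame({'animal': cat, 'speed': [380., 370.]})
>>> df3.groupby('animal', observed=False)['speed'].mean()   # unobserved category kept
animal
falcon    375.0
parrot      NaN
Name: speed, dtype: float64
>>> df3.groupby('animal', observed=True)['speed'].mean()    # only observed categories
animal
falcon    375.0
Name: speed, dtype: float64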
Returns

DataFrameGroupBy
    Returns a groupby object that contains information about the groups.
See Also

resample
    Convenience method for frequency conversion and resampling of time series.
Examples

>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon',
...                               'Parrot', 'Parrot'],
...                    'Max Speed': [380., 370., 24., 26.]})
>>> df
   Animal  Max Speed
0  Falcon      380.0
1  Falcon      370.0
2  Parrot       24.0
3  Parrot       26.0
>>> df.groupby(['Animal']).mean()
        Max Speed
Animal
Falcon      375.0
Parrot       25.0
Hierarchical Indexes

We can groupby different levels of a hierarchical index using the ``level`` parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...           ['Captive', 'Wild', 'Captive', 'Wild']]
>>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type'))
>>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]},
...                   index=index)
>>> df
                Max Speed
Animal Type
Falcon Captive      390.0
       Wild         350.0
Parrot Captive       30.0
       Wild          20.0
>>> df.groupby(level=0).mean()
        Max Speed
Animal
Falcon      370.0
Parrot       25.0
>>> df.groupby(level="Type").mean()
         Max Speed
Type
Captive      210.0
Wild         185.0
We can also choose whether to include NA in group keys by setting the ``dropna`` parameter; the default setting is ``True``.
>>> l = [[1, 2, 3], [1, None, 4], [2, 1, 3], [1, 2, 2]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])

>>> df.groupby(by=["b"]).sum()
     a  c
b
1.0  2  3
2.0  2  5

>>> df.groupby(by=["b"], dropna=False).sum()
     a  c
b
1.0  2  3
2.0  2  5
NaN  1  4

>>> l = [["a", 12, 12], [None, 12.3, 33.], ["b", 12.3, 123], ["a", 1, 1]]
>>> df = pd.DataFrame(l, columns=["a", "b", "c"])

>>> df.groupby(by="a").sum()
        b      c
a
a    13.0   13.0
b    12.3  123.0

>>> df.groupby(by="a", dropna=False).sum()
        b      c
a
a    13.0   13.0
b    12.3  123.0
NaN  12.3   33.0
The following pages refer to this document explicitly or contain code examples that use it.
pandas.core.groupby.groupby.GroupBy.mean
pandas.core.groupby.groupby.GroupBy.ewm
pandas.core.groupby.groupby.GroupBy.rolling
pandas.core.groupby.groupby.GroupBy.any
pandas.core.groupby.groupby.GroupBy.nth
pandas.core.groupby.groupby.GroupBy.cummax
pandas.core.groupby.groupby.GroupBy.std
pandas.core.groupby.groupby.GroupBy.size
pandas.core.groupby.groupby.GroupBy.all
pandas.core.groupby.groupby.GroupBy.expanding
pandas.core.groupby.groupby.GroupBy.cumsum
pandas.core.groupby.groupby.GroupBy.tail
pandas.core.groupby.groupby.GroupBy.ohlc
pandas.core.groupby.groupby.GroupBy.cummin
pandas.core.groupby.groupby.GroupBy.var
pandas.core.groupby.groupby.GroupBy.sem
pandas.core.groupby.groupby.GroupBy.cumprod
pandas.core.groupby.groupby.GroupBy.median
pandas.core.groupby.groupby.GroupBy.pct_change
pandas.core.groupby.groupby.GroupBy.head
pandas.core.groupby.groupby.GroupBy.count
pandas.core.groupby.groupby.GroupBy.rank