GroupBy: Group and Bin Data#
Often we want to bin or group data, produce statistics (mean, variance) on the groups, and then return a reduced data set. To do this, Xarray supports “group by” operations with the same API as pandas to implement the split-apply-combine strategy:
1. Split your data into multiple independent groups.
2. Apply some function to each group.
3. Combine your groups back into a single data object.
Group by operations work on both Dataset and DataArray objects. Most of the examples focus on grouping by a single one-dimensional variable, although support for grouping over a multi-dimensional variable has recently been implemented. Note that for one-dimensional data, it is usually faster to rely on pandas’ implementation of the same pipeline.
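For one-dimensional data, the pandas version of the same split-apply-combine pipeline looks like this (a minimal sketch with illustrative data, not an Xarray API):

```python
import pandas as pd

# illustrative 1-D data: group "foo" by the "letters" column
df = pd.DataFrame({"letters": list("abba"), "foo": [1.0, 2.0, 4.0, 8.0]})

# split by unique letters, apply a mean to each group, combine the results
means = df.groupby("letters")["foo"].mean()
# group 'a' averages rows 0 and 3; group 'b' averages rows 1 and 2
```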
Tip
Install the flox package to substantially improve the performance of GroupBy operations, particularly with dask. flox extends Xarray’s in-built GroupBy capabilities by allowing grouping by multiple variables, and lazy grouping by dask arrays. If installed, Xarray will automatically use flox by default.
Split#
Let’s create a simple example dataset:
In [1]: ds = xr.Dataset(
...: {"foo": (("x", "y"), np.random.rand(4, 3))},
...: coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
...: )
...:
In [2]: arr = ds["foo"]
In [3]: ds
Out[3]:
<xarray.Dataset> Size: 144B
Dimensions: (x: 4, y: 3)
Coordinates:
* x (x) int64 32B 10 20 30 40
letters (x) <U1 16B 'a' 'b' 'b' 'a'
Dimensions without coordinates: y
Data variables:
foo (x, y) float64 96B 0.127 0.9667 0.2605 0.8972 ... 0.543 0.373 0.448
If we groupby the name of a variable or coordinate in a dataset (we can also use a DataArray directly), we get back a GroupBy object:
In [4]: ds.groupby("letters")
Out[4]:
<DatasetGroupBy, grouped over 1 grouper(s), 2 groups in total:
'letters': 2/2 groups present with labels 'a', 'b'>
This object works very similarly to a pandas GroupBy object. You can view the group indices with the groups attribute:
In [5]: ds.groupby("letters").groups
Out[5]: {'a': [0, 3], 'b': [1, 2]}
You can also iterate over groups in (label, group) pairs:
In [6]: list(ds.groupby("letters"))
Out[6]:
[('a',
<xarray.Dataset> Size: 72B
Dimensions: (x: 2, y: 3)
Coordinates:
* x (x) int64 16B 10 40
letters (x) <U1 8B 'a' 'a'
Dimensions without coordinates: y
Data variables:
foo (x, y) float64 48B 0.127 0.9667 0.2605 0.543 0.373 0.448),
('b',
<xarray.Dataset> Size: 72B
Dimensions: (x: 2, y: 3)
Coordinates:
* x (x) int64 16B 20 30
letters (x) <U1 8B 'b' 'b'
Dimensions without coordinates: y
Data variables:
foo (x, y) float64 48B 0.8972 0.3767 0.3362 0.4514 0.8403 0.1231)]
You can index out a particular group:
In [7]: ds.groupby("letters")["b"]
Out[7]:
<xarray.Dataset> Size: 72B
Dimensions: (x: 2, y: 3)
Coordinates:
* x (x) int64 16B 20 30
letters (x) <U1 8B 'b' 'b'
Dimensions without coordinates: y
Data variables:
foo (x, y) float64 48B 0.8972 0.3767 0.3362 0.4514 0.8403 0.1231
To group by multiple variables, see the Grouping by multiple variables section below.
Binning#
Sometimes you don’t want to use all the unique values to determine the groups but instead want to “bin” the data into coarser groups. You could always create a customized coordinate, but Xarray facilitates this via the Dataset.groupby_bins() method.
In [8]: x_bins = [0, 25, 50]
In [9]: ds.groupby_bins("x", x_bins).groups
Out[9]:
{Interval(0, 25, closed='right'): [0, 1],
Interval(25, 50, closed='right'): [2, 3]}
The binning is implemented via pandas.cut(), whose documentation details how the bins are assigned. As seen in the example above, by default, the bins are labeled with strings using set notation to precisely identify the bin limits. To override this behavior, you can specify the bin labels explicitly. Here we choose float labels which identify the bin centers:
In [10]: x_bin_labels = [12.5, 37.5]
In [11]: ds.groupby_bins("x", x_bins, labels=x_bin_labels).groups
Out[11]: {np.float64(12.5): [0, 1], np.float64(37.5): [2, 3]}
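The labels propagate to the output coordinate when you reduce over the bins. A minimal sketch (the dataset here is illustrative, not the one constructed above):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"foo": (("x", "y"), np.arange(12.0).reshape(4, 3))},
    coords={"x": [10, 20, 30, 40]},
)
# reduce over the binned dimension; the float labels become the
# values of the new "x_bins" output coordinate
binned = ds.groupby_bins("x", [0, 25, 50], labels=[12.5, 37.5]).mean(dim="x")
```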
Apply#
To apply a function to each group, you can use the flexible core.groupby.DatasetGroupBy.map() method. The resulting objects are automatically concatenated back together along the group axis:
In [12]: def standardize(x):
....: return (x - x.mean()) / x.std()
....:
In [13]: arr.groupby("letters").map(standardize)
Out[13]:
<xarray.DataArray 'foo' (x: 4, y: 3)> Size: 96B
array([[-1.23 , 1.937, -0.726],
[ 1.42 , -0.46 , -0.607],
[-0.191, 1.214, -1.376],
[ 0.339, -0.302, -0.019]])
Coordinates:
* x (x) int64 32B 10 20 30 40
letters (x) <U1 16B 'a' 'b' 'b' 'a'
Dimensions without coordinates: y
GroupBy objects also have a core.groupby.DatasetGroupBy.reduce() method and methods like core.groupby.DatasetGroupBy.mean() as shortcuts for applying an aggregation function:
In [14]: arr.groupby("letters").mean(dim="x")
Out[14]:
<xarray.DataArray 'foo' (letters: 2, y: 3)> Size: 48B
array([[0.335, 0.67 , 0.354],
[0.674, 0.609, 0.23 ]])
Coordinates:
* letters (letters) object 16B 'a' 'b'
Dimensions without coordinates: y
Using a groupby is thus also a convenient shortcut for aggregating over all dimensions other than the provided one:
In [15]: ds.groupby("x").std(...)
Out[15]:
<xarray.Dataset> Size: 64B
Dimensions: (x: 4)
Coordinates:
* x (x) int64 32B 10 20 30 40
Data variables:
foo (x) float64 32B 0.3684 0.2554 0.2931 0.06957
Note
We use an ellipsis (…) here to indicate that we want to reduce over all other dimensions.
First and last#
There are two special aggregation operations that are currently only found on GroupBy objects: first and last. These return the first or last element of each group along the grouped dimension:
In [16]: ds.groupby("letters").first(...)
Out[16]:
<xarray.Dataset> Size: 64B
Dimensions: (letters: 2, y: 3)
Coordinates:
* letters (letters) object 16B 'a' 'b'
Dimensions without coordinates: y
Data variables:
foo (letters, y) float64 48B 0.127 0.9667 0.2605 0.8972 0.3767 0.3362
By default, they skip missing values (control this with skipna).
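A short sketch of skipna’s effect (the array here is illustrative, not the dataset above):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    [np.nan, 1.0, 2.0, 3.0],
    dims="x",
    coords={"letters": ("x", list("abba"))},
)
# with the default skipna behavior, the first non-missing value is
# returned: group 'a' holds [nan, 3.0], so its "first" value is 3.0
firsts = da.groupby("letters").first()
# with skipna=False, the leading NaN in group 'a' is kept
firsts_raw = da.groupby("letters").first(skipna=False)
```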
Grouped arithmetic#
GroupBy objects also support a limited set of binary arithmetic operations, as a shortcut for mapping over all unique labels. Binary arithmetic is supported for (GroupBy, Dataset) and (GroupBy, DataArray) pairs, as long as the dataset or data array uses the unique grouped values as one of its index coordinates. For example:
In [17]: alt = arr.groupby("letters").mean(...)
In [18]: alt
Out[18]:
<xarray.DataArray 'foo' (letters: 2)> Size: 16B
array([0.453, 0.504])
Coordinates:
* letters (letters) object 16B 'a' 'b'
In [19]: ds.groupby("letters") - alt
Out[19]:
<xarray.Dataset> Size: 144B
Dimensions: (x: 4, y: 3)
Coordinates:
* x (x) int64 32B 10 20 30 40
letters (x) <U1 16B 'a' 'b' 'b' 'a'
Dimensions without coordinates: y
Data variables:
foo (x, y) float64 96B -0.3261 0.5137 -0.1926 ... -0.08002 -0.005036
This last line is roughly equivalent to the following:
results = []
for label, group in ds.groupby('letters'):
    results.append(group - alt.sel(letters=label))
xr.concat(results, dim='x')
Multidimensional Grouping#
Many datasets have a multidimensional coordinate variable (e.g. longitude) which is different from the logical grid dimensions (e.g. nx, ny). Such variables are valid under the CF conventions. Xarray supports groupby operations over multidimensional coordinate variables:
In [20]: da = xr.DataArray(
....: [[0, 1], [2, 3]],
....: coords={
....: "lon": (["ny", "nx"], [[30, 40], [40, 50]]),
....: "lat": (["ny", "nx"], [[10, 10], [20, 20]]),
....: },
....: dims=["ny", "nx"],
....: )
....:
In [21]: da
Out[21]:
<xarray.DataArray (ny: 2, nx: 2)> Size: 32B
array([[0, 1],
[2, 3]])
Coordinates:
lon (ny, nx) int64 32B 30 40 40 50
lat (ny, nx) int64 32B 10 10 20 20
Dimensions without coordinates: ny, nx
In [22]: da.groupby("lon").sum(...)
Out[22]:
<xarray.DataArray (lon: 3)> Size: 24B
array([0, 3, 3])
Coordinates:
* lon (lon) int64 24B 30 40 50
In [23]: da.groupby("lon").map(lambda x: x - x.mean(), shortcut=False)
Out[23]:
<xarray.DataArray (ny: 2, nx: 2)> Size: 32B
array([[ 0. , -0.5],
[ 0.5, 0. ]])
Coordinates:
lon (ny, nx) int64 32B 30 40 40 50
lat (ny, nx) int64 32B 10 10 20 20
Dimensions without coordinates: ny, nx
Because multidimensional groups have the ability to generate a very large number of bins, coarse-binning via Dataset.groupby_bins() may be desirable:
In [24]: da.groupby_bins("lon", [0, 45, 50]).sum()
Out[24]:
<xarray.DataArray (lon_bins: 2)> Size: 16B
array([3, 3])
Coordinates:
* lon_bins (lon_bins) object 16B (0, 45] (45, 50]
These methods group by lon values. It is also possible to groupby each cell in a grid, regardless of value, by stacking multiple dimensions, applying your function, and then unstacking the result:
In [25]: stacked = da.stack(gridcell=["ny", "nx"])
In [26]: stacked.groupby("gridcell").sum(...).unstack("gridcell")
Out[26]:
<xarray.DataArray (ny: 2, nx: 2)> Size: 32B
array([[0, 1],
[2, 3]])
Coordinates:
* ny (ny) int64 16B 0 1
* nx (nx) int64 16B 0 1
lon (ny, nx) int64 32B 30 40 40 50
lat (ny, nx) int64 32B 10 10 20 20
Alternatively, you can groupby both lat and lon at the same time.
Grouper Objects#
Both groupby_bins and resample are specializations of the core groupby operation for binning and time resampling, respectively. Many problems demand more complex GroupBy application: for example, grouping by multiple variables with a combination of categorical grouping, binning, and resampling; or more specializations like spatial resampling; or more complex time grouping like special handling of seasons, or the ability to specify custom seasons. To handle these use-cases and more, Xarray is evolving to provide an extension point using Grouper objects.
Tip
See the grouper design doc for more detail on the motivation and design ideas behind Grouper objects.
For now Xarray provides three specialized Grouper objects:
- groupers.UniqueGrouper for categorical grouping
- groupers.BinGrouper for binned grouping
- groupers.TimeResampler for resampling along a datetime coordinate
These provide functionality identical to the existing groupby, groupby_bins, and resample methods. That is, ds.groupby("x") is identical to:
from xarray.groupers import UniqueGrouper
ds.groupby(x=UniqueGrouper())
Similarly, ds.groupby_bins("x", bins=bins) is identical to:
from xarray.groupers import BinGrouper
ds.groupby(x=BinGrouper(bins))
and ds.resample(time="ME") is identical to:
from xarray.groupers import TimeResampler
ds.resample(time=TimeResampler("ME"))
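This equivalence can be checked on a small example (a sketch with illustrative data; quarterly "QS" resampling is used here instead of "ME"):

```python
import numpy as np
import pandas as pd
import xarray as xr
from xarray.groupers import TimeResampler

ts = xr.DataArray(
    np.arange(6.0),
    dims="time",
    coords={"time": pd.date_range("2021-01-01", periods=6, freq="MS")},
)
# the two spellings produce identical results: quarterly sums
a = ts.resample(time="QS").sum()
b = ts.resample(time=TimeResampler("QS")).sum()
```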
The groupers.UniqueGrouper accepts an optional labels kwarg that is not present in DataArray.groupby() or Dataset.groupby(). Specifying labels is required when grouping by a lazy array type (e.g. dask or cubed). The labels are used to construct the output coordinate (say for a reduction), and aggregations will only be run over the specified labels. You may also use labels to specify the ordering of groups to be used during iteration. The order will be preserved in the output.
Grouping by multiple variables#
Use Grouper objects to group by multiple variables:
In [27]: from xarray.groupers import UniqueGrouper
In [28]: da.groupby(["lat", "lon"]).sum()
Out[28]:
<xarray.DataArray (lat: 2, lon: 3)> Size: 48B
array([[ 0., 1., nan],
[nan, 2., 3.]])
Coordinates:
* lat (lat) int64 16B 10 20
* lon (lon) int64 24B 30 40 50
The above is sugar for using UniqueGrouper objects directly:
In [29]: da.groupby(lat=UniqueGrouper(), lon=UniqueGrouper()).sum()
Out[29]:
<xarray.DataArray (lat: 2, lon: 3)> Size: 48B
array([[ 0., 1., nan],
[nan, 2., 3.]])
Coordinates:
* lat (lat) int64 16B 10 20
* lon (lon) int64 24B 30 40 50
Different groupers can be combined to construct sophisticated GroupBy operations.
In [30]: from xarray.groupers import BinGrouper
In [31]: ds.groupby(x=BinGrouper(bins=[5, 15, 25]), letters=UniqueGrouper()).sum()
Out[31]:
<xarray.Dataset> Size: 128B
Dimensions: (x_bins: 2, letters: 2, y: 3)
Coordinates:
* x_bins (x_bins) object 16B (5, 15] (15, 25]
* letters (letters) object 16B 'a' 'b'
Dimensions without coordinates: y
Data variables:
foo (y, x_bins, letters) float64 96B 0.127 nan nan ... nan nan 0.3362
Shuffling#
Shuffling is a generalization of sorting a DataArray or Dataset by another DataArray (named label, for example) that follows from the idea of grouping by label. Shuffling reorders the DataArray or the DataArrays in a Dataset such that all members of a group occur sequentially. For example, shuffle the object using either DatasetGroupBy.shuffle_to_chunks() or DataArrayGroupBy.shuffle_to_chunks() as appropriate:
In [32]: da = xr.DataArray(
....: dims="x",
....: data=[1, 2, 3, 4, 5, 6],
....: coords={"label": ("x", "a b c a b c".split(" "))},
....: )
....:
In [33]: da.groupby("label").shuffle_to_chunks()
Out[33]:
<xarray.DataArray (x: 6)> Size: 48B
array([1, 4, 2, 5, 3, 6])
Coordinates:
label (x) <U1 24B 'a' 'a' 'b' 'b' 'c' 'c'
Dimensions without coordinates: x
For chunked array types (e.g. dask or cubed), shuffle may result in a more optimized communication pattern when compared to direct indexing by the appropriate indexer. Shuffling also makes GroupBy operations on chunked arrays an embarrassingly parallel problem, and may significantly improve workloads that use DatasetGroupBy.map() or DataArrayGroupBy.map().