xarray.core.groupby.DataArrayGroupBy.mean

DataArrayGroupBy.mean(dim=None, *, skipna=None, keep_attrs=None, **kwargs)

Reduce this DataArray’s data by applying mean along some dimension(s).

Parameters
  • dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension(s) along which to apply mean, e.g. dim="x" or dim=["x", "y"]. If None, will reduce over the GroupBy dimensions. If "...", will reduce over all dimensions.

  • skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).

  • keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes (see the keep_attrs example below).

  • **kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating mean on this object’s data. These could include dask-specific kwargs like split_every.

Returns

reduced (DataArray) – New DataArray with mean applied to its data and the indicated dimension(s) removed

See also

numpy.mean, dask.array.mean, DataArray.mean

GroupBy: Group and Bin Data

User guide on groupby operations.

Notes

Use the flox package to significantly speed up groupby computations, especially with dask arrays. Xarray will use flox by default if installed. Pass flox-specific keyword arguments in **kwargs. See the flox documentation for more.
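
For instance, whether flox is used can be toggled globally through xarray's use_flox option (a minimal sketch; it assumes an xarray version that exposes this option):

>>> import xarray as xr
>>> _ = xr.set_options(use_flox=False)  # subsequent groupby reductions skip flox
>>> _ = xr.set_options(use_flox=True)  # restore the default (flox is used when installed)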

Non-numeric variables will be removed prior to reducing.

Examples

>>> import numpy as np
>>> import pandas as pd
>>> import xarray as xr
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="ME", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)> Size: 48B
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.groupby("labels").mean()
<xarray.DataArray (labels: 3)> Size: 24B
array([1. , 2. , 1.5])
Coordinates:
  * labels   (labels) object 24B 'a' 'b' 'c'

Use skipna to control whether NaNs are ignored.

>>> da.groupby("labels").mean(skipna=False)
<xarray.DataArray (labels: 3)> Size: 24B
array([nan, 2. , 1.5])
Coordinates:
  * labels   (labels) object 24B 'a' 'b' 'c'
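
Use keep_attrs to retain the original attributes on the result. A minimal sketch, using a copy of da with a hypothetical units attribute:

>>> da2 = da.copy()
>>> da2.attrs["units"] = "m"  # hypothetical attribute, added for illustration
>>> da2.groupby("labels").mean(keep_attrs=True).attrs
{'units': 'm'}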