xarray.core.groupby.DataArrayGroupBy.cumprod
DataArrayGroupBy.cumprod(dim=None, *, skipna=None, keep_attrs=None, **kwargs)
Reduce this DataArray's data by applying cumprod along some dimension(s).

Parameters
- dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply cumprod. For example, dim="x" or dim=["x", "y"]. If None, will reduce over the GroupBy dimensions. If "...", will reduce over all dimensions.
- skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
- keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
- **kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating cumprod on this object's data. These could include dask-specific kwargs like split_every.
Returns
- reduced (DataArray) – New DataArray with cumprod applied to its data and the indicated dimension(s) removed.
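For illustration only (a minimal sketch, not part of the upstream docstring; the attrs dict is invented): leaving dim at its default accumulates along the GroupBy dimension within each group, and keep_attrs=True carries the original attributes over to the result.

>>> import numpy as np
>>> import xarray as xr
>>> da = xr.DataArray(
...     np.array([1.0, 2.0, 3.0, 0.0, 2.0, np.nan]),
...     dims="time",
...     coords=dict(labels=("time", ["a", "b", "c", "c", "b", "a"])),
...     attrs={"units": "1"},  # invented attribute, purely illustrative
... )
>>> # dim=None (default): accumulate along the GroupBy dimension within each group.
>>> # keep_attrs=True: copy attrs from da onto the result.
>>> out = da.groupby("labels").cumprod(keep_attrs=True)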
See also
numpy.cumprod, dask.array.cumprod, DataArray.cumprod, DataArray.cumulative
- GroupBy: Group and Bin Data – User guide on groupby operations.
Notes
Use the flox package to significantly speed up groupby computations, especially with dask arrays. Xarray will use flox by default if installed. Pass flox-specific keyword arguments in **kwargs. See the flox documentation for more.

Non-numeric variables will be removed prior to reducing.
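As a rough illustration (not part of the upstream docstring): the use_flox option in xr.set_options controls whether xarray dispatches groupby machinery to flox when flox is installed; da below is the array defined in the Examples section.

>>> import xarray as xr
>>> # Explicitly opt in to flox (the default when flox is installed) ...
>>> with xr.set_options(use_flox=True):
...     with_flox = da.groupby("labels").cumprod()
>>> # ... or fall back to xarray's built-in numpy/dask groupby path.
>>> with xr.set_options(use_flox=False):
...     without_flox = da.groupby("labels").cumprod()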
Note that the methods on the object returned by cumulative are more performant (with numbagg installed) and better supported. cumsum and cumprod may be deprecated in the future.
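A minimal sketch of that alternative (not from the upstream docstring; it assumes a recent xarray in which the rolling-style object returned by DataArray.cumulative exposes a prod() method):

>>> import numpy as np
>>> import xarray as xr
>>> arr = xr.DataArray(np.array([1.0, 2.0, 3.0, 4.0]), dims="x")
>>> # Cumulative product along "x" via the cumulative interface;
>>> # with numbagg installed this path is typically faster.
>>> cp = arr.cumulative("x").prod()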
Examples

>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="ME", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)> Size: 48B
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.groupby("labels").cumprod()
<xarray.DataArray (time: 6)> Size: 48B
array([1., 2., 3., 0., 4., 1.])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'
Use skipna to control whether NaNs are ignored.

>>> da.groupby("labels").cumprod(skipna=False)
<xarray.DataArray (time: 6)> Size: 48B
array([ 1.,  2.,  3.,  0.,  4., nan])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'