xarray.DataArray.cumsum

DataArray.cumsum(dim=None, *, skipna=None, keep_attrs=None, **kwargs)

Reduce this DataArray’s data by applying cumsum along some dimension(s).

Parameters
  • dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply cumsum. For example, dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions. (A short sketch after this parameter list shows dim and keep_attrs together.)

  • skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).

  • keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.

  • **kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating cumsum on this object’s data. These could include dask-specific kwargs like split_every.
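
As a hedged sketch (not part of the reference itself; the array, the dimension names and the units attribute are made up for illustration), dim selects the axis to accumulate along and keep_attrs controls whether attrs survive the reduction:

>>> arr = xr.DataArray(
...     np.arange(6).reshape(2, 3),
...     dims=("x", "y"),
...     attrs={"units": "m"},
... )
>>> arr.cumsum(dim="y", keep_attrs=True).attrs
{'units': 'm'}
>>> arr.cumsum(dim="y", keep_attrs=True).values
array([[ 0,  1,  3],
       [ 3,  7, 12]])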

Returns

reduced (DataArray) – New DataArray with cumsum applied to its data; the dimension(s) accumulated over are retained, as the examples below show.

See also

numpy.cumsum, dask.array.cumsum, Dataset.cumsum, DataArray.cumulative

Aggregation

User guide on reduction or aggregation operations.

Notes

Non-numeric variables will be removed prior to reducing.

Note that the methods on the object returned by cumulative are more performant (with numbagg installed) and better supported; cumsum and cumprod may be deprecated in the future. An equivalent call through cumulative is sketched at the end of the Examples section.

Examples

>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="ME", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)> Size: 48B
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.cumsum()
<xarray.DataArray (time: 6)> Size: 48B
array([1., 3., 6., 6., 8., 8.])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'

Use skipna to control whether NaNs are ignored.

>>> da.cumsum(skipna=False)
<xarray.DataArray (time: 6)> Size: 48B
array([ 1.,  3.,  6.,  6.,  8., nan])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'
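
As noted above, the cumulative method offers the same reduction through a better-supported path. A hedged sketch of the equivalent call, reusing the da defined above and showing only the raw values; this assumes cumulative's default behaviour skips NaN the same way the default cumsum does:

>>> da.cumulative("time").sum().values
array([1., 3., 6., 6., 8., 8.])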