xarray.core.groupby.DataArrayGroupBy.prod

DataArrayGroupBy.prod(dim=None, *, skipna=None, min_count=None, keep_attrs=None, **kwargs)
Reduce this DataArray’s data by applying prod along some dimension(s).

Parameters
    dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply prod, e.g. dim="x" or dim=["x", "y"]. If None, will reduce over the GroupBy dimensions. If "...", will reduce over all dimensions.

    skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).

    min_count (int or None, optional) – The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Only used if skipna is set to True or defaults to True for the array’s dtype. Changed in version 0.17.0: if specified on an integer array and skipna=True, the result will be a float array.

    keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.

    **kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating prod on this object’s data. These could include dask-specific kwargs like split_every.
Returns
    reduced (DataArray) – New DataArray with prod applied to its data and the indicated dimension(s) removed.
See also
numpy.prod, dask.array.prod, DataArray.prod

GroupBy: Group and Bin Data
    User guide on groupby operations.
Notes
Use the flox package to significantly speed up groupby computations, especially with dask arrays. Xarray will use flox by default if installed. Pass flox-specific keyword arguments in **kwargs. See the flox documentation for more.

Non-numeric variables will be removed prior to reducing.
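The following sketch is not part of the upstream reference; it illustrates the note above with a dask-backed array, using the same xr/np import conventions as the Examples below. It assumes dask (and optionally flox) is installed; the name arr and the chunk size are arbitrary illustrative choices, and the reduction stays lazy until compute() is called.

>>> arr = xr.DataArray(
...     np.arange(1.0, 7.0),
...     dims="time",
...     coords=dict(labels=("time", ["a", "b", "c", "c", "b", "a"])),
... ).chunk({"time": 3})  # dask-backed DataArray
>>> lazy = arr.groupby("labels").prod()  # still lazy; flox is used automatically if installed
>>> lazy.compute()  # groups 'a', 'b', 'c' -> products 6., 10., 12.

Dask-specific keyword arguments such as split_every would be forwarded to the underlying reduction through **kwargs, as described in the Parameters section above.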
Examples
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="ME", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)> Size: 48B
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 48B 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 24B 'a' 'b' 'c' 'c' 'b' 'a'

>>> da.groupby("labels").prod()
<xarray.DataArray (labels: 3)> Size: 24B
array([1., 4., 0.])
Coordinates:
  * labels   (labels) object 24B 'a' 'b' 'c'

Use skipna to control whether NaNs are ignored.

>>> da.groupby("labels").prod(skipna=False)
<xarray.DataArray (labels: 3)> Size: 24B
array([nan,  4.,  0.])
Coordinates:
  * labels   (labels) object 24B 'a' 'b' 'c'

Specify min_count for finer control over when NaNs are ignored.

>>> da.groupby("labels").prod(skipna=True, min_count=2)
<xarray.DataArray (labels: 3)> Size: 24B
array([nan,  4.,  0.])
Coordinates:
  * labels   (labels) object 24B 'a' 'b' 'c'