API reference#

This page provides an auto-generated summary of xarray’s API. For more details and examples, refer to the relevant chapters in the main part of the documentation.

See also: What parts of xarray are considered public API?

Top-level functions#

apply_ufunc(func, *args[, input_core_dims, ...])

Apply a vectorized function for unlabeled arrays on xarray objects.
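A minimal sketch (array values are illustrative): applying a NumPy ufunc while preserving xarray labels.

```python
import numpy as np
import xarray as xr

# A labeled 1-D array; apply_ufunc keeps its dims and coords intact.
da = xr.DataArray(np.array([1.0, 4.0, 9.0]), dims="x")
result = xr.apply_ufunc(np.sqrt, da)
```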

align(*objects[, join, copy, indexes, ...])

Given any number of Dataset and/or DataArray objects, returns new objects with aligned indexes and dimension sizes.
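For example (labels chosen for illustration), an inner join keeps only the index labels shared by both objects:

```python
import xarray as xr

# Two arrays with partially overlapping "x" labels.
a = xr.DataArray([1, 2, 3], dims="x", coords={"x": [0, 1, 2]})
b = xr.DataArray([10, 20, 30], dims="x", coords={"x": [1, 2, 3]})
# Inner join: only x=1 and x=2 appear in both.
a2, b2 = xr.align(a, b, join="inner")
```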

broadcast(*args[, exclude])

Explicitly broadcast any number of DataArray or Dataset objects against one another.

concat(objs, dim[, data_vars, coords, ...])

Concatenate xarray objects along a new or existing dimension.
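A minimal example concatenating two arrays along an existing dimension:

```python
import xarray as xr

first = xr.DataArray([1, 2], dims="x")
second = xr.DataArray([3, 4], dims="x")
# Concatenate along the existing "x" dimension.
combined = xr.concat([first, second], dim="x")
```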

merge(objects[, compat, join, fill_value, ...])

Merge any number of xarray objects into a single Dataset as variables.
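A small sketch (variable names are illustrative): named DataArrays become data variables of the merged Dataset.

```python
import xarray as xr

temperature = xr.DataArray([15.0, 16.5], dims="time", name="temperature")
pressure = xr.DataArray([1012.0, 1009.5], dims="time", name="pressure")
# Merge the two named arrays into one Dataset.
ds = xr.merge([temperature, pressure])
```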

combine_by_coords([data_objects, compat, ...])

Attempt to auto-magically combine the given datasets (or data arrays) into one by using dimension coordinates.

combine_nested(datasets, concat_dim[, ...])

Explicitly combine an N-dimensional grid of datasets into one by using a succession of concat and merge operations along each dimension of the grid.

where(cond, x, y[, keep_attrs])

Return elements from x or y depending on cond.
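For example, cond can be used to keep some elements and substitute the rest:

```python
import xarray as xr

da = xr.DataArray([-2, -1, 0, 1, 2], dims="x")
# Keep non-negative values and replace the rest with 0.
clipped = xr.where(da >= 0, da, 0)
```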

infer_freq(index)

Infer the most likely frequency given the input index.

full_like(other, fill_value[, dtype, ...])

Return a new object with the same shape and type as a given object.

zeros_like(other[, dtype, chunks, ...])

Return a new object of zeros with the same shape and type as a given DataArray or Dataset.

ones_like(other[, dtype, chunks, ...])

Return a new object of ones with the same shape and type as a given DataArray or Dataset.
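A quick sketch of the *_like family (shapes are illustrative): the new objects reuse the template's shape, dims, and coords.

```python
import numpy as np
import xarray as xr

template = xr.DataArray(np.arange(4.0).reshape(2, 2), dims=("y", "x"))
zeros = xr.zeros_like(template)
filled = xr.full_like(template, fill_value=-1.0)
```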

cov(da_a, da_b[, dim, ddof, weights])

Compute covariance between two DataArray objects along a shared dimension.

corr(da_a, da_b[, dim, weights])

Compute the Pearson correlation coefficient between two DataArray objects along a shared dimension.
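For example, two perfectly linearly related arrays have a correlation of 1:

```python
import xarray as xr

da_a = xr.DataArray([1.0, 2.0, 3.0], dims="t")
da_b = xr.DataArray([2.0, 4.0, 6.0], dims="t")
# Pearson correlation along the shared "t" dimension.
r = xr.corr(da_a, da_b, dim="t")
```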

cross(a, b, *, dim)

Compute the cross product of two (arrays of) vectors.

dot(*arrays[, dim])

Generalized dot product for xarray objects.
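A minimal sketch contracting two 1-D arrays over their shared dimension (1*4 + 2*5 + 3*6 = 32):

```python
import xarray as xr

a = xr.DataArray([1, 2, 3], dims="x")
b = xr.DataArray([4, 5, 6], dims="x")
# Contract over the shared dimension "x".
result = xr.dot(a, b, dim="x")
```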

polyval(coord, coeffs[, degree_dim])

Evaluate a polynomial at specific values.
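A small sketch, assuming coefficients indexed by the default "degree" dimension whose coordinate values are the powers; here p(x) = 1 + 2x:

```python
import xarray as xr

x = xr.DataArray([0.0, 1.0, 2.0], dims="x")
# Coefficient for degree 0 is 1.0, for degree 1 is 2.0.
coeffs = xr.DataArray([1.0, 2.0], dims="degree", coords={"degree": [0, 1]})
values = xr.polyval(x, coeffs)
```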

map_blocks(func, obj[, args, kwargs, template])

Apply a function to each block of a DataArray or Dataset.

show_versions([file])

Print the versions of xarray and its dependencies.

set_options(**kwargs)

Set options for xarray in a controlled context.
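Used as a context manager, an option reverts to its previous value on exit:

```python
import xarray as xr

with xr.set_options(display_width=40):
    inside = xr.get_options()["display_width"]
# Outside the context, the option is restored.
outside = xr.get_options()["display_width"]
```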

get_options()

Get options for xarray.

unify_chunks(*objects)

Given any number of Dataset and/or DataArray objects, returns new objects with unified chunk size along all chunked dimensions.

Dataset#

Creating a dataset#

Dataset([data_vars, coords, attrs])

A multi-dimensional, in-memory array database.
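A minimal constructor sketch (variable and coordinate names are illustrative):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    data_vars={
        "temperature": (("time", "city"), np.array([[15.0, 20.0], [16.0, 21.0]]))
    },
    coords={"time": [0, 1], "city": ["oslo", "rome"]},
    attrs={"units": "degC"},
)
```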

decode_cf(obj[, concat_characters, ...])

Decode the given Dataset or Datastore according to CF conventions into a new Dataset.

Attributes#

Dataset.dims

Mapping from dimension names to lengths.

Dataset.sizes

Mapping from dimension names to lengths.

Dataset.dtypes

Mapping from data variable names to dtypes.

Dataset.data_vars

Dictionary of DataArray objects corresponding to data variables.

Dataset.coords

Mapping of DataArray objects corresponding to coordinate variables.

Dataset.attrs

Dictionary of global attributes on this dataset.

Dataset.encoding

Dictionary of global encoding attributes on this dataset.

Dataset.indexes

Mapping of pandas.Index objects used for label based indexing.

Dataset.chunks

Mapping from dimension names to block lengths for this dataset's data, or None if the underlying data is not a dask array.

Dataset.chunksizes

Mapping from dimension names to block lengths for this dataset's data, or None if the underlying data is not a dask array.

Dataset.nbytes

Total bytes consumed by the data arrays of all variables in this dataset.

Dictionary interface#

Datasets implement the mapping interface with keys given by variable names and values given by DataArray objects.

Dataset.__getitem__(key)

Access variables or coordinates of this dataset as a DataArray, a subset of variables, or an indexed dataset.

Dataset.__setitem__(key, value)

Add an array to this dataset.

Dataset.__delitem__(key)

Remove a variable from this dataset.

Dataset.update(other)

Update this dataset's variables with those from another dataset.

Dataset.get(k[, d])

Dataset.items()

Dataset.keys()

Dataset.values()
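The dictionary interface above can be sketched in a few lines:

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2, 3])})
ds["b"] = ("x", [4, 5, 6])   # __setitem__: add a variable
names = list(ds.keys())      # data variable names, like dict.keys()
sub = ds["a"]                # __getitem__: returns a DataArray
del ds["b"]                  # __delitem__: remove a variable
```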

Dataset contents#

Dataset.copy([deep, data])

Returns a copy of this dataset.

Dataset.assign([variables])

Assign new data variables to a Dataset, returning a new object with all the original variables in addition to the new ones.

Dataset.assign_coords([coords])

Assign new coordinates to this object.

Dataset.assign_attrs(*args, **kwargs)

Assign new attrs to this object.

Dataset.pipe(func, *args, **kwargs)

Apply func(self, *args, **kwargs).

Dataset.merge(other[, overwrite_vars, ...])

Merge the arrays of two datasets into a single dataset.

Dataset.rename([name_dict])

Returns a new object with renamed variables, coordinates and dimensions.

Dataset.rename_vars([name_dict])

Returns a new object with renamed variables, including coordinates.

Dataset.rename_dims([dims_dict])

Returns a new object with renamed dimensions only.

Dataset.swap_dims([dims_dict])

Returns a new object with swapped dimensions.

Dataset.expand_dims([dim, axis, ...])

Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape.
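A short sketch combining rename and expand_dims (names are illustrative):

```python
import xarray as xr

ds = xr.Dataset({"temp": ("t", [1, 2])})
renamed = ds.rename({"temp": "temperature", "t": "time"})
# expand_dims inserts a new dimension of size 1.
widened = renamed.expand_dims("ensemble")
```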

Dataset.drop_vars(names, *[, errors])

Drop variables from this dataset.

Dataset.drop_indexes(coord_names, *[, errors])

Drop the indexes assigned to the given coordinates.

Dataset.drop_duplicates(dim, *[, keep])

Returns a new Dataset with duplicate dimension values removed.

Dataset.drop_dims(drop_dims, *[, errors])

Drop dimensions and associated variables from this dataset.

Dataset.drop_encoding()

Return a new Dataset without encoding on the dataset or any of its variables/coords.

Dataset.drop_attrs(*[, deep])

Removes all attributes from the Dataset and its variables.

Dataset.set_coords(names)

Given names of one or more variables, set them as coordinates.

Dataset.reset_coords([names, drop])

Given names of coordinates, reset them to become variables.

Dataset.convert_calendar(calendar[, dim, ...])

Convert the Dataset to another calendar.

Dataset.interp_calendar(target[, dim])

Interpolates the Dataset to another calendar based on decimal year measure.

Dataset.get_index(key)

Get an index for a dimension, with fall-back to a default RangeIndex.

Comparisons#

Dataset.equals(other)

Two Datasets are equal if they have matching variables and coordinates, all of which are equal.

Dataset.identical(other)

Like equals, but also checks all dataset attributes and the attributes on all variables and coordinates.

Dataset.broadcast_equals(other)

Two Datasets are broadcast equal if they are equal after broadcasting all variables against each other.
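The difference between equals and identical in a minimal sketch:

```python
import xarray as xr

a = xr.Dataset({"v": ("x", [1, 2])}, attrs={"note": "original"})
b = a.copy()
b.attrs["note"] = "changed"
same_values = a.equals(b)      # attributes are ignored
fully_same = a.identical(b)    # attributes are compared too
```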

Indexing#

Dataset.loc

Attribute for location based indexing.

Dataset.isel([indexers, drop, missing_dims])

Returns a new dataset with each array indexed along the specified dimension(s).

Dataset.sel([indexers, method, tolerance, drop])

Returns a new dataset with each array indexed by tick labels along the specified dimension(s).
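isel selects by integer position, sel by coordinate label:

```python
import xarray as xr

ds = xr.Dataset(
    {"t": ("time", [10.0, 11.0, 12.0])},
    coords={"time": [2000, 2001, 2002]},
)
by_position = ds.isel(time=0)   # integer position
by_label = ds.sel(time=2001)    # coordinate label
```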

Dataset.drop_sel([labels, errors])

Drop index labels from this dataset.

Dataset.drop_isel([indexers])

Drop index positions from this Dataset.

Dataset.head([indexers])

Returns a new dataset with the first n values of each array for the specified dimension(s).

Dataset.tail([indexers])

Returns a new dataset with the last n values of each array for the specified dimension(s).

Dataset.thin([indexers])

Returns a new dataset with each array indexed along every n-th value for the specified dimension(s).

Dataset.squeeze([dim, drop, axis])

Return a new object with squeezed data.

Dataset.interp([coords, method, ...])

Interpolate a Dataset onto new coordinates.

Dataset.interp_like(other[, method, ...])

Interpolate this object onto the coordinates of another object, filling the out of range values with NaN.

Dataset.reindex([indexers, method, ...])

Conform this object onto a new set of indexes, filling in missing values with fill_value.
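For example, labels absent from the original index are filled with fill_value:

```python
import xarray as xr

ds = xr.Dataset({"v": ("x", [1.0, 2.0])}, coords={"x": [0, 1]})
# The new label x=2 has no data, so it is filled with fill_value.
expanded = ds.reindex(x=[0, 1, 2], fill_value=-1.0)
```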

Dataset.reindex_like(other[, method, ...])

Conform this object onto the indexes of another object, for indexes which the objects share.

Dataset.set_index([indexes, append])

Set Dataset (multi-)indexes using one or more existing coordinates or variables.

Dataset.reset_index(dims_or_levels, *[, drop])

Reset the specified index(es) or multi-index level(s).

Dataset.set_xindex(coord_names[, index_cls])

Set a new, Xarray-compatible index from one or more existing coordinate(s).

Dataset.reorder_levels([dim_order])

Rearrange index levels using input order.

Dataset.query([queries, parser, engine, ...])

Return a new dataset with each array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the data variables in the dataset.

Missing value handling#

Dataset.isnull([keep_attrs])

Test each value in the array for whether it is a missing value.

Dataset.notnull([keep_attrs])

Test each value in the array for whether it is not a missing value.

Dataset.combine_first(other)

Combine two Datasets, default to data_vars of self.

Dataset.count([dim, keep_attrs])

Reduce this Dataset's data by applying count along some dimension(s).

Dataset.dropna(dim, *[, how, thresh, subset])

Returns a new dataset with dropped labels for missing values along the provided dimension.

Dataset.fillna(value)

Fill missing values in this object.

Dataset.ffill(dim[, limit])

Fill NaN values by propagating values forward.

Dataset.bfill(dim[, limit])

Fill NaN values by propagating values backward.

Dataset.interpolate_na([dim, method, limit, ...])

Fill in NaNs by interpolating according to different methods.

Dataset.where(cond[, other, drop])

Filter elements from this object according to a condition.

Dataset.isin(test_elements)

Tests each value in the array for whether it is in test elements.
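Two common strategies from this section, sketched on a tiny array:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"v": ("x", [1.0, np.nan, 3.0])})
filled = ds.fillna(0.0)        # replace NaN with a constant
dropped = ds.dropna(dim="x")   # drop positions containing NaN
```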

Computation#

Dataset.map(func[, keep_attrs, args])

Apply a function to each data variable in this dataset.

Dataset.reduce(func[, dim, keep_attrs, ...])

Reduce this dataset by applying func along some dimension(s).

Dataset.groupby([group, squeeze, ...])

Returns a DatasetGroupBy object for performing grouped operations.

Dataset.groupby_bins(group, bins[, right, ...])

Returns a DatasetGroupBy object for performing grouped operations.
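A minimal split-apply-combine sketch (coordinate names are illustrative):

```python
import xarray as xr

ds = xr.Dataset(
    {"value": ("sample", [1.0, 2.0, 10.0, 20.0])},
    coords={"label": ("sample", ["a", "a", "b", "b"])},
)
# Group by the "label" coordinate and average within each group.
means = ds.groupby("label").mean()
```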

Dataset.rolling([dim, min_periods, center])

Rolling window object for Datasets.

Dataset.rolling_exp([window, window_type])

Exponentially-weighted moving window.

Dataset.cumulative(dim[, min_periods])

Accumulating object for Datasets.

Dataset.weighted(weights)

Weighted Dataset operations.

Dataset.coarsen([dim, boundary, side, ...])

Coarsen object for Datasets.

Dataset.resample([indexer, skipna, closed, ...])

Returns a Resample object for performing resampling operations.

Dataset.diff(dim[, n, label])

Calculate the n-th order discrete difference along given axis.

Dataset.quantile(q[, dim, method, ...])

Compute the qth quantile of the data along the specified dimension.

Dataset.differentiate(coord[, edge_order, ...])

Differentiate with the second order accurate central differences.

Dataset.integrate(coord[, datetime_unit])

Integrate along the given coordinate using the trapezoidal rule.

Dataset.map_blocks(func[, args, kwargs, ...])

Apply a function to each block of this Dataset.

Dataset.polyfit(dim, deg[, skipna, rcond, ...])

Least squares polynomial fit.

Dataset.curvefit(coords, func[, ...])

Curve fitting optimization for arbitrary functions.

Dataset.eval(statement, *[, parser])

Calculate an expression supplied as a string in the context of the dataset.

Aggregation#

Dataset.all([dim, keep_attrs])

Reduce this Dataset's data by applying all along some dimension(s).

Dataset.any([dim, keep_attrs])

Reduce this Dataset's data by applying any along some dimension(s).

Dataset.argmax([dim])

Indices of the maxima of the member variables.

Dataset.argmin([dim])

Indices of the minima of the member variables.

Dataset.count([dim, keep_attrs])

Reduce this Dataset's data by applying count along some dimension(s).

Dataset.idxmax([dim, skipna, fill_value, ...])

Return the coordinate label of the maximum value along a dimension.

Dataset.idxmin([dim, skipna, fill_value, ...])

Return the coordinate label of the minimum value along a dimension.

Dataset.max([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying max along some dimension(s).

Dataset.min([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying min along some dimension(s).

Dataset.mean([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying mean along some dimension(s).

Dataset.median([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying median along some dimension(s).

Dataset.prod([dim, skipna, min_count, ...])

Reduce this Dataset's data by applying prod along some dimension(s).

Dataset.sum([dim, skipna, min_count, keep_attrs])

Reduce this Dataset's data by applying sum along some dimension(s).

Dataset.std([dim, skipna, ddof, keep_attrs])

Reduce this Dataset's data by applying std along some dimension(s).

Dataset.var([dim, skipna, ddof, keep_attrs])

Reduce this Dataset's data by applying var along some dimension(s).

Dataset.cumsum([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying cumsum along some dimension(s).

Dataset.cumprod([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying cumprod along some dimension(s).
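The aggregations above all follow the same pattern; omitting dim reduces over every dimension:

```python
import xarray as xr

ds = xr.Dataset({"v": (("y", "x"), [[1.0, 2.0], [3.0, 4.0]])})
total = ds.sum()               # reduce over all dimensions
per_column = ds.mean(dim="y")  # reduce over "y" only
```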

ndarray methods#

Dataset.argsort([axis, kind, order])

Returns the indices that would sort this array.

Dataset.astype(dtype, *[, order, casting, ...])

Copy of the xarray object, with data cast to a specified type.

Dataset.clip([min, max, keep_attrs])

Return an array whose values are limited to [min, max].

Dataset.conj()

Complex-conjugate all elements.

Dataset.conjugate()

Return the complex conjugate, element-wise.

Dataset.imag

The imaginary part of each data variable.

Dataset.round(*args, **kwargs)

Dataset.real

The real part of each data variable.

Dataset.rank(dim, *[, pct, keep_attrs])

Ranks the data.

Reshaping and reorganizing#

Dataset.transpose(*dim[, missing_dims])

Return a new Dataset object with all array dimensions transposed.

Dataset.stack([dim, create_index, index_cls])

Stack any number of existing dimensions into a single new dimension.

Dataset.unstack([dim, fill_value, sparse])

Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.
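A stack/unstack round trip on a small 2x2 dataset:

```python
import xarray as xr

ds = xr.Dataset(
    {"v": (("y", "x"), [[1, 2], [3, 4]])},
    coords={"y": [0, 1], "x": ["a", "b"]},
)
stacked = ds.stack(z=("y", "x"))   # one MultiIndexed dimension "z"
roundtrip = stacked.unstack("z")
```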

Dataset.to_stacked_array(new_dim, sample_dims)

Combine variables of differing dimensionality into a DataArray without broadcasting.

Dataset.shift([shifts, fill_value])

Shift this dataset by an offset along one or more dimensions.

Dataset.roll([shifts, roll_coords])

Roll this dataset by an offset along one or more dimensions.

Dataset.pad([pad_width, mode, stat_length, ...])

Pad this dataset along one or more dimensions.

Dataset.sortby(variables[, ascending])

Sort object by labels or values (along an axis).

Dataset.broadcast_like(other[, exclude])

Broadcast this DataArray against another Dataset or DataArray.

DataArray#

DataArray([data, coords, dims, name, attrs, ...])

N-dimensional array with labeled coordinates and dimensions.
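A minimal constructor sketch (dimension and coordinate names are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=("y", "x"),
    coords={"y": [10, 20], "x": ["a", "b", "c"]},
    name="example",
)
```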

Attributes#

DataArray.values

The array's data converted to numpy.ndarray.

DataArray.data

The DataArray's data as an array.

DataArray.coords

Mapping of DataArray objects corresponding to coordinate variables.

DataArray.dims

Tuple of dimension names associated with this array.

DataArray.sizes

Ordered mapping from dimension names to lengths.

DataArray.name

The name of this array.

DataArray.attrs

Dictionary storing arbitrary metadata with this array.

DataArray.encoding

Dictionary of format-specific settings for how this array should be serialized.

DataArray.indexes

Mapping of pandas.Index objects used for label based indexing.

DataArray.chunksizes

Mapping from dimension names to block lengths for this dataarray's data, or None if the underlying data is not a dask array.

ndarray attributes#

DataArray.ndim

Number of array dimensions.

DataArray.nbytes

Total bytes consumed by the elements of this DataArray's data.

DataArray.shape

Tuple of array dimensions.

DataArray.size

Number of elements in the array.

DataArray.dtype

Data-type of the array’s elements.

DataArray.chunks

Tuple of block lengths for this dataarray's data, in order of dimensions, or None if the underlying data is not a dask array.

DataArray contents#

DataArray.assign_coords([coords])

Assign new coordinates to this object.

DataArray.assign_attrs(*args, **kwargs)

Assign new attrs to this object.

DataArray.pipe(func, *args, **kwargs)

Apply func(self, *args, **kwargs).

DataArray.rename([new_name_or_name_dict])

Returns a new DataArray with renamed coordinates, dimensions or a new name.

DataArray.swap_dims([dims_dict])

Returns a new DataArray with swapped dimensions.

DataArray.expand_dims([dim, axis, ...])

Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape.

DataArray.drop_vars(names, *[, errors])

Returns an array with dropped variables.

DataArray.drop_indexes(coord_names, *[, errors])

Drop the indexes assigned to the given coordinates.

DataArray.drop_duplicates(dim, *[, keep])

Returns a new DataArray with duplicate dimension values removed.

DataArray.drop_encoding()

Return a new DataArray without encoding on the array or any attached coords.

DataArray.drop_attrs(*[, deep])

Removes all attributes from the DataArray.

DataArray.reset_coords([names, drop])

Given names of coordinates, reset them to become variables.

DataArray.copy([deep, data])

Returns a copy of this array.

DataArray.convert_calendar(calendar[, dim, ...])

Convert the DataArray to another calendar.

DataArray.interp_calendar(target[, dim])

Interpolates the DataArray to another calendar based on decimal year measure.

DataArray.get_index(key)

Get an index for a dimension, with fall-back to a default RangeIndex.

DataArray.astype(dtype, *[, order, casting, ...])

Copy of the xarray object, with data cast to a specified type.

DataArray.item(*args)

Copy an element of an array to a standard Python scalar and return it.

Indexing#

DataArray.__getitem__(key)

DataArray.__setitem__(key, value)

DataArray.loc

Attribute for location based indexing like pandas.

DataArray.isel([indexers, drop, missing_dims])

Return a new DataArray whose data is given by selecting indexes along the specified dimension(s).

DataArray.sel([indexers, method, tolerance, ...])

Return a new DataArray whose data is given by selecting index labels along the specified dimension(s).

DataArray.drop_sel([labels, errors])

Drop index labels from this DataArray.

DataArray.drop_isel([indexers])

Drop index positions from this DataArray.

DataArray.head([indexers])

Return a new DataArray whose data is given by the first n values along the specified dimension(s).

DataArray.tail([indexers])

Return a new DataArray whose data is given by the last n values along the specified dimension(s).

DataArray.thin([indexers])

Return a new DataArray whose data is given by every n-th value along the specified dimension(s).

DataArray.squeeze([dim, drop, axis])

Return a new object with squeezed data.

DataArray.interp([coords, method, ...])

Interpolate a DataArray onto new coordinates.

DataArray.interp_like(other[, method, ...])

Interpolate this object onto the coordinates of another object, filling out of range values with NaN.

DataArray.reindex([indexers, method, ...])

Conform this object onto a new set of indexes, filling in missing values with fill_value.

DataArray.reindex_like(other, *[, method, ...])

Conform this object onto the indexes of another object, for indexes which the objects share.

DataArray.set_index([indexes, append])

Set DataArray (multi-)indexes using one or more existing coordinates.

DataArray.reset_index(dims_or_levels[, drop])

Reset the specified index(es) or multi-index level(s).

DataArray.set_xindex(coord_names[, index_cls])

Set a new, Xarray-compatible index from one or more existing coordinate(s).

DataArray.reorder_levels([dim_order])

Rearrange index levels using input order.

DataArray.query([queries, parser, engine, ...])

Return a new data array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the values in the array.

Missing value handling#

DataArray.isnull([keep_attrs])

Test each value in the array for whether it is a missing value.

DataArray.notnull([keep_attrs])

Test each value in the array for whether it is not a missing value.

DataArray.combine_first(other)

Combine two DataArray objects, with union of coordinates.

DataArray.count([dim, keep_attrs])

Reduce this DataArray's data by applying count along some dimension(s).

DataArray.dropna(dim, *[, how, thresh])

Returns a new array with dropped labels for missing values along the provided dimension.

DataArray.fillna(value)

Fill missing values in this object.

DataArray.ffill(dim[, limit])

Fill NaN values by propagating values forward.

DataArray.bfill(dim[, limit])

Fill NaN values by propagating values backward.

DataArray.interpolate_na([dim, method, ...])

Fill in NaNs by interpolating according to different methods.

DataArray.where(cond[, other, drop])

Filter elements from this object according to a condition.

DataArray.isin(test_elements)

Tests each value in the array for whether it is in test elements.

Comparisons#

DataArray.equals(other)

True if two DataArrays have the same dimensions, coordinates and values; otherwise False.

DataArray.identical(other)

Like equals, but also checks the array name and attributes, and attributes on all coordinates.

DataArray.broadcast_equals(other)

Two DataArrays are broadcast equal if they are equal after broadcasting them against each other such that they have the same dimensions.

Computation#

DataArray.reduce(func[, dim, axis, ...])

Reduce this array by applying func along some dimension(s).

DataArray.groupby([group, squeeze, ...])

Returns a DataArrayGroupBy object for performing grouped operations.

DataArray.groupby_bins(group, bins[, right, ...])

Returns a DataArrayGroupBy object for performing grouped operations.

DataArray.rolling([dim, min_periods, center])

Rolling window object for DataArrays.

DataArray.rolling_exp([window, window_type])

Exponentially-weighted moving window.

DataArray.cumulative(dim[, min_periods])

Accumulating object for DataArrays.

DataArray.weighted(weights)

Weighted DataArray operations.

DataArray.coarsen([dim, boundary, side, ...])

Coarsen object for DataArrays.

DataArray.resample([indexer, skipna, ...])

Returns a Resample object for performing resampling operations.

DataArray.get_axis_num(dim)

Return axis number(s) corresponding to dimension(s) in this array.

DataArray.diff(dim[, n, label])

Calculate the n-th order discrete difference along given axis.

DataArray.dot(other[, dim])

Perform dot product of two DataArrays along their shared dims.

DataArray.quantile(q[, dim, method, ...])

Compute the qth quantile of the data along the specified dimension.

DataArray.differentiate(coord[, edge_order, ...])

Differentiate the array with the second order accurate central differences.

DataArray.integrate([coord, datetime_unit])

Integrate along the given coordinate using the trapezoidal rule.

DataArray.polyfit(dim, deg[, skipna, rcond, ...])

Least squares polynomial fit.

DataArray.map_blocks(func[, args, kwargs, ...])

Apply a function to each block of this DataArray.

DataArray.curvefit(coords, func[, ...])

Curve fitting optimization for arbitrary functions.

Aggregation#

DataArray.all([dim, keep_attrs])

Reduce this DataArray's data by applying all along some dimension(s).

DataArray.any([dim, keep_attrs])

Reduce this DataArray's data by applying any along some dimension(s).

DataArray.argmax([dim, axis, keep_attrs, skipna])

Index or indices of the maximum of the DataArray over one or more dimensions.

DataArray.argmin([dim, axis, keep_attrs, skipna])

Index or indices of the minimum of the DataArray over one or more dimensions.

DataArray.count([dim, keep_attrs])

Reduce this DataArray's data by applying count along some dimension(s).

DataArray.idxmax([dim, skipna, fill_value, ...])

Return the coordinate label of the maximum value along a dimension.

DataArray.idxmin([dim, skipna, fill_value, ...])

Return the coordinate label of the minimum value along a dimension.

DataArray.max([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying max along some dimension(s).

DataArray.min([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying min along some dimension(s).

DataArray.mean([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying mean along some dimension(s).

DataArray.median([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying median along some dimension(s).

DataArray.prod([dim, skipna, min_count, ...])

Reduce this DataArray's data by applying prod along some dimension(s).

DataArray.sum([dim, skipna, min_count, ...])

Reduce this DataArray's data by applying sum along some dimension(s).

DataArray.std([dim, skipna, ddof, keep_attrs])

Reduce this DataArray's data by applying std along some dimension(s).

DataArray.var([dim, skipna, ddof, keep_attrs])

Reduce this DataArray's data by applying var along some dimension(s).

DataArray.cumsum([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying cumsum along some dimension(s).

DataArray.cumprod([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying cumprod along some dimension(s).

ndarray methods#

DataArray.argsort([axis, kind, order])

Returns the indices that would sort this array.

DataArray.clip([min, max, keep_attrs])

Return an array whose values are limited to [min, max].

DataArray.conj()

Complex-conjugate all elements.

DataArray.conjugate()

Return the complex conjugate, element-wise.

DataArray.imag

The imaginary part of the array.

DataArray.searchsorted(v[, side, sorter])

Find indices where elements of v should be inserted into this array to maintain order.

DataArray.round(*args, **kwargs)

DataArray.real

The real part of the array.

DataArray.T

DataArray.rank(dim, *[, pct, keep_attrs])

Ranks the data.

String manipulation#

DataArray.str.capitalize()

Convert strings in the array to be capitalized.

DataArray.str.casefold()

Convert strings in the array to be casefolded.

DataArray.str.cat(*others[, sep])

Concatenate strings elementwise in the DataArray with other strings.

DataArray.str.center(width[, fillchar])

Pad left and right side of each string in the array.

DataArray.str.contains(pat[, case, flags, regex])

Test if pattern or regex is contained within each string of the array.

DataArray.str.count(pat[, flags, case])

Count occurrences of pattern in each string of the array.

DataArray.str.decode(encoding[, errors])

Decode character string in the array using indicated encoding.

DataArray.str.encode(encoding[, errors])

Encode character string in the array using indicated encoding.

DataArray.str.endswith(pat)

Test if the end of each string in the array matches a pattern.

DataArray.str.extract(pat, dim[, case, flags])

Extract the first match of capture groups in the regex pat as a new dimension in a DataArray.

DataArray.str.extractall(pat, group_dim, ...)

Extract all matches of capture groups in the regex pat as new dimensions in a DataArray.

DataArray.str.find(sub[, start, end, side])

Return lowest or highest indexes in each string in the array where the substring is fully contained between [start:end].

DataArray.str.findall(pat[, case, flags])

Find all occurrences of pattern or regular expression in the DataArray.

DataArray.str.format(*args, **kwargs)

Perform python string formatting on each element of the DataArray.

DataArray.str.get(i[, default])

Extract character number i from each string in the array.

DataArray.str.get_dummies(dim[, sep])

Return DataArray of dummy/indicator variables.

DataArray.str.index(sub[, start, end, side])

Return lowest or highest indexes in each string where the substring is fully contained between [start:end].

DataArray.str.isalnum()

Check whether all characters in each string are alphanumeric.

DataArray.str.isalpha()

Check whether all characters in each string are alphabetic.

DataArray.str.isdecimal()

Check whether all characters in each string are decimal.

DataArray.str.isdigit()

Check whether all characters in each string are digits.

DataArray.str.islower()

Check whether all characters in each string are lowercase.

DataArray.str.isnumeric()

Check whether all characters in each string are numeric.

DataArray.str.isspace()

Check whether all characters in each string are spaces.

DataArray.str.istitle()

Check whether all characters in each string are titlecase.

DataArray.str.isupper()

Check whether all characters in each string are uppercase.

DataArray.str.join([dim, sep])

Concatenate strings in a DataArray along a particular dimension.

DataArray.str.len()

Compute the length of each string in the array.

DataArray.str.ljust(width[, fillchar])

Pad right side of each string in the array.

DataArray.str.lower()

Convert strings in the array to lowercase.

DataArray.str.lstrip([to_strip])

Remove leading characters.

DataArray.str.match(pat[, case, flags])

Determine if each string in the array matches a regular expression.

DataArray.str.normalize(form)

Return the Unicode normal form for the strings in the DataArray.

DataArray.str.pad(width[, side, fillchar])

Pad strings in the array up to width.

DataArray.str.partition(dim[, sep])

Split the strings in the DataArray at the first occurrence of separator sep.

DataArray.str.repeat(repeats)

Repeat each string in the array.

DataArray.str.replace(pat, repl[, n, case, ...])

Replace occurrences of pattern/regex in the array with some string.

DataArray.str.rfind(sub[, start, end])

Return highest indexes in each string in the array where the substring is fully contained between [start:end].

DataArray.str.rindex(sub[, start, end])

Return highest indexes in each string where the substring is fully contained between [start:end].

DataArray.str.rjust(width[, fillchar])

Pad left side of each string in the array.

DataArray.str.rpartition(dim[, sep])

Split the strings in the DataArray at the last occurrence of separator sep.

DataArray.str.rsplit(dim[, sep, maxsplit])

Split strings in a DataArray around the given separator/delimiter sep.

DataArray.str.rstrip([to_strip])

Remove trailing characters.

DataArray.str.slice([start, stop, step])

Slice substrings from each string in the array.

DataArray.str.slice_replace([start, stop, repl])

Replace a positional slice of a string with another value.

DataArray.str.split(dim[, sep, maxsplit])

Split strings in a DataArray around the given separator/delimiter sep.

DataArray.str.startswith(pat)

Test if the start of each string in the array matches a pattern.

DataArray.str.strip([to_strip, side])

Remove leading and trailing characters.

DataArray.str.swapcase()

Convert strings in the array to be swapcased.

DataArray.str.title()

Convert strings in the array to titlecase.

DataArray.str.translate(table)

Map characters of each string through the given mapping table.

DataArray.str.upper()

Convert strings in the array to uppercase.

DataArray.str.wrap(width, **kwargs)

Wrap long strings in the array in paragraphs with length less than width.

DataArray.str.zfill(width)

Pad each string in the array by prepending '0' characters.
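The str accessor applies these methods elementwise; a short sketch:

```python
import xarray as xr

names = xr.DataArray(["alice", "bob", "carol"], dims="person")
upper = names.str.upper()           # elementwise uppercase
has_o = names.str.contains("o")     # elementwise pattern test
```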

Datetimelike properties#

Datetime properties:

DataArray.dt.year

The year of the datetime

DataArray.dt.month

The month as January=1, December=12

DataArray.dt.day

The days of the datetime

DataArray.dt.hour

The hours of the datetime

DataArray.dt.minute

The minutes of the datetime

DataArray.dt.second

The seconds of the datetime

DataArray.dt.microsecond

The microseconds of the datetime

DataArray.dt.nanosecond

The nanoseconds of the datetime

DataArray.dt.dayofweek

The day of the week with Monday=0, Sunday=6

DataArray.dt.weekday

The day of the week with Monday=0, Sunday=6

DataArray.dt.dayofyear

The ordinal day of the year

DataArray.dt.quarter

The quarter of the date

DataArray.dt.days_in_month

The number of days in the month

DataArray.dt.daysinmonth

The number of days in the month

DataArray.dt.days_in_year

The number of days in the year

DataArray.dt.season

Season of the year

DataArray.dt.time

Timestamps corresponding to datetimes

DataArray.dt.date

Date corresponding to datetimes

DataArray.dt.decimal_year

Convert the dates to a fractional year (the year plus the fraction of the year elapsed).

DataArray.dt.calendar

The name of the calendar of the dates.

DataArray.dt.is_month_start

Indicate whether the date is the first day of the month

DataArray.dt.is_month_end

Indicate whether the date is the last day of the month

DataArray.dt.is_quarter_end

Indicate whether the date is the last day of a quarter

DataArray.dt.is_year_start

Indicate whether the date is the first day of a year

DataArray.dt.is_leap_year

Indicate if the date belongs to a leap year

Datetime methods:

DataArray.dt.floor(freq)

Round timestamps downward to specified frequency resolution.

DataArray.dt.ceil(freq)

Round timestamps upward to specified frequency resolution.

DataArray.dt.isocalendar()

Dataset containing ISO year, week number, and weekday.

DataArray.dt.round(freq)

Round timestamps to specified frequency resolution.

DataArray.dt.strftime(date_format)

Return an array of formatted strings specified by date_format, which supports the same string format as the Python standard library.
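The datetime properties and methods above are reached through the .dt accessor on arrays with a datetime dtype. A minimal sketch:

```python
import pandas as pd
import xarray as xr

# Four timestamps, six hours apart
times = xr.DataArray(
    pd.to_datetime(
        ["2000-01-01T00", "2000-01-01T06", "2000-01-01T12", "2000-01-01T18"]
    ),
    dims="time",
)

years = times.dt.year          # 2000 for every element
hours = times.dt.hour          # 0, 6, 12, 18
floored = times.dt.floor("D")  # round each timestamp down to midnight
```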

Timedelta properties:

DataArray.dt.days

Number of days for each element

DataArray.dt.seconds

Number of seconds (>= 0 and less than 1 day) for each element

DataArray.dt.microseconds

Number of microseconds (>= 0 and less than 1 second) for each element

DataArray.dt.nanoseconds

Number of nanoseconds (>= 0 and less than 1 microsecond) for each element

DataArray.dt.total_seconds

Total duration of each element expressed in seconds.

Timedelta methods:

DataArray.dt.floor(freq)

Round timestamps downward to specified frequency resolution.

DataArray.dt.ceil(freq)

Round timestamps upward to specified frequency resolution.

DataArray.dt.round(freq)

Round timestamps to specified frequency resolution.
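The same accessor exposes the timedelta properties above when the array holds timedelta values. A minimal sketch:

```python
import pandas as pd
import xarray as xr

deltas = xr.DataArray(
    pd.to_timedelta(["1 days 2 hours", "3 days", "0 days 00:30:00"]), dims="t"
)

days = deltas.dt.days        # 1, 3, 0
seconds = deltas.dt.seconds  # 7200, 0, 1800 (remainder within each day)
```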

Reshaping and reorganizing#

DataArray.transpose(*dim[, ...])

Return a new DataArray object with transposed dimensions.

DataArray.stack([dim, create_index, index_cls])

Stack any number of existing dimensions into a single new dimension.

DataArray.unstack([dim, fill_value, sparse])

Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.

DataArray.to_unstacked_dataset(dim[, level])

Unstack DataArray expanding to Dataset along a given level of a stacked coordinate.

DataArray.shift([shifts, fill_value])

Shift this DataArray by an offset along one or more dimensions.

DataArray.roll([shifts, roll_coords])

Roll this array by an offset along one or more dimensions.

DataArray.pad([pad_width, mode, ...])

Pad this array along one or more dimensions.

DataArray.sortby(variables[, ascending])

Sort object by labels or values (along an axis).

DataArray.broadcast_like(other, *[, exclude])

Broadcast this DataArray against another Dataset or DataArray.
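A minimal sketch of stacking two dimensions into one and back, with small illustrative data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    coords={"x": ["a", "b"], "y": [0, 1, 2]},
    dims=("x", "y"),
)

stacked = da.stack(z=("x", "y"))  # new dim 'z' with a MultiIndex over (x, y)
roundtrip = stacked.unstack("z")  # recover the original 2-D layout
```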

DataTree#

Creating a DataTree#

Methods of creating a DataTree.

DataTree([dataset, children, name])

A tree-like hierarchical collection of xarray objects.

DataTree.from_dict(d, /[, name])

Create a datatree from a dictionary of data objects, organised by paths into the tree.

Tree Attributes#

Attributes relating to the recursive tree-like structure of a DataTree.

DataTree.parent

Parent of this node.

DataTree.children

Child nodes of this node, stored under a mapping via their names.

DataTree.name

The name of this node.

DataTree.path

Return the file-like path from the root to this node.

DataTree.root

Root node of the tree

DataTree.is_root

Whether this node is the tree root.

DataTree.is_leaf

Whether this node is a leaf node.

DataTree.leaves

All leaf nodes.

DataTree.level

Level of this node.

DataTree.depth

Maximum level of this tree.

DataTree.width

Number of nodes at this level in the tree.

DataTree.subtree

An iterator over all nodes in this tree, including both self and all descendants.

DataTree.descendants

Child nodes and all their child nodes.

DataTree.siblings

Nodes with the same parent as this node.

DataTree.lineage

All parent nodes and their parent nodes, starting with the closest.

DataTree.parents

All parent nodes and their parent nodes, starting with the closest.

DataTree.ancestors

All parent nodes and their parent nodes, starting with the most distant.

DataTree.groups

Return all netCDF4 groups in the tree, given as a tuple of path-like strings.

Data Contents#

Interface to the data objects (optionally) stored inside a single DataTree node. This interface echoes that of xarray.Dataset.

DataTree.dims

Mapping from dimension names to lengths.

DataTree.sizes

Mapping from dimension names to lengths.

DataTree.data_vars

Dictionary of DataArray objects corresponding to data variables

DataTree.coords

Dictionary of xarray.DataArray objects corresponding to coordinate variables

DataTree.attrs

Dictionary of global attributes on this node object.

DataTree.encoding

Dictionary of global encoding attributes on this node object.

DataTree.indexes

Mapping of pandas.Index objects used for label based indexing.

DataTree.nbytes

DataTree.dataset

An immutable Dataset-like view onto the data in this node.

DataTree.to_dataset([inherited])

Return the data in this node as a new xarray.Dataset object.

DataTree.has_data

Whether or not there are any variables in this node.

DataTree.has_attrs

Whether or not there are any metadata attributes in this node.

DataTree.is_empty

False if node contains any data or attrs.

DataTree.is_hollow

True if only leaf nodes contain data.

Dictionary Interface#

DataTree objects also have a dict-like interface mapping keys either to xarray.DataArray objects or to child DataTree nodes.

DataTree.__getitem__(key)

Access child nodes, variables, or coordinates stored anywhere in this tree.

DataTree.__setitem__(key, value)

Add either a child node or an array to the tree, at any position.

DataTree.__delitem__(key)

Remove a variable or child node from this datatree node.

DataTree.update(other)

Update this node's children and / or variables.

DataTree.get(key[, default])

Access child nodes, variables, or coordinates stored in this node.

DataTree.items()

DataTree.keys()

DataTree.values()

Tree Manipulation#

For manipulating, traversing, navigating, or mapping over the tree structure.

DataTree.orphan()

Detach this node from its parent.

DataTree.same_tree(other)

True if other node is in the same tree as this node.

DataTree.relative_to(other)

Compute the relative path from this node to node other.

DataTree.iter_lineage()

Iterate up the tree, starting from the current node.

DataTree.find_common_ancestor(other)

Find the first common ancestor of two nodes in the same tree.

DataTree.map_over_subtree(func, *args, **kwargs)

Apply a function to every dataset in this subtree, returning a new tree which stores the results.

DataTree.pipe(func, *args, **kwargs)

Apply func(self, *args, **kwargs)

DataTree.match(pattern)

Return nodes with paths matching pattern.

DataTree.filter(filterfunc)

Filter nodes according to a specified condition.

Pathlib-like Interface#

DataTree objects deliberately echo some of the API of pathlib.PurePath.

DataTree.name

The name of this node.

DataTree.parent

Parent of this node.

DataTree.parents

All parent nodes and their parent nodes, starting with the closest.

DataTree.relative_to(other)

Compute the relative path from this node to node other.

Missing:

DataTree.glob, DataTree.joinpath, DataTree.with_name, DataTree.walk, DataTree.rename, DataTree.replace

DataTree Contents#

Manipulate the contents of all nodes in a DataTree simultaneously.

DataTree.copy([deep])

Returns a copy of this subtree.

DataTree.assign_coords([coords])

Assign new coordinates to this object.

DataTree.merge(datatree)

Merge all the leaves of a second DataTree into this one.

DataTree.rename([name_dict])

Returns a new object with renamed variables, coordinates and dimensions.

DataTree.rename_vars([name_dict])

Returns a new object with renamed variables including coordinates

DataTree.rename_dims([dims_dict])

Returns a new object with renamed dimensions only.

DataTree.swap_dims([dims_dict])

Returns a new object with swapped dimensions.

DataTree.expand_dims([dim, axis, ...])

Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape.

DataTree.drop_vars(names, *[, errors])

Drop variables from this dataset.

DataTree.drop_dims(drop_dims, *[, errors])

Drop dimensions and associated variables from this dataset.

DataTree.set_coords(names)

Given names of one or more variables, set them as coordinates

DataTree.reset_coords([names, drop])

Given names of coordinates, reset them to become variables

DataTree Node Contents#

Manipulate the contents of a single DataTree node.

DataTree.assign([items])

Assign new data variables or child nodes to a DataTree, returning a new object with all the original items in addition to the new ones.

DataTree.drop_nodes(names, *[, errors])

Drop child nodes from this node.

Comparisons#

Compare one DataTree object to another.

DataTree.isomorphic(other[, from_root, ...])

Two DataTrees are considered isomorphic if every node has the same number of children.

DataTree.equals(other[, from_root])

Two DataTrees are equal if they have isomorphic node structures, with matching node names, and if they have matching variables and coordinates, all of which are equal.

DataTree.identical(other[, from_root])

Like equals, but will also check all dataset attributes and the attributes on all variables and coordinates.

Indexing#

Index into all nodes in the subtree simultaneously.

DataTree.isel([indexers, drop, missing_dims])

Returns a new dataset with each array indexed along the specified dimension(s).

DataTree.sel([indexers, method, tolerance, drop])

Returns a new dataset with each array indexed by tick labels along the specified dimension(s).

DataTree.drop_sel([labels, errors])

Drop index labels from this dataset.

DataTree.drop_isel([indexers])

Drop index positions from this Dataset.

DataTree.head([indexers])

Returns a new dataset with the first n values of each array for the specified dimension(s).

DataTree.tail([indexers])

Returns a new dataset with the last n values of each array for the specified dimension(s).

DataTree.thin([indexers])

Returns a new dataset with each array indexed along every n-th value for the specified dimension(s)

DataTree.squeeze([dim, drop, axis])

Return a new object with squeezed data.

DataTree.interp([coords, method, ...])

Interpolate a Dataset onto new coordinates

DataTree.interp_like(other[, method, ...])

Interpolate this object onto the coordinates of another object, filling the out of range values with NaN.

DataTree.reindex([indexers, method, ...])

Conform this object onto a new set of indexes, filling in missing values with fill_value.

DataTree.reindex_like(other[, method, ...])

Conform this object onto the indexes of another object, for indexes which the objects share.

DataTree.set_index([indexes, append])

Set Dataset (multi-)indexes using one or more existing coordinates or variables.

DataTree.reset_index(dims_or_levels, *[, drop])

Reset the specified index(es) or multi-index level(s).

DataTree.reorder_levels([dim_order])

Rearrange index levels using input order.

DataTree.query([queries, parser, engine, ...])

Return a new dataset with each array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the data variables in the dataset.

Missing: DataTree.loc

Missing Value Handling#

DataTree.isnull([keep_attrs])

Test each value in the array for whether it is a missing value.

DataTree.notnull([keep_attrs])

Test each value in the array for whether it is not a missing value.

DataTree.combine_first(other)

Combine two Datasets, defaulting to the data variables of self.

DataTree.dropna(dim, *[, how, thresh, subset])

Returns a new dataset with dropped labels for missing values along the provided dimension.

DataTree.fillna(value)

Fill missing values in this object.

DataTree.ffill(dim[, limit])

Fill NaN values by propagating values forward

DataTree.bfill(dim[, limit])

Fill NaN values by propagating values backward

DataTree.interpolate_na([dim, method, ...])

Fill in NaNs by interpolating according to different methods.

DataTree.where(cond[, other, drop])

Filter elements from this object according to a condition.

DataTree.isin(test_elements)

Test each value in the array for whether it is in test_elements.

Computation#

Apply a computation to the data in all nodes in the subtree simultaneously.

DataTree.map(func[, keep_attrs, args])

Apply a function to each data variable in this dataset

DataTree.reduce(func[, dim, keep_attrs, ...])

Reduce this dataset by applying func along some dimension(s).

DataTree.diff(dim[, n, label])

Calculate the n-th order discrete difference along given axis.

DataTree.quantile(q[, dim, method, ...])

Compute the qth quantile of the data along the specified dimension.

DataTree.differentiate(coord[, edge_order, ...])

Differentiate with the second order accurate central differences.

DataTree.integrate(coord[, datetime_unit])

Integrate along the given coordinate using the trapezoidal rule.

DataTree.map_blocks(func[, args, kwargs, ...])

Apply a function to each block of this Dataset.

DataTree.polyfit(dim, deg[, skipna, rcond, ...])

Least squares polynomial fit.

DataTree.curvefit(coords, func[, ...])

Curve fitting optimization for arbitrary functions.

Aggregation#

Aggregate data in all nodes in the subtree simultaneously.

DataTree.all([dim, keep_attrs])

Reduce this Dataset's data by applying all along some dimension(s).

DataTree.any([dim, keep_attrs])

Reduce this Dataset's data by applying any along some dimension(s).

DataTree.argmax([dim])

Indices of the maxima of the member variables.

DataTree.argmin([dim])

Indices of the minima of the member variables.

DataTree.idxmax([dim, skipna, fill_value, ...])

Return the coordinate label of the maximum value along a dimension.

DataTree.idxmin([dim, skipna, fill_value, ...])

Return the coordinate label of the minimum value along a dimension.

DataTree.max([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying max along some dimension(s).

DataTree.min([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying min along some dimension(s).

DataTree.mean([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying mean along some dimension(s).

DataTree.median([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying median along some dimension(s).

DataTree.prod([dim, skipna, min_count, ...])

Reduce this Dataset's data by applying prod along some dimension(s).

DataTree.sum([dim, skipna, min_count, ...])

Reduce this Dataset's data by applying sum along some dimension(s).

DataTree.std([dim, skipna, ddof, keep_attrs])

Reduce this Dataset's data by applying std along some dimension(s).

DataTree.var([dim, skipna, ddof, keep_attrs])

Reduce this Dataset's data by applying var along some dimension(s).

DataTree.cumsum([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying cumsum along some dimension(s).

DataTree.cumprod([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying cumprod along some dimension(s).

ndarray methods#

Methods copied from numpy.ndarray objects, here applying to the data in all nodes in the subtree.

DataTree.argsort([axis, kind, order])

DataTree.astype(dtype, *[, order, casting, ...])

Copy of the xarray object, with data cast to a specified type.

DataTree.clip([min, max, keep_attrs])

Return an array whose values are limited to [min, max].

DataTree.conj()

DataTree.conjugate()

DataTree.round(*args, **kwargs)

DataTree.rank(dim, *[, pct, keep_attrs])

Ranks the data.

Reshaping and reorganising#

Reshape or reorganise the data in all nodes in the subtree.

DataTree.transpose(*dim[, missing_dims])

Return a new Dataset object with all array dimensions transposed.

DataTree.stack([dim, create_index, index_cls])

Stack any number of existing dimensions into a single new dimension.

DataTree.unstack([dim, fill_value, sparse])

Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.

DataTree.shift([shifts, fill_value])

Shift this dataset by an offset along one or more dimensions.

DataTree.roll([shifts, roll_coords])

Roll this dataset by an offset along one or more dimensions.

DataTree.pad([pad_width, mode, stat_length, ...])

Pad this dataset along one or more dimensions.

DataTree.sortby(variables[, ascending])

Sort object by labels or values (along an axis).

DataTree.broadcast_like(other[, exclude])

Broadcast this object against another Dataset or DataArray.

IO / Conversion#

Dataset methods#

load_dataset(filename_or_obj, **kwargs)

Open, load into memory, and close a Dataset from a file or file-like object.

open_dataset(filename_or_obj, *[, engine, ...])

Open and decode a dataset from a file or file-like object.

open_mfdataset(paths[, chunks, concat_dim, ...])

Open multiple files as a single dataset.

open_zarr(store[, group, synchronizer, ...])

Load and decode a dataset from a Zarr store.

save_mfdataset(datasets, paths[, mode, ...])

Write multiple datasets to disk as netCDF files simultaneously.

Dataset.as_numpy()

Coerces wrapped data and coordinates into numpy arrays, returning a Dataset.

Dataset.from_dataframe(dataframe[, sparse])

Convert a pandas.DataFrame into an xarray.Dataset

Dataset.from_dict(d)

Convert a dictionary into an xarray.Dataset.

Dataset.to_dataarray([dim, name])

Convert this dataset into an xarray.DataArray

Dataset.to_dataframe([dim_order])

Convert this dataset into a pandas.DataFrame.

Dataset.to_dask_dataframe([dim_order, set_index])

Convert this dataset into a dask.dataframe.DataFrame.

Dataset.to_dict([data, encoding])

Convert this dataset to a dictionary following xarray naming conventions.

Dataset.to_netcdf([path, mode, format, ...])

Write dataset contents to a netCDF file.

Dataset.to_pandas()

Convert this dataset into a pandas object without changing the number of dimensions.

Dataset.to_zarr([store, chunk_store, mode, ...])

Write dataset contents to a zarr group.

Dataset.chunk([chunks, name_prefix, token, ...])

Coerce all arrays in this dataset into dask arrays with the given chunks.

Dataset.close()

Release any resources linked to this object.

Dataset.compute(**kwargs)

Manually trigger loading and/or computation of this dataset's data from disk or a remote source into memory and return a new dataset.

Dataset.filter_by_attrs(**kwargs)

Returns a Dataset with variables that match specific conditions.

Dataset.info([buf])

Concise summary of a Dataset's variables and attributes.

Dataset.load(**kwargs)

Manually trigger loading and/or computation of this dataset's data from disk or a remote source into memory and return this dataset.

Dataset.persist(**kwargs)

Trigger computation, keeping data as dask arrays

Dataset.unify_chunks()

Unify chunk size along all chunked dimensions of this Dataset.
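A minimal round-trip sketch using the dictionary converters above, which need no external I/O backend:

```python
import xarray as xr

ds = xr.Dataset(
    {"temp": ("x", [10.0, 11.5])},
    coords={"x": [0, 1]},
    attrs={"title": "demo"},
)

d = ds.to_dict()                    # nested, JSON-serializable dictionary
restored = xr.Dataset.from_dict(d)  # reconstruct an equivalent Dataset
```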

DataArray methods#

load_dataarray(filename_or_obj, **kwargs)

Open, load into memory, and close a DataArray from a file or file-like object containing a single data variable.

open_dataarray(filename_or_obj, *[, engine, ...])

Open a DataArray from a file or file-like object containing a single data variable.

DataArray.as_numpy()

Coerces wrapped data and coordinates into numpy arrays, returning a DataArray.

DataArray.from_dict(d)

Convert a dictionary into an xarray.DataArray

DataArray.from_iris(cube)

Convert an iris.cube.Cube into an xarray.DataArray

DataArray.from_series(series[, sparse])

Convert a pandas.Series into an xarray.DataArray.

DataArray.to_dask_dataframe([dim_order, ...])

Convert this array into a dask.dataframe.DataFrame.

DataArray.to_dataframe([name, dim_order])

Convert this array and its coordinates into a tidy pandas.DataFrame.

DataArray.to_dataset([dim, name, promote_attrs])

Convert a DataArray to a Dataset.

DataArray.to_dict([data, encoding])

Convert this xarray.DataArray into a dictionary following xarray naming conventions.

DataArray.to_index()

Convert this variable to a pandas.Index.

DataArray.to_iris()

Convert this array into an iris.cube.Cube

DataArray.to_masked_array([copy])

Convert this array into a numpy.ma.MaskedArray

DataArray.to_netcdf([path, mode, format, ...])

Write DataArray contents to a netCDF file.

DataArray.to_numpy()

Coerces wrapped data to numpy and returns a numpy.ndarray.

DataArray.to_pandas()

Convert this array into a pandas object with the same shape.

DataArray.to_series()

Convert this array into a pandas.Series.

DataArray.to_zarr([store, chunk_store, ...])

Write DataArray contents to a Zarr store

DataArray.chunk([chunks, name_prefix, ...])

Coerce this array's data into a dask array with the given chunks.

DataArray.close()

Release any resources linked to this object.

DataArray.compute(**kwargs)

Manually trigger loading of this array's data from disk or a remote source into memory and return a new array.

DataArray.persist(**kwargs)

Trigger computation in constituent dask arrays

DataArray.load(**kwargs)

Manually trigger loading of this array's data from disk or a remote source into memory and return this array.

DataArray.unify_chunks()

Unify chunk size along all chunked dimensions of this DataArray.
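A minimal sketch of converting between a DataArray and pandas objects:

```python
import pandas as pd
import xarray as xr

da = xr.DataArray(
    [1.0, 2.0, 4.0], coords={"x": [10, 20, 30]}, dims="x", name="signal"
)

s = da.to_series()                  # pandas.Series indexed by the 'x' coordinate
back = xr.DataArray.from_series(s)  # round-trip back to a DataArray
```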

DataTree methods#

open_datatree(filename_or_obj[, engine])

Open and decode a DataTree from a file or file-like object, creating one tree node for each group in the file.

open_groups(filename_or_obj[, engine])

Open and decode a file or file-like object, creating a dictionary containing one xarray Dataset for each group in the file.

map_over_subtree(func)

Decorator which turns a function which acts on (and returns) Datasets into one which acts on and returns DataTrees.

DataTree.to_dict()

Create a dictionary mapping of absolute node paths to the data contained in those nodes.

DataTree.to_netcdf(filepath[, mode, ...])

Write datatree contents to a netCDF file.

DataTree.to_zarr(store[, mode, encoding, ...])

Write datatree contents to a Zarr store.

Missing: open_mfdatatree

Coordinates objects#

Dataset#

core.coordinates.DatasetCoordinates(dataset)

Dictionary like container for Dataset coordinates (variables + indexes).

core.coordinates.DatasetCoordinates.dtypes

Mapping from coordinate names to dtypes.

DataArray#

core.coordinates.DataArrayCoordinates(dataarray)

Dictionary like container for DataArray coordinates (variables + indexes).

core.coordinates.DataArrayCoordinates.dtypes

Mapping from coordinate names to dtypes.

Plotting#

Dataset#

Dataset.plot.scatter(*args[, x, y, z, hue, ...])

Scatter variables against each other.

Dataset.plot.quiver(*args[, x, y, u, v, ...])

Quiver plot of Dataset variables.

Dataset.plot.streamplot(*args[, x, y, u, v, ...])

Plot streamlines of Dataset variables.

DataArray#

DataArray.plot(*[, row, col, col_wrap, ax, ...])

Default plot of DataArray using matplotlib.pyplot.

DataArray.plot.contourf(*args[, x, y, ...])

Filled contour plot of 2D DataArray.

DataArray.plot.contour(*args[, x, y, ...])

Contour plot of 2D DataArray.

DataArray.plot.hist(*args[, figsize, size, ...])

Histogram of DataArray.

DataArray.plot.imshow(*args[, x, y, ...])

Image plot of 2D DataArray.

DataArray.plot.line(*args[, row, col, ...])

Line plot of DataArray values.

DataArray.plot.pcolormesh(*args[, x, y, ...])

Pseudocolor plot of 2D DataArray.

DataArray.plot.step(*args[, where, ...])

Step plot of DataArray values.

DataArray.plot.scatter(*args[, x, y, z, ...])

Scatter variables against each other.

DataArray.plot.surface(*args[, x, y, ...])

Surface plot of 2D DataArray.
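A minimal plotting sketch, assuming matplotlib is installed (the non-interactive Agg backend is selected so no display is required):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripts/CI
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

da = xr.DataArray(np.sin(np.linspace(0, 2 * np.pi, 50)), dims="x", name="wave")

lines = da.plot.line()  # 1-D data dispatches to a line plot
plt.close("all")
```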

Faceting#

plot.FacetGrid(data[, col, row, col_wrap, ...])

Initialize the Matplotlib figure and FacetGrid object.

plot.FacetGrid.add_colorbar(**kwargs)

Draw a colorbar.

plot.FacetGrid.add_legend(*[, label, ...])

plot.FacetGrid.add_quiverkey(u, v, **kwargs)

plot.FacetGrid.map(func, *args, **kwargs)

Apply a plotting function to each facet's subset of the data.

plot.FacetGrid.map_dataarray(func, x, y, ...)

Apply a plotting function to a 2d facet's subset of the data.

plot.FacetGrid.map_dataarray_line(func, x, ...)

plot.FacetGrid.map_dataset(func[, x, y, ...])

plot.FacetGrid.map_plot1d(func, x, y, *[, ...])

Apply a plotting function to a 1d facet's subset of the data.

plot.FacetGrid.set_axis_labels(*axlabels)

Set axis labels on the left column and bottom row of the grid.

plot.FacetGrid.set_ticks([max_xticks, ...])

Set and control tick behavior.

plot.FacetGrid.set_titles([template, ...])

Draw titles either above each facet or on the grid margins.

plot.FacetGrid.set_xlabels([label])

Label the x axis on the bottom row of the grid.

plot.FacetGrid.set_ylabels([label])

Label the y axis on the left column of the grid.

GroupBy objects#

Dataset#

DatasetGroupBy(obj, groupers[, ...])

DatasetGroupBy.map(func[, args, shortcut])

Apply a function to each Dataset in the group and concatenate them together into a new Dataset.

DatasetGroupBy.reduce(func[, dim, axis, ...])

Reduce the items in this group by applying func along some dimension(s).

DatasetGroupBy.assign(**kwargs)

Assign data variables by group.

DatasetGroupBy.assign_coords([coords])

Assign coordinates by group.

DatasetGroupBy.first([skipna, keep_attrs])

Return the first element of each group along the group dimension

DatasetGroupBy.last([skipna, keep_attrs])

Return the last element of each group along the group dimension

DatasetGroupBy.fillna(value)

Fill missing values in this object by group.

DatasetGroupBy.quantile(q[, dim, method, ...])

Compute the qth quantile over each array in the groups and concatenate them together into a new array.

DatasetGroupBy.where(cond[, other])

Return elements from self or other depending on cond.

DatasetGroupBy.all([dim, keep_attrs])

Reduce this Dataset's data by applying all along some dimension(s).

DatasetGroupBy.any([dim, keep_attrs])

Reduce this Dataset's data by applying any along some dimension(s).

DatasetGroupBy.count([dim, keep_attrs])

Reduce this Dataset's data by applying count along some dimension(s).

DatasetGroupBy.cumsum([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying cumsum along some dimension(s).

DatasetGroupBy.cumprod([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying cumprod along some dimension(s).

DatasetGroupBy.max([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying max along some dimension(s).

DatasetGroupBy.mean([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying mean along some dimension(s).

DatasetGroupBy.median([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying median along some dimension(s).

DatasetGroupBy.min([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying min along some dimension(s).

DatasetGroupBy.prod([dim, skipna, ...])

Reduce this Dataset's data by applying prod along some dimension(s).

DatasetGroupBy.std([dim, skipna, ddof, ...])

Reduce this Dataset's data by applying std along some dimension(s).

DatasetGroupBy.sum([dim, skipna, min_count, ...])

Reduce this Dataset's data by applying sum along some dimension(s).

DatasetGroupBy.var([dim, skipna, ddof, ...])

Reduce this Dataset's data by applying var along some dimension(s).

DatasetGroupBy.dims

DatasetGroupBy.groups

Mapping from group labels to indices.

DataArray#

DataArrayGroupBy(obj, groupers[, ...])

DataArrayGroupBy.map(func[, args, shortcut])

Apply a function to each array in the group and concatenate them together into a new array.

DataArrayGroupBy.reduce(func[, dim, axis, ...])

Reduce the items in this group by applying func along some dimension(s).

DataArrayGroupBy.assign_coords([coords])

Assign coordinates by group.

DataArrayGroupBy.first([skipna, keep_attrs])

Return the first element of each group along the group dimension

DataArrayGroupBy.last([skipna, keep_attrs])

Return the last element of each group along the group dimension

DataArrayGroupBy.fillna(value)

Fill missing values in this object by group.

DataArrayGroupBy.quantile(q[, dim, method, ...])

Compute the qth quantile over each array in the groups and concatenate them together into a new array.

DataArrayGroupBy.where(cond[, other])

Return elements from self or other depending on cond.

DataArrayGroupBy.all([dim, keep_attrs])

Reduce this DataArray's data by applying all along some dimension(s).

DataArrayGroupBy.any([dim, keep_attrs])

Reduce this DataArray's data by applying any along some dimension(s).

DataArrayGroupBy.count([dim, keep_attrs])

Reduce this DataArray's data by applying count along some dimension(s).

DataArrayGroupBy.cumsum([dim, skipna, ...])

Reduce this DataArray's data by applying cumsum along some dimension(s).

DataArrayGroupBy.cumprod([dim, skipna, ...])

Reduce this DataArray's data by applying cumprod along some dimension(s).

DataArrayGroupBy.max([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying max along some dimension(s).

DataArrayGroupBy.mean([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying mean along some dimension(s).

DataArrayGroupBy.median([dim, skipna, ...])

Reduce this DataArray's data by applying median along some dimension(s).

DataArrayGroupBy.min([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying min along some dimension(s).

DataArrayGroupBy.prod([dim, skipna, ...])

Reduce this DataArray's data by applying prod along some dimension(s).

DataArrayGroupBy.std([dim, skipna, ddof, ...])

Reduce this DataArray's data by applying std along some dimension(s).

DataArrayGroupBy.sum([dim, skipna, ...])

Reduce this DataArray's data by applying sum along some dimension(s).

DataArrayGroupBy.var([dim, skipna, ddof, ...])

Reduce this DataArray's data by applying var along some dimension(s).

DataArrayGroupBy.dims

DataArrayGroupBy.groups

Mapping from group labels to indices.
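A minimal groupby sketch; grouping by a non-dimension coordinate reduces each group separately:

```python
import xarray as xr

da = xr.DataArray(
    [1.0, 2.0, 3.0, 4.0],
    coords={"x": [0, 1, 2, 3], "label": ("x", ["a", "a", "b", "b"])},
    dims="x",
)

means = da.groupby("label").mean()  # one value per group label
```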

Grouper Objects#

groupers.BinGrouper(bins[, right, labels, ...])

Grouper object for binning numeric data.

groupers.UniqueGrouper([_group_as_index])

Grouper object for grouping by a categorical variable.

groupers.TimeResampler(freq[, closed, ...])

Grouper object specialized to resampling the time coordinate.

Rolling objects#

Dataset#

DatasetRolling(obj, windows[, min_periods, ...])

DatasetRolling.construct([window_dim, ...])

Convert this rolling object to xr.Dataset, where the window dimension is stacked as a new dimension

DatasetRolling.reduce(func[, keep_attrs])

Reduce the items in this group by applying func along some dimension(s).

DatasetRolling.argmax([keep_attrs])

Reduce this object's data windows by applying argmax along its dimension.

DatasetRolling.argmin([keep_attrs])

Reduce this object's data windows by applying argmin along its dimension.

DatasetRolling.count([keep_attrs])

Reduce this object's data windows by applying count along its dimension.

DatasetRolling.max([keep_attrs])

Reduce this object's data windows by applying max along its dimension.

DatasetRolling.mean([keep_attrs])

Reduce this object's data windows by applying mean along its dimension.

DatasetRolling.median([keep_attrs])

Reduce this object's data windows by applying median along its dimension.

DatasetRolling.min([keep_attrs])

Reduce this object's data windows by applying min along its dimension.

DatasetRolling.prod([keep_attrs])

Reduce this object's data windows by applying prod along its dimension.

DatasetRolling.std([keep_attrs])

Reduce this object's data windows by applying std along its dimension.

DatasetRolling.sum([keep_attrs])

Reduce this object's data windows by applying sum along its dimension.

DatasetRolling.var([keep_attrs])

Reduce this object's data windows by applying var along its dimension.

DataArray#

DataArrayRolling(obj, windows[, ...])

DataArrayRolling.__iter__()

DataArrayRolling.construct([window_dim, ...])

Convert this rolling object to an xr.DataArray, where the window dimension is stacked as a new dimension.

DataArrayRolling.reduce(func[, keep_attrs])

Reduce the items in this group by applying func along some dimension(s).

DataArrayRolling.argmax([keep_attrs])

Reduce this object's data windows by applying argmax along its dimension.

DataArrayRolling.argmin([keep_attrs])

Reduce this object's data windows by applying argmin along its dimension.

DataArrayRolling.count([keep_attrs])

Reduce this object's data windows by applying count along its dimension.

DataArrayRolling.max([keep_attrs])

Reduce this object's data windows by applying max along its dimension.

DataArrayRolling.mean([keep_attrs])

Reduce this object's data windows by applying mean along its dimension.

DataArrayRolling.median([keep_attrs])

Reduce this object's data windows by applying median along its dimension.

DataArrayRolling.min([keep_attrs])

Reduce this object's data windows by applying min along its dimension.

DataArrayRolling.prod([keep_attrs])

Reduce this object's data windows by applying prod along its dimension.

DataArrayRolling.std([keep_attrs])

Reduce this object's data windows by applying std along its dimension.

DataArrayRolling.sum([keep_attrs])

Reduce this object's data windows by applying sum along its dimension.

DataArrayRolling.var([keep_attrs])

Reduce this object's data windows by applying var along its dimension.
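
A short sketch of the rolling reductions above, on invented data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

# A centered 3-point rolling mean; edge windows are NaN unless
# min_periods is lowered.
smoothed = da.rolling(x=3, center=True).mean()

# construct() exposes each window as an extra dimension, enabling
# custom reductions beyond the built-in ones.
windows = da.rolling(x=3).construct("window")
```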

Coarsen objects#

Dataset#

DatasetCoarsen(obj, windows, boundary, side, ...)

DatasetCoarsen.all([keep_attrs])

Reduce this DatasetCoarsen's data by applying all along some dimension(s).

DatasetCoarsen.any([keep_attrs])

Reduce this DatasetCoarsen's data by applying any along some dimension(s).

DatasetCoarsen.construct([window_dim, ...])

Convert this Coarsen object to a DataArray or Dataset, where the coarsening dimension is split or reshaped to two new dimensions.

DatasetCoarsen.count([keep_attrs])

Reduce this DatasetCoarsen's data by applying count along some dimension(s).

DatasetCoarsen.max([keep_attrs])

Reduce this DatasetCoarsen's data by applying max along some dimension(s).

DatasetCoarsen.mean([keep_attrs])

Reduce this DatasetCoarsen's data by applying mean along some dimension(s).

DatasetCoarsen.median([keep_attrs])

Reduce this DatasetCoarsen's data by applying median along some dimension(s).

DatasetCoarsen.min([keep_attrs])

Reduce this DatasetCoarsen's data by applying min along some dimension(s).

DatasetCoarsen.prod([keep_attrs])

Reduce this DatasetCoarsen's data by applying prod along some dimension(s).

DatasetCoarsen.reduce(func[, keep_attrs])

Reduce the items in this group by applying func along some dimension(s).

DatasetCoarsen.std([keep_attrs])

Reduce this DatasetCoarsen's data by applying std along some dimension(s).

DatasetCoarsen.sum([keep_attrs])

Reduce this DatasetCoarsen's data by applying sum along some dimension(s).

DatasetCoarsen.var([keep_attrs])

Reduce this DatasetCoarsen's data by applying var along some dimension(s).

DataArray#

DataArrayCoarsen(obj, windows, boundary, ...)

DataArrayCoarsen.all([keep_attrs])

Reduce this DataArrayCoarsen's data by applying all along some dimension(s).

DataArrayCoarsen.any([keep_attrs])

Reduce this DataArrayCoarsen's data by applying any along some dimension(s).

DataArrayCoarsen.construct([window_dim, ...])

Convert this Coarsen object to a DataArray or Dataset, where the coarsening dimension is split or reshaped to two new dimensions.

DataArrayCoarsen.count([keep_attrs])

Reduce this DataArrayCoarsen's data by applying count along some dimension(s).

DataArrayCoarsen.max([keep_attrs])

Reduce this DataArrayCoarsen's data by applying max along some dimension(s).

DataArrayCoarsen.mean([keep_attrs])

Reduce this DataArrayCoarsen's data by applying mean along some dimension(s).

DataArrayCoarsen.median([keep_attrs])

Reduce this DataArrayCoarsen's data by applying median along some dimension(s).

DataArrayCoarsen.min([keep_attrs])

Reduce this DataArrayCoarsen's data by applying min along some dimension(s).

DataArrayCoarsen.prod([keep_attrs])

Reduce this DataArrayCoarsen's data by applying prod along some dimension(s).

DataArrayCoarsen.reduce(func[, keep_attrs])

Reduce the items in this group by applying func along some dimension(s).

DataArrayCoarsen.std([keep_attrs])

Reduce this DataArrayCoarsen's data by applying std along some dimension(s).

DataArrayCoarsen.sum([keep_attrs])

Reduce this DataArrayCoarsen's data by applying sum along some dimension(s).

DataArrayCoarsen.var([keep_attrs])

Reduce this DataArrayCoarsen's data by applying var along some dimension(s).
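
Coarsening reduces over fixed-size blocks rather than sliding windows; a minimal sketch on invented data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0), dims="x")

# Downsample by averaging non-overlapping blocks of 3 points along x.
coarse = da.coarsen(x=3).mean()
```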

Exponential rolling objects#

RollingExp(obj, windows[, window_type, ...])

Exponentially weighted moving window object.

RollingExp.mean([keep_attrs])

Exponentially weighted moving average.

RollingExp.sum([keep_attrs])

Exponentially weighted moving sum.

Weighted objects#

Dataset#

DatasetWeighted(obj, weights)

DatasetWeighted.mean([dim, skipna, keep_attrs])

Reduce this Dataset's data by a weighted mean along some dimension(s).

DatasetWeighted.quantile(q, *[, dim, ...])

Apply a weighted quantile to this Dataset's data along some dimension(s).

DatasetWeighted.sum([dim, skipna, keep_attrs])

Reduce this Dataset's data by a weighted sum along some dimension(s).

DatasetWeighted.std([dim, skipna, keep_attrs])

Reduce this Dataset's data by a weighted std along some dimension(s).

DatasetWeighted.var([dim, skipna, keep_attrs])

Reduce this Dataset's data by a weighted var along some dimension(s).

DatasetWeighted.sum_of_weights([dim, keep_attrs])

Calculate the sum of weights, accounting for missing values in the data.

DatasetWeighted.sum_of_squares([dim, ...])

Reduce this Dataset's data by a weighted sum_of_squares along some dimension(s).

DataArray#

DataArrayWeighted(obj, weights)

DataArrayWeighted.mean([dim, skipna, keep_attrs])

Reduce this DataArray's data by a weighted mean along some dimension(s).

DataArrayWeighted.quantile(q, *[, dim, ...])

Apply a weighted quantile to this DataArray's data along some dimension(s).

DataArrayWeighted.sum([dim, skipna, keep_attrs])

Reduce this DataArray's data by a weighted sum along some dimension(s).

DataArrayWeighted.std([dim, skipna, keep_attrs])

Reduce this DataArray's data by a weighted std along some dimension(s).

DataArrayWeighted.var([dim, skipna, keep_attrs])

Reduce this DataArray's data by a weighted var along some dimension(s).

DataArrayWeighted.sum_of_weights([dim, ...])

Calculate the sum of weights, accounting for missing values in the data.

DataArrayWeighted.sum_of_squares([dim, ...])

Reduce this DataArray's data by a weighted sum_of_squares along some dimension(s).
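
A minimal weighted-reduction sketch; the weights are invented, and a weight of zero simply excludes that point:

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0, 3.0], dims="x")
weights = xr.DataArray([3.0, 1.0, 0.0], dims="x")

# Weighted mean: (1*3 + 2*1 + 3*0) / (3 + 1 + 0) = 1.25
wmean = da.weighted(weights).mean()
```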

Resample objects#

Dataset#

DatasetResample(*args[, dim, resample_dim])

DatasetGroupBy object specialized to resampling a specified dimension.

DatasetResample.asfreq()

Return values of original object at the new up-sampling frequency; essentially a re-index with new times set to NaN.

DatasetResample.backfill([tolerance])

Backward fill new values at up-sampled frequency.

DatasetResample.interpolate([kind])

Interpolate up-sampled data using the original data as knots.

DatasetResample.nearest([tolerance])

Take new values from nearest original coordinate to up-sampled frequency coordinates.

DatasetResample.pad([tolerance])

Forward fill new values at up-sampled frequency.

DatasetResample.all([dim, keep_attrs])

Reduce this Dataset's data by applying all along some dimension(s).

DatasetResample.any([dim, keep_attrs])

Reduce this Dataset's data by applying any along some dimension(s).

DatasetResample.apply(func[, args, shortcut])

Backward-compatible implementation of map.

DatasetResample.assign(**kwargs)

Assign data variables by group.

DatasetResample.assign_coords([coords])

Assign coordinates by group.

DatasetResample.bfill([tolerance])

Backward fill new values at up-sampled frequency.

DatasetResample.count([dim, keep_attrs])

Reduce this Dataset's data by applying count along some dimension(s).

DatasetResample.ffill([tolerance])

Forward fill new values at up-sampled frequency.

DatasetResample.fillna(value)

Fill missing values in this object by group.

DatasetResample.first([skipna, keep_attrs])

Return the first element of each group along the group dimension.

DatasetResample.last([skipna, keep_attrs])

Return the last element of each group along the group dimension.

DatasetResample.map(func[, args, shortcut])

Apply a function over each Dataset in the groups generated for resampling and concatenate them together into a new Dataset.

DatasetResample.max([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying max along some dimension(s).

DatasetResample.mean([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying mean along some dimension(s).

DatasetResample.median([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying median along some dimension(s).

DatasetResample.min([dim, skipna, keep_attrs])

Reduce this Dataset's data by applying min along some dimension(s).

DatasetResample.prod([dim, skipna, ...])

Reduce this Dataset's data by applying prod along some dimension(s).

DatasetResample.quantile(q[, dim, method, ...])

Compute the qth quantile over each array in the groups and concatenate them together into a new array.

DatasetResample.reduce(func[, dim, axis, ...])

Reduce the items in this group by applying func along the pre-defined resampling dimension.

DatasetResample.std([dim, skipna, ddof, ...])

Reduce this Dataset's data by applying std along some dimension(s).

DatasetResample.sum([dim, skipna, ...])

Reduce this Dataset's data by applying sum along some dimension(s).

DatasetResample.var([dim, skipna, ddof, ...])

Reduce this Dataset's data by applying var along some dimension(s).

DatasetResample.where(cond[, other])

Return elements from self or other depending on cond.

DatasetResample.dims

DatasetResample.groups

Mapping from group labels to indices.

DataArray#

DataArrayResample(*args[, dim, resample_dim])

DataArrayGroupBy object specialized to time resampling operations over a specified dimension.

DataArrayResample.asfreq()

Return values of original object at the new up-sampling frequency; essentially a re-index with new times set to NaN.

DataArrayResample.backfill([tolerance])

Backward fill new values at up-sampled frequency.

DataArrayResample.interpolate([kind])

Interpolate up-sampled data using the original data as knots.

DataArrayResample.nearest([tolerance])

Take new values from nearest original coordinate to up-sampled frequency coordinates.

DataArrayResample.pad([tolerance])

Forward fill new values at up-sampled frequency.

DataArrayResample.all([dim, keep_attrs])

Reduce this DataArray's data by applying all along some dimension(s).

DataArrayResample.any([dim, keep_attrs])

Reduce this DataArray's data by applying any along some dimension(s).

DataArrayResample.apply(func[, args, shortcut])

Backward-compatible implementation of map.

DataArrayResample.assign_coords([coords])

Assign coordinates by group.

DataArrayResample.bfill([tolerance])

Backward fill new values at up-sampled frequency.

DataArrayResample.count([dim, keep_attrs])

Reduce this DataArray's data by applying count along some dimension(s).

DataArrayResample.ffill([tolerance])

Forward fill new values at up-sampled frequency.

DataArrayResample.fillna(value)

Fill missing values in this object by group.

DataArrayResample.first([skipna, keep_attrs])

Return the first element of each group along the group dimension.

DataArrayResample.last([skipna, keep_attrs])

Return the last element of each group along the group dimension.

DataArrayResample.map(func[, args, shortcut])

Apply a function to each array in the group and concatenate them together into a new array.

DataArrayResample.max([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying max along some dimension(s).

DataArrayResample.mean([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying mean along some dimension(s).

DataArrayResample.median([dim, skipna, ...])

Reduce this DataArray's data by applying median along some dimension(s).

DataArrayResample.min([dim, skipna, keep_attrs])

Reduce this DataArray's data by applying min along some dimension(s).

DataArrayResample.prod([dim, skipna, ...])

Reduce this DataArray's data by applying prod along some dimension(s).

DataArrayResample.quantile(q[, dim, method, ...])

Compute the qth quantile over each array in the groups and concatenate them together into a new array.

DataArrayResample.reduce(func[, dim, axis, ...])

Reduce the items in this group by applying func along the pre-defined resampling dimension.

DataArrayResample.std([dim, skipna, ddof, ...])

Reduce this DataArray's data by applying std along some dimension(s).

DataArrayResample.sum([dim, skipna, ...])

Reduce this DataArray's data by applying sum along some dimension(s).

DataArrayResample.var([dim, skipna, ddof, ...])

Reduce this DataArray's data by applying var along some dimension(s).

DataArrayResample.where(cond[, other])

Return elements from self or other depending on cond.

DataArrayResample.dims

DataArrayResample.groups

Mapping from group labels to indices.
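
A short downsampling sketch; the 12-hourly series is invented:

```python
import numpy as np
import xarray as xr

# Hypothetical 12-hourly time series resampled to daily means.
da = xr.DataArray(
    np.arange(4.0),
    dims="time",
    coords={"time": xr.date_range("2000-01-01", periods=4, freq="12h")},
)
daily = da.resample(time="1D").mean()
```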

Accessors#

accessor_dt.DatetimeAccessor(obj)

Access datetime fields for DataArrays with datetime-like dtypes.

accessor_dt.TimedeltaAccessor(obj)

Access Timedelta fields for DataArrays with Timedelta-like dtypes.

accessor_str.StringAccessor(obj)

Vectorized string functions for string-like arrays.
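
The accessors are reached through the .dt and .str attributes of a DataArray; a minimal sketch on invented data:

```python
import xarray as xr

times = xr.DataArray(
    xr.date_range("2000-01-01", periods=3, freq="D"), dims="time"
)
years = times.dt.year  # datetime components via the .dt accessor

names = xr.DataArray(["a", "B", "c"], dims="x")
upper = names.str.upper()  # vectorized string methods via .str
```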

Custom Indexes#

CFTimeIndex(data[, name])

Custom Index for working with CF calendars and dates.

Creating custom indexes#

cftime_range([start, end, periods, freq, ...])

Return a fixed frequency CFTimeIndex.

date_range([start, end, periods, freq, tz, ...])

Return a fixed frequency datetime index.

date_range_like(source, calendar[, use_cftime])

Generate a datetime array with the same frequency, start and end as another one, but in a different calendar.
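
A minimal sketch of date_range; the start date and frequency are arbitrary:

```python
import xarray as xr

# Six month-start timestamps; passing use_cftime=True (or a non-standard
# calendar) would return a CFTimeIndex instead.
idx = xr.date_range(start="2000-01-01", periods=6, freq="MS")
```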

Tutorial#

tutorial.open_dataset(name[, cache, ...])

Open a dataset from the online repository (requires internet).

tutorial.load_dataset(*args, **kwargs)

Open, load into memory, and close a dataset from the online repository (requires internet).

Testing#

testing.assert_equal(a, b[, from_root, ...])

Like numpy.testing.assert_array_equal(), but for xarray objects.

testing.assert_identical(a, b[, from_root])

Like xarray.testing.assert_equal(), but also matches the objects' names and attributes.

testing.assert_allclose(a, b[, rtol, atol, ...])

Like numpy.testing.assert_allclose(), but for xarray objects.

testing.assert_chunks_equal(a, b)

Assert that chunksizes along chunked dimensions are equal.

Functions that test whether two DataTree objects are similar:

testing.assert_isomorphic(a, b[, from_root])

Two DataTrees are considered isomorphic if every node has the same number of children.

testing.assert_equal(a, b[, from_root, ...])

Like numpy.testing.assert_array_equal(), but for xarray objects.

testing.assert_identical(a, b[, from_root])

Like xarray.testing.assert_equal(), but also matches the objects' names and attributes.
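
A minimal sketch of the assertion helpers; the arrays are invented:

```python
import xarray as xr
from xarray.testing import assert_allclose, assert_identical

# Two objects that differ only by floating-point noise.
a = xr.DataArray([1.0, 2.0], dims="x", name="v")
b = a + 1e-12

assert_allclose(a, b)          # values match within tolerance
assert_identical(a, a.copy())  # values, name, and attributes all match
```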

Hypothesis Testing Strategies#

See the documentation page on testing for a guide on how to use these strategies.

Warning

These strategies should be considered highly experimental, and liable to change at any time.

testing.strategies.supported_dtypes()

Generates only those numpy dtypes which xarray can handle.

testing.strategies.names()

Generates arbitrary string names for dimensions / variables.

testing.strategies.dimension_names(*[, ...])

Generates an arbitrary list of valid dimension names.

testing.strategies.dimension_sizes(*[, ...])

Generates an arbitrary mapping from dimension names to lengths.

testing.strategies.attrs()

Generates arbitrary valid attributes dictionaries for xarray objects.

testing.strategies.variables(*[, ...])

Generates arbitrary xarray.Variable objects.

testing.strategies.unique_subset_of(objs, *)

Return a strategy which generates a unique subset of the given objects.

Exceptions#

MergeError

Error class for merge failures due to incompatible arguments.

SerializationWarning

Warnings about encoding/decoding issues in serialization.

DataTree#

Exceptions raised when manipulating trees.

xarray.TreeIsomorphismError

Error raised if two tree objects do not share the same node structure.

xarray.InvalidTreeError

Raised when user attempts to create an invalid tree in some way.

xarray.NotFoundInTreeError

Raised when operation can't be completed because one node is not part of the expected tree.

Advanced API#

Coordinates([coords, indexes])

Dictionary like container for Xarray coordinates (variables + indexes).

Dataset.variables

Low level interface to Dataset contents as dict of Variable objects.

DataArray.variable

Low level interface to the Variable object for this DataArray.

DataTree.variables

Low level interface to node contents as dict of Variable objects.

Variable(dims, data[, attrs, encoding, fastpath])

A netcdf-like variable consisting of dimensions, data and attributes which describe a single Array.

IndexVariable(dims, data[, attrs, encoding, ...])

Wrapper for accommodating a pandas.Index in an xarray.Variable.

as_variable(obj[, name, auto_convert])

Convert an object into a Variable.
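
A minimal sketch of the low-level Variable interface; the data and attributes are invented:

```python
import numpy as np
import xarray as xr

# Variable is the low-level building block underlying DataArray and
# Dataset: dimensions, data, and attributes, with no coordinates attached.
v = xr.Variable(dims=("x",), data=np.arange(3.0), attrs={"units": "m"})

# as_variable coerces (dims, data) tuples into Variable objects.
w = xr.as_variable(("x", [1.0, 2.0, 3.0]))
```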

Index()

Base class inherited by all xarray-compatible indexes.

IndexSelResult(dim_indexers[, indexes, ...])

Index query results.

Context(func)

Object carrying the information of a call.

register_dataset_accessor(name)

Register a custom property on xarray.Dataset objects.

register_dataarray_accessor(name)

Register a custom accessor on xarray.DataArray objects.

register_datatree_accessor(name)

Register a custom accessor on DataTree objects.
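
A sketch of accessor registration; the "geo" name and the center property are illustrative, not part of xarray:

```python
import xarray as xr

# Hypothetical "geo" accessor attached to every Dataset via the
# registration decorator.
@xr.register_dataset_accessor("geo")
class GeoAccessor:
    def __init__(self, ds):
        self._ds = ds

    @property
    def center(self):
        """Mean (lat, lon) of the dataset's points."""
        return (float(self._ds["lat"].mean()), float(self._ds["lon"].mean()))

ds = xr.Dataset({"lat": ("p", [10.0, 20.0]), "lon": ("p", [0.0, 4.0])})
center = ds.geo.center
```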

Dataset.set_close(close)

Register the function that releases any resources linked to this object.

backends.BackendArray()

backends.BackendEntrypoint()

BackendEntrypoint is the main interface for backend plugins; see BackendEntrypoint subclassing.

backends.list_engines()

Return a dictionary of available engines and their BackendEntrypoint objects.

backends.refresh_engines()

Refreshes the backend engines based on installed packages.


Default, pandas-backed indexes built into Xarray:

indexes.PandasIndex

indexes.PandasMultiIndex

These backends provide a low-level interface for lazily loading data from external file formats or protocols, and can be manually invoked to create arguments for the load_store and dump_to_store Dataset methods:

backends.NetCDF4DataStore(manager[, group, ...])

Store for reading and writing data via the Python-NetCDF4 library.

backends.H5NetCDFStore(manager[, group, ...])

Store for reading and writing data via h5netcdf.

backends.PydapDataStore(ds)

Store for accessing OpenDAP datasets with pydap.

backends.ScipyDataStore(filename_or_obj[, ...])

Store for reading and writing data via scipy.io.netcdf.

backends.ZarrStore(zarr_group[, mode, ...])

Store for reading and writing data via zarr.

backends.FileManager()

Manager for acquiring and closing a file object.

backends.CachingFileManager(opener, *args[, ...])

Wrapper for automatically opening and closing file objects.

backends.DummyFileManager(value)

FileManager that simply wraps an open file in the FileManager interface.

These BackendEntrypoints provide a basic interface to the most commonly used filetypes in the xarray universe.

backends.NetCDF4BackendEntrypoint()

Backend for netCDF files based on the netCDF4 package.

backends.H5netcdfBackendEntrypoint()

Backend for netCDF files based on the h5netcdf package.

backends.PydapBackendEntrypoint()

Backend for streaming datasets over the internet using the Data Access Protocol (also known as DODS or OPeNDAP), based on the pydap package.

backends.ScipyBackendEntrypoint()

Backend for netCDF files based on the scipy package.

backends.StoreBackendEntrypoint()

backends.ZarrBackendEntrypoint()

Backend for ".zarr" files based on the zarr package.
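
A minimal sketch of a custom read-only backend; the class and the constant data it returns are invented for illustration, not a real file format:

```python
import numpy as np
import xarray as xr
from xarray.backends import BackendEntrypoint

class FortyTwoBackendEntrypoint(BackendEntrypoint):
    def open_dataset(self, filename_or_obj, *, drop_variables=None):
        # A real backend would lazily read filename_or_obj here.
        return xr.Dataset({"data": ("x", np.full(3, 42.0))})

    def guess_can_open(self, filename_or_obj):
        return False

# A BackendEntrypoint subclass can be passed directly as the engine.
ds = xr.open_dataset("ignored.bin", engine=FortyTwoBackendEntrypoint)
```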

Deprecated / Pending Deprecation#

Dataset.drop([labels, dim, errors])

Backward-compatible method based on drop_vars and drop_sel.

DataArray.drop([labels, dim, errors])

Backward-compatible method based on drop_vars and drop_sel.

Dataset.apply(func[, keep_attrs, args])

Backward-compatible implementation of map.

core.groupby.DataArrayGroupBy.apply(func[, ...])

Backward-compatible implementation of map.

core.groupby.DatasetGroupBy.apply(func[, ...])

Backward-compatible implementation of map.

DataArray.dt.weekofyear

The week ordinal of the year.

DataArray.dt.week

The week ordinal of the year.