# Combining data#

• For combining datasets or data arrays along a single dimension, see Concatenate.

• For combining datasets with different variables, see Merge.

• For combining datasets or data arrays with different indexes or missing values, see Combine.

• For combining datasets or data arrays along multiple dimensions, see Combining along multiple dimensions.

## Concatenate#

To combine arrays along a single dimension, use `concat`. `concat` takes an iterable of `DataArray` or `Dataset` objects, as well as a dimension name, and concatenates along that dimension:

```In : da = xr.DataArray(
...:     np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
...: )
...:

In : da.isel(y=slice(0, 1))  # same as da[:, :1]
Out:
<xarray.DataArray (x: 2, y: 1)>
array([[0],
[3]])
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10

# This resembles how you would use np.concatenate:
In : xr.concat([da[:, :1], da[:, 1:]], dim="y")
Out:
<xarray.DataArray (x: 2, y: 3)>
array([[0, 1, 2],
[3, 4, 5]])
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30

# For more friendly pandas-like indexing you can use:
In : xr.concat([da.isel(y=slice(0, 1)), da.isel(y=slice(1, None))], dim="y")
Out:
<xarray.DataArray (x: 2, y: 3)>
array([[0, 1, 2],
[3, 4, 5]])
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
```

In addition to combining along an existing dimension, `concat` can create a new dimension by stacking lower dimensional arrays together:

```In : da.sel(x="a")
Out:
<xarray.DataArray (y: 3)>
array([0, 1, 2])
Coordinates:
x        <U1 'a'
* y        (y) int64 10 20 30

In : xr.concat([da.isel(x=0), da.isel(x=1)], "x")
Out:
<xarray.DataArray (x: 2, y: 3)>
array([[0, 1, 2],
[3, 4, 5]])
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
```

If the second argument to `concat` is a new dimension name, the arrays will be concatenated along that new dimension, which is always inserted as the first dimension:

```In : xr.concat([da.isel(x=0), da.isel(x=1)], "new_dim")
Out:
<xarray.DataArray (new_dim: 2, y: 3)>
array([[0, 1, 2],
[3, 4, 5]])
Coordinates:
x        (new_dim) <U1 'a' 'b'
* y        (y) int64 10 20 30
Dimensions without coordinates: new_dim
```

The second argument to `concat` can also be an `Index` or `DataArray` object as well as a string, in which case it is used to label the values along the new dimension:

```In : xr.concat([da.isel(x=0), da.isel(x=1)], pd.Index([-90, -100], name="new_dim"))
Out:
<xarray.DataArray (new_dim: 2, y: 3)>
array([[0, 1, 2],
[3, 4, 5]])
Coordinates:
x        (new_dim) <U1 'a' 'b'
* y        (y) int64 10 20 30
* new_dim  (new_dim) int64 -90 -100
```

Of course, `concat` also works on `Dataset` objects:

```In : ds = da.to_dataset(name="foo")

In : xr.concat([ds.sel(x="a"), ds.sel(x="b")], "x")
Out:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
Data variables:
foo      (x, y) int64 0 1 2 3 4 5
```

`concat()` has a number of options which provide deeper control over which variables are concatenated and how it handles conflicting variables between datasets. With the default parameters, xarray will load some coordinate variables into memory to compare them between datasets. This may be prohibitively expensive if you are manipulating your dataset lazily using Parallel computing with Dask.
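As a sketch of these options (reusing the `da` array defined above), `coords="minimal"` restricts which coordinate variables are concatenated, and `compat="override"` skips the equality comparison entirely by taking potentially conflicting variables from the first object; this combination is one way to avoid eagerly loading coordinate data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3), [("x", ["a", "b"]), ("y", [10, 20, 30])]
)

# coords="minimal": only concatenate coordinates that actually contain
# the concatenation dimension; compat="override": do not compare the
# remaining variables, just take them from the first object.
result = xr.concat(
    [da.isel(y=slice(0, 1)), da.isel(y=slice(1, None))],
    dim="y",
    coords="minimal",
    compat="override",
)
print(result.values)  # [[0 1 2]
                      #  [3 4 5]]
```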

## Merge#

To combine variables and coordinates between multiple `DataArray` and/or `Dataset` objects, use `merge()`. It can merge a list of `Dataset`, `DataArray` or dictionaries of objects convertible to `DataArray` objects:

```In : xr.merge([ds, ds.rename({"foo": "bar"})])
Out:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
Data variables:
foo      (x, y) int64 0 1 2 3 4 5
bar      (x, y) int64 0 1 2 3 4 5

In : xr.merge([xr.DataArray(n, name="var%d" % n) for n in range(5)])
Out:
<xarray.Dataset>
Dimensions:  ()
Data variables:
var0     int64 0
var1     int64 1
var2     int64 2
var3     int64 3
var4     int64 4
```

If you merge another dataset (or a dictionary including data array objects), by default the resulting dataset will be aligned on the union of all index coordinates:

```In : other = xr.Dataset({"bar": ("x", [1, 2, 3, 4]), "x": list("abcd")})

In : xr.merge([ds, other])
Out:
<xarray.Dataset>
Dimensions:  (x: 4, y: 3)
Coordinates:
* x        (x) <U1 'a' 'b' 'c' 'd'
* y        (y) int64 10 20 30
Data variables:
foo      (x, y) float64 0.0 1.0 2.0 3.0 4.0 5.0 nan nan nan nan nan nan
bar      (x) int64 1 2 3 4
```

This ensures that `merge` is non-destructive. `xarray.MergeError` is raised if you attempt to merge two variables with the same name but different values:

```In : xr.merge([ds, ds + 1])
MergeError: conflicting values for variable 'foo' on objects to be combined:
first value: <xarray.Variable (x: 2, y: 3)>
array([[ 0.4691123 , -0.28286334, -1.5090585 ],
[-1.13563237,  1.21211203, -0.17321465]])
second value: <xarray.Variable (x: 2, y: 3)>
array([[ 1.4691123 ,  0.71713666, -0.5090585 ],
[-0.13563237,  2.21211203,  0.82678535]])
```

The same non-destructive merging between `DataArray` index coordinates is used in the `Dataset` constructor:

```In : xr.Dataset({"a": da.isel(x=slice(0, 1)), "b": da.isel(x=slice(1, 2))})
Out:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
Data variables:
a        (x, y) float64 0.0 1.0 2.0 nan nan nan
b        (x, y) float64 nan nan nan 3.0 4.0 5.0
```

## Combine#

The instance method `combine_first()` combines two datasets/data arrays, preferring non-null values in the calling object and using values from the object passed as the argument to fill holes. The resulting coordinates are the union of the coordinate labels. Cells left vacant by the outer join are filled with `NaN`. For example:

```In : ar0 = xr.DataArray([[0, 0], [0, 0]], [("x", ["a", "b"]), ("y", [-1, 0])])

In : ar1 = xr.DataArray([[1, 1], [1, 1]], [("x", ["b", "c"]), ("y", [0, 1])])

In : ar0.combine_first(ar1)
Out:
<xarray.DataArray (x: 3, y: 3)>
array([[ 0.,  0., nan],
[ 0.,  0.,  1.],
[nan,  1.,  1.]])
Coordinates:
* x        (x) <U1 'a' 'b' 'c'
* y        (y) int64 -1 0 1

In : ar1.combine_first(ar0)
Out:
<xarray.DataArray (x: 3, y: 3)>
array([[ 0.,  0., nan],
[ 0.,  1.,  1.],
[nan,  1.,  1.]])
Coordinates:
* x        (x) <U1 'a' 'b' 'c'
* y        (y) int64 -1 0 1
```

For datasets, `ds0.combine_first(ds1)` works similarly to `xr.merge([ds0, ds1])`, except that `xr.merge` raises `MergeError` when there are conflicting values in variables to be merged, whereas `.combine_first` defaults to the calling object’s values.
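A minimal sketch of this difference, using two small hypothetical datasets that disagree at a single shared label:

```python
import xarray as xr

ds0 = xr.Dataset({"a": ("x", [1.0, 2.0])}, {"x": [0, 1]})
ds1 = xr.Dataset({"a": ("x", [10.0, 20.0])}, {"x": [1, 2]})

# merge raises, because "a" has conflicting non-NaN values at x=1 ...
try:
    xr.merge([ds0, ds1])
except xr.MergeError:
    print("merge raised MergeError")

# ... while combine_first keeps the calling object's value there:
combined = ds0.combine_first(ds1)
print(combined["a"].values)  # [ 1.  2. 20.]
```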

## Update#

In contrast to `merge`, `update()` modifies a dataset in-place without checking for conflicts, and will overwrite any existing variables with new values:

```In : ds.update({"space": ("space", [10.2, 9.4, 3.9])})
Out:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3, space: 3)
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
* space    (space) float64 10.2 9.4 3.9
Data variables:
foo      (x, y) int64 0 1 2 3 4 5
```

However, dimensions are still required to be consistent between different Dataset variables, so you cannot change the size of a dimension unless you replace all dataset variables that use it.
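For example (a minimal sketch with a hypothetical dimension `t` that has no index coordinate), adding a variable whose size disagrees with an existing dimension fails:

```python
import xarray as xr

ds = xr.Dataset({"u": ("t", [1, 2, 3])})  # dimension "t" has size 3

# A new variable must agree on the size of existing dimensions;
# xarray raises a ValueError about conflicting sizes for "t".
error = None
try:
    ds.update({"v": ("t", [1, 2])})
except ValueError as exc:
    error = exc
print(error)
```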

`update` also performs automatic alignment if necessary. Unlike `merge`, it maintains the alignment of the original array instead of merging indexes:

```In : ds.update(other)
Out:
<xarray.Dataset>
Dimensions:  (x: 2, y: 3, space: 3)
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
* space    (space) float64 10.2 9.4 3.9
Data variables:
foo      (x, y) int64 0 1 2 3 4 5
bar      (x) int64 1 2
```

The exact same alignment logic is used when setting a variable with `__setitem__` syntax:

```In : ds["baz"] = xr.DataArray([9, 9, 9, 9, 9], coords=[("x", list("abcde"))])

In : ds.baz
Out:
<xarray.DataArray 'baz' (x: 2)>
array([9, 9])
Coordinates:
* x        (x) <U1 'a' 'b'
```

## Equals and identical#

Xarray objects can be compared by using the `equals()`, `identical()` and `broadcast_equals()` methods. These methods are used by the optional `compat` argument on `concat` and `merge`.

`equals` checks dimension names, indexes and array values:

```In : da.equals(da.copy())
Out: True
```

`identical` also checks attributes, and the name of each object:

```In : da.identical(da.rename("bar"))
Out: False
```

`broadcast_equals` does a more relaxed form of equality check that allows variables to have different dimensions, as long as values are constant along those new dimensions:

```In : left = xr.Dataset(coords={"x": 0})

In : right = xr.Dataset({"x": [0, 0, 0]})

In : left.broadcast_equals(right)
Out: True
```

Like pandas objects, two xarray objects are still equal or identical if they have missing values marked by `NaN` in the same locations.

In contrast, the `==` operation performs element-wise comparison (like numpy):

```In : da == da.copy()
Out:
<xarray.DataArray (x: 2, y: 3)>
array([[ True,  True,  True],
[ True,  True,  True]])
Coordinates:
* x        (x) <U1 'a' 'b'
* y        (y) int64 10 20 30
```

Note that `NaN` does not compare equal to `NaN` in element-wise comparison; you may need to deal with missing values explicitly.
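A short sketch contrasting the two behaviours on a hypothetical array containing a `NaN`:

```python
import numpy as np
import xarray as xr

a = xr.DataArray([1.0, np.nan, 3.0], dims="x")
b = xr.DataArray([1.0, np.nan, 3.0], dims="x")

# equals() treats NaNs in matching locations as equal ...
print(a.equals(b))  # True

# ... but element-wise == follows numpy, where NaN != NaN:
print((a == b).values)  # [ True False  True]
```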

## Merging with ‘no_conflicts’#

The `compat` argument `'no_conflicts'` is only available when combining xarray objects with `merge`. In addition to the comparison methods above, it allows merging xarray objects that have `NaN` values at locations where the other object has data. This can be used to combine data with overlapping coordinates as long as all non-missing values agree or are disjoint:

```In : ds1 = xr.Dataset({"a": ("x", [10, 20, 30, np.nan])}, {"x": [1, 2, 3, 4]})

In : ds2 = xr.Dataset({"a": ("x", [np.nan, 30, 40, 50])}, {"x": [2, 3, 4, 5]})

In : xr.merge([ds1, ds2], compat="no_conflicts")
Out:
<xarray.Dataset>
Dimensions:  (x: 5)
Coordinates:
* x        (x) int64 1 2 3 4 5
Data variables:
a        (x) float64 10.0 20.0 30.0 40.0 50.0
```

Note that due to the underlying representation of missing values as floating point numbers (`NaN`), variable data type is not always preserved when merging in this manner.
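A minimal sketch of this dtype change, using two hypothetical integer datasets whose overlapping values agree:

```python
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1, 2])}, {"x": [0, 1]})
ds2 = xr.Dataset({"a": ("x", [2, 3])}, {"x": [1, 2]})

# The outer join pads each input with NaN before combining,
# so the integer inputs come out as floats.
merged = xr.merge([ds1, ds2], compat="no_conflicts")
print(ds1["a"].dtype)     # int64 (on most platforms)
print(merged["a"].dtype)  # float64
```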

## Combining along multiple dimensions#

For combining many objects along multiple dimensions xarray provides `combine_nested()` and `combine_by_coords()`. These functions use a combination of `concat` and `merge` across different variables to combine many objects into one.

`combine_nested()` requires specifying the order in which the objects should be combined, while `combine_by_coords()` attempts to infer this ordering automatically from the coordinates in the data.

`combine_nested()` is useful when you know the spatial relationship between each object in advance. The datasets must be provided in the form of a nested list, which specifies their relative position and ordering. A common task is collecting data from a parallelized simulation where each processor wrote out data to a separate file. A domain which was decomposed into 4 parts, 2 along each of the x and y axes, requires organising the datasets into a doubly-nested list, e.g.:

```In : arr = xr.DataArray(
....:     name="temperature", data=np.random.randint(5, size=(2, 2)), dims=["x", "y"]
....: )
....:

In : arr
Out:
<xarray.DataArray 'temperature' (x: 2, y: 2)>
array([[1, 2],
[2, 1]])
Dimensions without coordinates: x, y

In : ds_grid = [[arr, arr], [arr, arr]]

In : xr.combine_nested(ds_grid, concat_dim=["x", "y"])
Out:
<xarray.DataArray 'temperature' (x: 4, y: 4)>
array([[1, 2, 1, 2],
[2, 1, 2, 1],
[1, 2, 1, 2],
[2, 1, 2, 1]])
Dimensions without coordinates: x, y
```

`combine_nested()` can also be used to explicitly merge datasets with different variables. For example if we have 4 datasets, which are divided along two times, and contain two different variables, we can pass `None` to `'concat_dim'` to specify the dimension of the nested list over which we wish to use `merge` instead of `concat`:

```In : temp = xr.DataArray(name="temperature", data=np.random.randn(2), dims=["t"])

In : precip = xr.DataArray(name="precipitation", data=np.random.randn(2), dims=["t"])

In : ds_grid = [[temp, precip], [temp, precip]]

In : xr.combine_nested(ds_grid, concat_dim=["t", None])
Out:
<xarray.Dataset>
Dimensions:        (t: 4)
Dimensions without coordinates: t
Data variables:
temperature    (t) float64 0.4691 -0.2829 0.4691 -0.2829
precipitation  (t) float64 -1.509 -1.136 -1.509 -1.136
```

`combine_by_coords()` is for combining objects which have dimension coordinates which specify their relationship to and order relative to one another, for example a linearly-increasing ‘time’ dimension coordinate.

Here we combine two datasets using their common dimension coordinates. Notice they are concatenated in order based on the values in their dimension coordinates, not on their position in the list passed to `combine_by_coords`.

```In : x1 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [0, 1, 2])])

In : x2 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [3, 4, 5])])

In : xr.combine_by_coords([x2, x1])
Out:
<xarray.Dataset>
Dimensions:  (x: 6)
Coordinates:
* x        (x) int64 0 1 2 3 4 5
Data variables:
foo      (x) float64 1.212 -0.1732 0.1192 -1.044 -0.8618 -2.105
```

These functions can be used by `open_mfdataset()` to open many files as one dataset. The particular function used is specified by setting the argument `'combine'` to `'by_coords'` or `'nested'`. This is useful for situations where your data is split across many files in multiple locations, which have some known relationship between one another.