Creation of a file for Blockwise #700

Open · wants to merge 2 commits into base: main
2 changes: 1 addition & 1 deletion doc/source/api.md
@@ -207,7 +207,7 @@ To build and pass your coregistration pipeline to {func}`~xdem.DEM.coregister_3d

coreg.Coreg
coreg.CoregPipeline
-coreg.BlockwiseCoreg
+coreg.blockwise.BlockwiseCoreg
```

#### Fitting and applying transforms
2 changes: 1 addition & 1 deletion doc/source/biascorr.md
@@ -108,7 +108,7 @@ to {func}`~xdem.coreg.Coreg.fit` and {func}`~xdem.coreg.Coreg.apply`. See {ref}

Each bias-correction method in xDEM inherits their interface from the {class}`~xdem.coreg.Coreg` class (see {ref}`coreg_object`).
This implies that bias-correction methods can be combined in a {class}`~xdem.coreg.CoregPipeline` with any other methods, or
-applied in a block-wise manner through {class}`~xdem.coreg.BlockwiseCoreg`.
+applied in a block-wise manner through {class}`~xdem.coreg.blockwise.BlockwiseCoreg`.
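The shared fit/apply interface is what makes this composition possible. As a minimal pure-Python sketch of the idea (toy classes, not xdem's actual class hierarchy): any step exposing `fit` and `apply` can be chained in a pipeline, with each step fitted on the residual of the previous one.

```python
import numpy as np

class VerticalShiftSketch:
    """Toy stand-in for a coreg step: estimates and removes a median vertical shift."""

    def fit(self, ref: np.ndarray, tba: np.ndarray) -> "VerticalShiftSketch":
        self.shift = float(np.median(ref - tba))
        return self

    def apply(self, tba: np.ndarray) -> np.ndarray:
        return tba + self.shift

class PipelineSketch:
    """Runs steps in sequence; each step is fitted on the output of the previous."""

    def __init__(self, steps):
        self.steps = steps

    def fit(self, ref: np.ndarray, tba: np.ndarray) -> "PipelineSketch":
        current = tba
        for step in self.steps:
            current = step.fit(ref, current).apply(current)
        return self

    def apply(self, tba: np.ndarray) -> np.ndarray:
        for step in self.steps:
            tba = step.apply(tba)
        return tba

# A flat 3 m offset is recovered exactly by a single vertical-shift step.
ref = np.full((4, 4), 10.0)
tba = np.full((4, 4), 7.0)
aligned = PipelineSketch([VerticalShiftSketch()]).fit(ref, tba).apply(tba)
assert np.allclose(aligned, ref)
```

Because a block-wise wrapper only needs the same two methods, it can hold any such step (or pipeline of steps) and fit it independently per block.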

**Inheritance diagram of co-registration and bias corrections:**

8 changes: 4 additions & 4 deletions doc/source/coregistration.md
@@ -496,18 +496,18 @@ for {class}`xdem.coreg.DirectionalBias`, an input `angle` to define the angle at

## Dividing coregistration in blocks

-### The {class}`~xdem.coreg.BlockwiseCoreg` object
+### The {class}`~xdem.coreg.blockwise.BlockwiseCoreg` object

```{caution}
-The {class}`~xdem.coreg.BlockwiseCoreg` feature is still experimental: it might not support all coregistration
+The {class}`~xdem.coreg.blockwise.BlockwiseCoreg` feature is still experimental: it might not support all coregistration
methods, and create edge artefacts.
```

Sometimes, we want to split a coregistration across different spatial subsets of an elevation dataset, running that
-method independently in each subset. A {class}`~xdem.coreg.BlockwiseCoreg` can be constructed for this:
+method independently in each subset. A {class}`~xdem.coreg.blockwise.BlockwiseCoreg` can be constructed for this:

```{code-cell} ipython3
-blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=16)
+blockwise = xdem.coreg.blockwise.BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=16)
```
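As a rough numpy-only sketch of what the `subdivision` parameter implies (illustrative only, not xdem's internal implementation): a subdivision of 16 splits the raster extent into a 4×4 grid of equally sized blocks, and every pixel belongs to exactly one block.

```python
import numpy as np

def subdivide(shape: tuple, subdivision: int) -> np.ndarray:
    """Label each pixel of a raster of the given shape with a block index,
    for an equal-length block division (illustrative sketch only)."""
    rows = int(np.sqrt(subdivision))   # blocks along y (assumes a square number)
    cols = subdivision // rows         # blocks along x
    yy, xx = np.indices(shape)
    # Integer division maps pixel coordinates to block row/column indices.
    block_y = np.minimum(yy * rows // shape[0], rows - 1)
    block_x = np.minimum(xx * cols // shape[1], cols - 1)
    return block_y * cols + block_x

labels = subdivide((100, 120), 16)
# 16 distinct blocks, numbered 0..15
assert np.unique(labels).size == 16
```

Each block is then fitted independently, which is why the number of estimated offsets equals the subdivision.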

The subdivision corresponds to an equal-length block division across the extent of the elevation dataset. It needs
6 changes: 3 additions & 3 deletions examples/advanced/plot_blockwise_coreg.py
@@ -5,7 +5,7 @@
Often, biases are spatially variable, and a "global" shift may not be enough to coregister a DEM properly.
In the :ref:`sphx_glr_basic_examples_plot_nuth_kaab.py` example, we saw that the method improved the alignment significantly, but there were still possibly nonlinear artefacts in the result.
Clearly, nonlinear coregistration approaches are needed.
-One solution is :class:`xdem.coreg.BlockwiseCoreg`, a helper to run any ``Coreg`` class over an arbitrarily small grid, and then "puppet warp" the DEM to fit the reference best.
+One solution is :class:`xdem.coreg.blockwise.BlockwiseCoreg`, a helper to run any ``Coreg`` class over an arbitrarily small grid, and then "puppet warp" the DEM to fit the reference best.
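To make the "puppet warp" idea concrete, here is a loose numpy-only illustration (made-up grid and shift values, not the actual xdem implementation): vertical shifts estimated at block centers are interpolated into a smooth full-resolution correction field, which can then be applied to the DEM.

```python
import numpy as np

# Hypothetical per-block vertical shifts on a 4x4 grid of block centers
# of a 100x100 DEM (values chosen to be linear, so the result is easy to check).
block_z = np.arange(16, dtype=float).reshape(4, 4)
yc = np.linspace(12.5, 87.5, 4)  # block-center row coordinates
xc = np.linspace(12.5, 87.5, 4)  # block-center column coordinates

# Separable bilinear interpolation: first each block row to full x-resolution,
# then each resulting column to full y-resolution. np.interp clamps at the
# edges, so pixels outside the outermost centers take the nearest block value.
x_full = np.arange(100, dtype=float)
y_full = np.arange(100, dtype=float)
rows = np.vstack([np.interp(x_full, xc, row) for row in block_z])          # (4, 100)
correction = np.vstack(
    [np.interp(y_full, yc, rows[:, j]) for j in range(100)]
).T                                                                        # (100, 100)
```

The smooth `correction` field matches each block's shift near its center, avoiding hard jumps at block boundaries.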

The ``BlockwiseCoreg`` class runs in five steps:

@@ -56,7 +56,7 @@
# Horizontal and vertical shifts can be estimated using :class:`xdem.coreg.NuthKaab`.
# Let's prepare a coregistration class that calculates 64 offsets, evenly spread over the DEM.

-blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=64)
+blockwise = xdem.coreg.blockwise.BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=64)


# %%
@@ -76,7 +76,7 @@
# %%
# The estimated shifts can be visualized by applying the coregistration to a completely flat surface.
# This shows the estimated shifts that would be applied in elevation; additional horizontal shifts will also be applied if the method supports it.
-# The :func:`xdem.coreg.BlockwiseCoreg.stats` method can be used to annotate each block with its associated Z shift.
+# The :func:`xdem.coreg.blockwise.BlockwiseCoreg.stats` method can be used to annotate each block with its associated Z shift.

z_correction = blockwise.apply(
np.zeros_like(dem_to_be_aligned.data), transform=dem_to_be_aligned.transform, crs=dem_to_be_aligned.crs
274 changes: 1 addition & 273 deletions tests/test_coreg/test_base.py
@@ -19,9 +19,8 @@
from scipy.ndimage import binary_dilation

import xdem
-from xdem import coreg, examples, misc, spatialstats
+from xdem import coreg, examples
from xdem._typing import NDArrayf
-from xdem.coreg import BlockwiseCoreg
from xdem.coreg.base import Coreg, apply_matrix, dict_key_to_str


@@ -847,143 +846,6 @@ def test_pipeline_consistency(self) -> None:
assert np.allclose(nk_vshift.to_matrix(), vshift_nk.to_matrix(), atol=10e-1)


class TestBlockwiseCoreg:
ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
inlier_mask = ~outlines.create_mask(ref)

fit_params = dict(
reference_elev=ref.data,
to_be_aligned_elev=tba.data,
inlier_mask=inlier_mask,
transform=ref.transform,
crs=ref.crs,
)
# Create some 3D coordinates with Z coordinates being 0 to try the apply functions.
points_arr = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [0, 0, 0, 0]], dtype="float64").T
points = gpd.GeoDataFrame(
geometry=gpd.points_from_xy(x=points_arr[:, 0], y=points_arr[:, 1], crs=ref.crs), data={"z": points_arr[:, 2]}
)

@pytest.mark.parametrize(
"pipeline", [coreg.VerticalShift(), coreg.VerticalShift() + coreg.NuthKaab()]
) # type: ignore
@pytest.mark.parametrize("subdivision", [4, 10]) # type: ignore
def test_blockwise_coreg(self, pipeline: Coreg, subdivision: int) -> None:

blockwise = coreg.BlockwiseCoreg(step=pipeline, subdivision=subdivision)

# Results can not yet be extracted (since fit has not been called) and should raise an error
with pytest.raises(AssertionError, match="No coreg results exist.*"):
blockwise.to_points()

blockwise.fit(**self.fit_params)
points = blockwise.to_points()

# Validate that the number of points is equal to the amount of subdivisions.
assert points.shape[0] == subdivision

# Validate that the points do not represent only the same location.
assert np.sum(np.linalg.norm(points[:, :, 0] - points[:, :, 1], axis=1)) != 0.0

z_diff = points[:, 2, 1] - points[:, 2, 0]

# Validate that all values are different
assert np.unique(z_diff).size == z_diff.size, "Each coreg cell should have different results."

# Validate that the BlockwiseCoreg doesn't accept uninstantiated Coreg classes
with pytest.raises(ValueError, match="instantiated Coreg subclass"):
coreg.BlockwiseCoreg(step=coreg.VerticalShift, subdivision=1) # type: ignore

# Metadata copying has been an issue. Validate that all chunks have unique ids
chunk_numbers = [m["i"] for m in blockwise.meta["step_meta"]]
assert np.unique(chunk_numbers).shape[0] == len(chunk_numbers)

transformed_dem = blockwise.apply(self.tba)

ddem_pre = (self.ref - self.tba)[~self.inlier_mask]
ddem_post = (self.ref - transformed_dem)[~self.inlier_mask]

# Check that the periglacial difference is lower after coregistration.
assert abs(np.ma.median(ddem_post)) < abs(np.ma.median(ddem_pre))

stats = blockwise.stats()

# Check that nans don't exist (if they do, something has gone very wrong)
assert np.all(np.isfinite(stats["nmad"]))
# Check that offsets were actually calculated.
assert np.sum(np.abs(np.linalg.norm(stats[["x_off", "y_off", "z_off"]], axis=0))) > 0

def test_blockwise_coreg_large_gaps(self) -> None:
"""Test BlockwiseCoreg when large gaps are encountered, e.g. around the frame of a rotated DEM."""
reference_dem = self.ref.reproject(crs="EPSG:3413", res=self.ref.res, resampling="bilinear")
dem_to_be_aligned = self.tba.reproject(ref=reference_dem, resampling="bilinear")

blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), 64, warn_failures=False)

# This should not fail or trigger warnings as warn_failures is False
blockwise.fit(reference_dem, dem_to_be_aligned)

stats = blockwise.stats()

# We expect holes in the blockwise coregistration, but not in stats due to nan padding for failing chunks
assert stats.shape[0] == 64

# Copy the TBA DEM and set a square portion to nodata
tba = self.tba.copy()
mask = np.zeros(np.shape(tba.data), dtype=bool)
mask[450:500, 450:500] = True
tba.set_mask(mask=mask)

blockwise = xdem.coreg.BlockwiseCoreg(xdem.coreg.NuthKaab(), 8, warn_failures=False)

# Align the DEM and apply blockwise to a zero-array (to get the z_shift)
aligned = blockwise.fit(self.ref, tba).apply(tba)
zshift, _ = blockwise.apply(np.zeros_like(tba.data), transform=tba.transform, crs=tba.crs)

# Validate that the zshift is not something crazy high and that no negative values exist in the data.
assert np.nanmax(np.abs(zshift)) < 50
assert np.count_nonzero(aligned.data.compressed() < -50) == 0

# Check that coregistration improved the alignment
ddem_post = (aligned - self.ref).data.compressed()
ddem_pre = (tba - self.ref).data.compressed()
assert abs(np.nanmedian(ddem_pre)) > abs(np.nanmedian(ddem_post))
# assert np.nanstd(ddem_pre) > np.nanstd(ddem_post)

def test_failed_chunks_return_nan(self) -> None:
blockwise = BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=4)
blockwise.fit(**self.fit_params)
# Missing chunk 1 to simulate failure
blockwise._meta["step_meta"] = [meta for meta in blockwise._meta["step_meta"] if meta.get("i") != 1]

result_df = blockwise.stats()

# Check that chunk 1 (index 1) has NaN values for the statistics
assert np.isnan(result_df.loc[1, "inlier_count"])
assert np.isnan(result_df.loc[1, "nmad"])
assert np.isnan(result_df.loc[1, "median"])
assert isinstance(result_df.loc[1, "center_x"], float)
assert isinstance(result_df.loc[1, "center_y"], float)
assert np.isnan(result_df.loc[1, "center_z"])
assert np.isnan(result_df.loc[1, "x_off"])
assert np.isnan(result_df.loc[1, "y_off"])
assert np.isnan(result_df.loc[1, "z_off"])

def test_successful_chunks_return_values(self) -> None:
blockwise = BlockwiseCoreg(xdem.coreg.NuthKaab(), subdivision=2)
blockwise.fit(**self.fit_params)
result_df = blockwise.stats()

# Check that the correct statistics are returned for successful chunks
assert result_df.loc[0, "inlier_count"] == blockwise._meta["step_meta"][0]["inlier_count"]
assert result_df.loc[0, "nmad"] == blockwise._meta["step_meta"][0]["nmad"]
assert result_df.loc[0, "median"] == blockwise._meta["step_meta"][0]["median"]

assert result_df.loc[1, "inlier_count"] == blockwise._meta["step_meta"][1]["inlier_count"]
assert result_df.loc[1, "nmad"] == blockwise._meta["step_meta"][1]["nmad"]
assert result_df.loc[1, "median"] == blockwise._meta["step_meta"][1]["median"]


class TestAffineManipulation:

ref, tba, outlines = load_examples() # Load example reference, to-be-aligned and mask.
@@ -1147,137 +1009,3 @@ def test_apply_matrix__raster_realdata(self) -> None:
diff_it_gd = z_points_gd[valids] - z_points_it[valids]
assert np.percentile(np.abs(diff_it_gd), 99) < 1 # 99% of values are within a meter (instead of 90%)
assert np.percentile(np.abs(diff_it_gd), 50) < 0.02 # 10 times more precise than above


def test_warp_dem() -> None:
"""Test that the warp_dem function works expectedly."""

small_dem = np.zeros((5, 10), dtype="float32")
small_transform = rio.transform.from_origin(0, 5, 1, 1)

source_coords = np.array([[0, 0, 0], [0, 5, 0], [10, 0, 0], [10, 5, 0]]).astype(small_dem.dtype)

dest_coords = source_coords.copy()
dest_coords[0, 0] = -1e-5

warped_dem = coreg.base.warp_dem(
dem=small_dem,
transform=small_transform,
source_coords=source_coords,
destination_coords=dest_coords,
resampling="linear",
trim_border=False,
)
assert np.nansum(np.abs(warped_dem - small_dem)) < 1e-6

elev_shift = 5.0
dest_coords[1, 2] = elev_shift
warped_dem = coreg.base.warp_dem(
dem=small_dem,
transform=small_transform,
source_coords=source_coords,
destination_coords=dest_coords,
resampling="linear",
)

# The warped DEM should have the value 'elev_shift' in the upper left corner.
assert warped_dem[0, 0] == -elev_shift
# The corner should be zero, so the corner pixel (represents the corner minus resolution / 2) should be close.
# We select the pixel before the corner (-2 in X-axis) to avoid the NaN propagation on the bottom row.
assert warped_dem[-2, -1] < 1

# Synthesise some X/Y/Z coordinates on the DEM.
source_coords = np.array(
[
[0, 0, 200],
[480, 20, 200],
[460, 480, 200],
[10, 460, 200],
[250, 250, 200],
]
)

# Copy the source coordinates and apply some shifts
dest_coords = source_coords.copy()
# Apply in the X direction
dest_coords[0, 0] += 20
dest_coords[1, 0] += 7
dest_coords[2, 0] += 10
dest_coords[3, 0] += 5

# Apply in the Y direction
dest_coords[4, 1] += 5

# Apply in the Z direction
dest_coords[3, 2] += 5
test_shift = 6 # This shift will be validated below
dest_coords[4, 2] += test_shift

# Generate a semi-random DEM
transform = rio.transform.from_origin(0, 500, 1, 1)
shape = (500, 550)
dem = misc.generate_random_field(shape, 100) * 200 + misc.generate_random_field(shape, 10) * 50

# Warp the DEM using the source-destination coordinates.
transformed_dem = coreg.base.warp_dem(
dem=dem, transform=transform, source_coords=source_coords, destination_coords=dest_coords, resampling="linear"
)

# Try to undo the warp by reversing the source-destination coordinates.
untransformed_dem = coreg.base.warp_dem(
dem=transformed_dem,
transform=transform,
source_coords=dest_coords,
destination_coords=source_coords,
resampling="linear",
)
# Validate that the DEM is now more or less the same as the original.
# Due to the randomness, the threshold is quite high, but would be something like 10+ if it was incorrect.
assert spatialstats.nmad(dem - untransformed_dem) < 0.5

# Test with Z-correction disabled
transformed_dem_no_z = coreg.base.warp_dem(
dem=dem,
transform=transform,
source_coords=source_coords,
destination_coords=dest_coords,
resampling="linear",
apply_z_correction=False,
)

# Try to undo the warp by reversing the source-destination coordinates with Z-correction disabled
untransformed_dem_no_z = coreg.base.warp_dem(
dem=transformed_dem_no_z,
transform=transform,
source_coords=dest_coords,
destination_coords=source_coords,
resampling="linear",
apply_z_correction=False,
)

# Validate that the DEM is now more or less the same as the original, with Z-correction disabled.
# The result should be similar to the original, but with no Z-shift applied.
assert spatialstats.nmad(dem - untransformed_dem_no_z) < 0.5

# The difference between the two DEMs should be the vertical shift.
# We expect the difference to be approximately equal to the average vertical shift.
expected_vshift = np.mean(dest_coords[:, 2] - source_coords[:, 2])

# Check that the mean difference between the DEMs matches the expected vertical shift.
assert np.nanmean(transformed_dem_no_z - transformed_dem) == pytest.approx(expected_vshift, rel=0.3)

if False:
import matplotlib.pyplot as plt

plt.figure(dpi=200)
plt.subplot(141)

plt.imshow(dem, vmin=0, vmax=300)
plt.subplot(142)
plt.imshow(transformed_dem, vmin=0, vmax=300)
plt.subplot(143)
plt.imshow(untransformed_dem, vmin=0, vmax=300)

plt.subplot(144)
plt.imshow(dem - untransformed_dem, cmap="coolwarm_r", vmin=-10, vmax=10)
plt.show()