
[Draft] Patchwise training and inference #4

Closed
wants to merge 158 commits into from

Conversation

davidwilby
Owner

@davidwilby commented Aug 23, 2024

Just a draft PR so we can easily compare and discuss this branch outside of the project's main repo. Do not merge

TO DO

  • notebooks for
    • patchwise training
    • patchwise prediction
  • linting (see lint patchwise code #13)
  • clean up any remaining commented-out code, etc.

Hey @tom-andersson - at long last, the long-awaited patchwise training and prediction feature that @nilsleh and @MartinSJRogers have been working on.

This PR adds patching capabilities to DeepSensor during training and inference.

Training

Optional args patching_strategy, patch_size, stride and num_samples_per_date are added to TaskLoader.__call__.

There are two available patching strategies: random_window and sliding_window. The random_window strategy randomly selects points in the x1 and x2 extent as patch centroids; the number of patches is set by the num_samples_per_date argument. The sliding_window strategy starts in the top left of the dataset and slides from left to right and top to bottom over the data, using the user-defined patch_size and stride.
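The two sampling strategies can be sketched as follows. This is a minimal illustration of the idea, not the actual DeepSensor implementation: the function names, the normalised (0, 1) extent, and the bbox tuple layout (x1_min, x1_max, x2_min, x2_max) are assumptions here.

```python
import random

def sample_random_window_bboxes(num_samples_per_date, patch_size,
                                extent=(0.0, 1.0, 0.0, 1.0), rng=None):
    """Pick random centroids in the (x1, x2) extent and build a
    patch_size bbox (x1_min, x1_max, x2_min, x2_max) around each."""
    rng = rng or random.Random(0)
    x1_lo, x1_hi, x2_lo, x2_hi = extent
    h, w = patch_size
    bboxes = []
    for _ in range(num_samples_per_date):
        # Keep the centroid far enough from the border that the patch fits.
        c1 = rng.uniform(x1_lo + h / 2, x1_hi - h / 2)
        c2 = rng.uniform(x2_lo + w / 2, x2_hi - w / 2)
        bboxes.append((c1 - h / 2, c1 + h / 2, c2 - w / 2, c2 + w / 2))
    return bboxes

def sample_sliding_window_bboxes(patch_size, stride, extent=(0.0, 1.0, 0.0, 1.0)):
    """Slide a patch left-to-right, top-to-bottom over the extent,
    clamping the final row/column so no patch exceeds the boundary."""
    x1_lo, x1_hi, x2_lo, x2_hi = extent
    h, w = patch_size
    bboxes = []
    y = x1_lo
    while True:
        top = min(y, x1_hi - h)  # clamp last row to the data boundary
        x = x2_lo
        while True:
            left = min(x, x2_hi - w)  # clamp last column likewise
            bboxes.append((top, top + h, left, left + w))
            if x + w >= x2_hi:
                break
            x += stride[1]
        if y + h >= x1_hi:
            break
        y += stride[0]
    return bboxes
```

For example, patch_size=(0.5, 0.5) with stride=(0.25, 0.25) over the unit extent yields a 3x3 grid of nine overlapping bboxes.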

TaskLoader.__call__ now contains additional conditional logic depending on the selected patching strategy. If no patching strategy is selected, task_generator() runs exactly as before. If random_window or sliding_window is selected, the bounding boxes for the patches are generated using the sample_random_window() or sample_sliding_window() method respectively. The bounding boxes are appended to the list bboxes and passed to task_generator().

Within task_generator(), after the sampling strategies are applied, the data is spatially sliced to each bbox in bboxes using the spatial_slice_variable() method.

When using a patching strategy, TaskLoader produces a list of tasks per date, rather than an individual task per date. A small change has been made to Task's summarise_str method to avoid an error when printing patched Tasks and to output more meaningful information.

Inference

To run patchwise predictions, a new method called predict_patch() has been added to model.py. It iterates over the patched tasks, applying the pre-existing predict() method to each one; predict() itself is unchanged. Within each iteration, the bounding box of the patch is unnormalised before predict() runs, so that the patch's X_t can be passed to predict(). The patchwise predictions are stored in the list preds for subsequent stitching.
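The iterate-unnormalise-predict loop can be sketched roughly as below. Here predict_fn and unnormalise_bbox are hypothetical stand-ins for the real model and data_processor calls, not DeepSensor's actual API.

```python
def predict_patch_sketch(predict_fn, unnormalise_bbox, patch_tasks, bboxes):
    """For each patched task: unnormalise the patch bbox to recover the
    raw-coordinate target slices X_t, run the ordinary per-task predict
    on that patch, and collect the results for later stitching."""
    preds = []
    for task, bbox in zip(patch_tasks, bboxes):
        x1 = (bbox[0], bbox[1])  # normalised x1 extent of this patch
        x2 = (bbox[2], bbox[3])  # normalised x2 extent of this patch
        X_t = unnormalise_bbox(x1, x2)  # back to raw coordinate units
        preds.append(predict_fn(task, X_t))
    return preds
```

The design choice here is that the per-patch call is exactly the existing predict(); all patch-specific work happens before and after it.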

Only the sliding_window patching strategy can be used during inference; the stride and patch size are defined when the user generates the test tasks in the task_loader() call. The data_processor must also be passed to the predict_patch() method to enable unnormalisation of the bbox coordinates in model.py.

Once the list of patchwise predictions is generated, stitch_clipped_predictions() forms a prediction over the original X_t extent. Currently, each patchwise prediction is subset (clipped) so that there is no overlap between adjacent patches, and the patches are then merged using xr.combine_by_coords(). The modular design leaves scope for additional stitching strategies to be added after this PR, for example applying a weighting function to overlapping predictions. To ensure the patches are clipped by the correct amount, get_patch_overlap() calculates the overlap between adjacent patches. stitch_clipped_predictions() also contains code to handle patches at the right or bottom edge of the dataset, where the overlap may be different.
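The clip-and-merge idea can be illustrated with a simplified NumPy sketch on plain arrays. The real code operates on xarray prediction objects and computes the overlap via get_patch_overlap(); the helper below is an assumed stand-in, not the actual stitch_clipped_predictions implementation.

```python
import numpy as np

def stitch_clipped(patches, positions, overlap, full_shape):
    """Clip half of the overlap from every interior patch edge, then
    place the clipped patches into the full grid so that neighbours do
    not overlap. Patches touching the dataset boundary keep their outer
    borders, preserving the original extent."""
    out = np.full(full_shape, np.nan)
    trim = overlap // 2
    ph, pw = patches[0].shape
    for patch, (r, c) in zip(patches, positions):
        r0 = 0 if r == 0 else trim                         # keep top border at top edge
        c0 = 0 if c == 0 else trim                         # keep left border at left edge
        r1 = ph if r + ph == full_shape[0] else ph - trim  # keep bottom border at bottom edge
        c1 = pw if c + pw == full_shape[1] else pw - trim  # keep right border at right edge
        out[r + r0 : r + r1, c + c0 : c + c1] = patch[r0:r1, c0:c1]
    return out
```

With a stride of patch size minus overlap, trimming overlap // 2 from each of the two neighbouring patches removes the doubled region exactly, so the clipped patches tile the grid seamlessly.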

The output of predict_patch() is the same kind of DeepSensor prediction object that model.predict() produces, so DeepSensor's plotting functionality can subsequently be used in the same way.

Documentation and Testing

New notebook(s) are added illustrating the usage of both patchwise training and prediction.

New tests are added to verify the new behaviour.

Limitations

  • Patchwise prediction does not currently support predicting at more than one timestamp; calling predict_patch with more than one date raises a NotImplementedError.
  • predict_patch is a new, distinct function because of all the pre-processing it needs to do; the patchwise behaviour may be better served as an option in predict. Let me know what you think.
  • Patched tasks don't exactly follow the proportions given by patch_size, e.g. for a 'square' patch patch_size=(0.5, 0.5) the resulting dimensions won't be exactly square. This is accounted for when stitching the patches, but it is slightly inelegant at the moment, so we may want to come back and find a more refined solution in the future.
  • In test_model.test_patchwise_prediction I've temporarily commented out the assertions checking for the correct prediction shape; these fail with the test datasets for now, but the shapes are correct with real datasets (see the patchwise_training_and_prediction.ipynb notebook).

davidwilby and others added 29 commits December 13, 2024 15:04
Replace combine_by_coords with np.where() to stitch patched predictions
Tidy up patchwise prediction arguments
Co-authored-by: David Wilby <24752124+davidwilby@users.noreply.github.com>
Simplify stitching by retaining prediction objects
Edit some markup text in new methods in pred module.
Move stitching code to prediction module
@davidwilby davidwilby closed this Feb 12, 2025