
Commit 6f96b75

formatted with black

1 parent f655f21

12 files changed: +1020 −632 lines

README.md

+33 −22
@@ -1,49 +1,60 @@
 # System setup
-xView2 inference requires a tremendous amount of computing power. Currently, CPU inference is wildly
+
+xView2 inference requires a tremendous amount of computing power. Currently, CPU inference is wildly
 impractical. To that end, unless you have a dedicated workstation with ample GPU power such as an Nvidia DGX station,
 we recommend a cloud based solution such as AWS or Google Cloud Compute utilizing a GPU optimized instance. Prices vary
 on instance type and area to be inferred. Example instances:
+
 1. AWS EC2
-    1. P4d.24xlarge
-    2. P3.16xlarge
+   1. P4d.24xlarge
+   2. P3.16xlarge
 2. G Cloud
-    1. Todo!
+   1. Todo!
 
 # Installation
+
 ## Install from source
+
 **Note**: Only tested on Linux systems.
+
 1. Clone repository: `git clone https://github.com/fdny-imt/xView2_FDNY.git`.
 2. Create Conda environment: `conda create --name xv2 --file spec-file.txt`.
 3. Activate conda environment: `conda activate xv2`.
+
 ## Docker
+
 Todo.
 
 # Usage
-|Argument|Required|Default|Help
-|---|---|---|---|
-|--pre_directory|Yes|None|Directory containing pre-disaster imagery. This is searched recursively.|
-|--post_directory|Yes|None|Directory containing post-disaster imagery. This is searched recursively.|
-|--output_directory|Yes|None|Directory to store output files. This will be created if it does not exist. Existing files may be overwritten.|
-|--n_procs|Yes|8|Number of processors for multiprocessing|
-|--batch_size|Yes|2|Number of chips to run inference on at once|
-|--num_workers|Yes|4|Number of workers loading data into RAM. Recommend 4 * num_gpu|
-|--pre_crs|No|None|The Coordinate Reference System (CRS) for the pre-disaster imagery. This will only be utilized if images lack CRS data.|
-|--post_crs|No|None|The Coordinate Reference System (CRS) for the post-disaster imagery. This will only be utilized if images lack CRS data.|
-|--destination_crs|No|EPSG:4326|The Coordinate Reference System (CRS) for the output overlays.|
-|--output_resolution|No|None|Override minimum resolution calculator. This should be a lower resolution (higher number) than source imagery for decreased inference time. Must be in units of destinationCRS.|
-|--dp_mode|No|False|Run models serially, but using DataParallel|
-|--save_intermediates|No|False|Store intermediate runfiles|
-|--aoi_file|No|None|Shapefile or GeoJSON file of AOI polygons
+
+| Argument             | Required | Default   | Help |
+| -------------------- | -------- | --------- | ---- |
+| --pre_directory      | Yes      | None      | Directory containing pre-disaster imagery. This is searched recursively. |
+| --post_directory     | Yes      | None      | Directory containing post-disaster imagery. This is searched recursively. |
+| --output_directory   | Yes      | None      | Directory to store output files. This will be created if it does not exist. Existing files may be overwritten. |
+| --n_procs            | Yes      | 8         | Number of processors for multiprocessing |
+| --batch_size         | Yes      | 2         | Number of chips to run inference on at once |
+| --num_workers        | Yes      | 4         | Number of workers loading data into RAM. Recommend 4 \* num_gpu |
+| --pre_crs            | No       | None      | The Coordinate Reference System (CRS) for the pre-disaster imagery. This will only be utilized if images lack CRS data. |
+| --post_crs           | No       | None      | The Coordinate Reference System (CRS) for the post-disaster imagery. This will only be utilized if images lack CRS data. |
+| --destination_crs    | No       | EPSG:4326 | The Coordinate Reference System (CRS) for the output overlays. |
+| --output_resolution  | No       | None      | Override minimum resolution calculator. This should be a lower resolution (higher number) than source imagery for decreased inference time. Must be in units of destination CRS. |
+| --dp_mode            | No       | False     | Run models serially, but using DataParallel |
+| --save_intermediates | No       | False     | Store intermediate runfiles |
+| --aoi_file           | No       | None      | Shapefile or GeoJSON file of AOI polygons |
 
 # Example invocation for damage assessment
+
 On 2 GPUs:
 `CUDA_VISIBLE_DEVICES=0,1 python handler.py --pre_directory <pre dir> --post_directory <post dir> --output_directory <output dir> --aoi_file <aoi file (GeoJSON or shapefile)> --n_procs <n_proc> --batch_size 2 --num_workers 6`
 
 # Notes:
-- CRS between input types (pre/post/building footprints/AOI) need not match. However CRS *within* input types must match.
 
+- CRS between input types (pre/post/building footprints/AOI) need not match. However, CRS _within_ input types must match.
 
 # Sources
+
 **xView2 1st place solution**
-- Model weights from 1st place solution for "xView2: Assess Building Damage" challenge. https://www.xview2.org
-- More information from original submission see commit: 3fe4a7327f1a19b8c516e0b0930c38c29ac3662b
+
+- Model weights from the 1st place solution for the "xView2: Assess Building Damage" challenge. https://github.com/DIUx-xView/xView2_first_place
+- For more information on the original submission, see commit: 3fe4a7327f1a19b8c516e0b0930c38c29ac3662b
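As a supplement to the usage table and the 2-GPU example in the README diff above, here is a hedged sketch of an invocation that also exercises the optional CRS arguments; the path placeholders and the EPSG codes are illustrative assumptions, not values taken from this commit:

`CUDA_VISIBLE_DEVICES=0 python handler.py --pre_directory <pre dir> --post_directory <post dir> --output_directory <output dir> --pre_crs EPSG:32615 --post_crs EPSG:32615 --destination_crs EPSG:4326 --n_procs 8 --batch_size 2 --num_workers 4`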

dataset.py

+24 −22
@@ -4,12 +4,14 @@
 import numpy as np
 from PIL import Image
 
+
 def preprocess_inputs(x):
-    x = np.asarray(x, dtype='float32')
+    x = np.asarray(x, dtype="float32")
     x /= 127
     x -= 1
     return x
 
+
 class XViewDataset(Dataset):
     "Dataset for xView"
 
@@ -20,45 +22,45 @@ def __init__(self, pairs, mode, return_geo=False):
         :param transform: PyTorch transforms to be used on each example
         """
         self.pairs = pairs
-        self.return_geo=return_geo
+        self.return_geo = return_geo
         self.mode = mode
 
     def __len__(self):
-        return(len(self.pairs))
+        return len(self.pairs)
 
     def __getitem__(self, idx, return_img=False):
         fl = self.pairs[idx]
-        pre_image = np.array(Image.open(str(fl.opts.in_pre_path)).convert('RGB'))
-        post_image = np.array(Image.open(str(fl.opts.in_post_path)).convert('RGB'))
-        if self.mode == 'cls':
+        pre_image = np.array(Image.open(str(fl.opts.in_pre_path)).convert("RGB"))
+        post_image = np.array(Image.open(str(fl.opts.in_post_path)).convert("RGB"))
+        if self.mode == "cls":
             img = np.concatenate([pre_image, post_image], axis=2)
-        elif self.mode == 'loc':
+        elif self.mode == "loc":
             img = pre_image
         else:
-            raise ValueError('Incorrect mode! Must be cls or loc')
-
+            raise ValueError("Incorrect mode! Must be cls or loc")
+
         img = preprocess_inputs(img)
 
         inp = []
         inp.append(img)
         inp.append(img[::-1, ...])
         inp.append(img[:, ::-1, ...])
         inp.append(img[::-1, ::-1, ...])
-        inp = np.asarray(inp, dtype='float')
+        inp = np.asarray(inp, dtype="float")
         inp = torch.from_numpy(inp.transpose((0, 3, 1, 2))).float()
-
+
         out_dict = {}
-        out_dict['in_pre_path'] = str(fl.opts.in_pre_path)
-        out_dict['in_post_path'] = str(fl.opts.in_post_path)
-        out_dict['poly_chip'] = str(fl.opts.poly_chip)
+        out_dict["in_pre_path"] = str(fl.opts.in_pre_path)
+        out_dict["in_post_path"] = str(fl.opts.in_post_path)
+        out_dict["poly_chip"] = str(fl.opts.poly_chip)
         if return_img:
-            out_dict['pre_image'] = pre_image
-            out_dict['post_image'] = post_image
-        out_dict['img'] = inp
-        out_dict['idx'] = idx
-        out_dict['out_cls_path'] = str(fl.opts.out_cls_path)
-        out_dict['out_loc_path'] = str(fl.opts.out_loc_path)
-        out_dict['out_overlay_path'] = str(fl.opts.out_overlay_path)
-        out_dict['is_vis'] = fl.opts.is_vis
+            out_dict["pre_image"] = pre_image
+            out_dict["post_image"] = post_image
+        out_dict["img"] = inp
+        out_dict["idx"] = idx
+        out_dict["out_cls_path"] = str(fl.opts.out_cls_path)
+        out_dict["out_loc_path"] = str(fl.opts.out_loc_path)
+        out_dict["out_overlay_path"] = str(fl.opts.out_overlay_path)
+        out_dict["is_vis"] = fl.opts.is_vis
 
         return out_dict
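
The dataset.py changes above are purely mechanical (Black re-quoting and re-spacing), so the class behaves exactly as before. As a usage illustration only, not part of this commit, the following minimal sketch shows how the reformatted `XViewDataset` could be driven by a standard PyTorch `DataLoader`. The `pairs` argument and its `.opts` attributes are assumptions matching what `__getitem__` references above, and the batch_size/num_workers values simply echo the README defaults.

```python
# Minimal usage sketch (not part of this commit). `pairs` is assumed to be a
# list of objects whose .opts attributes carry the paths referenced in
# XViewDataset.__getitem__ (in_pre_path, in_post_path, poly_chip, ...).
import torch
from torch.utils.data import DataLoader

from dataset import XViewDataset


def classify_chips(pairs, model, device="cuda"):
    # "cls" mode concatenates pre/post chips into a 6-channel input;
    # "loc" mode would use the 3-channel pre-disaster image alone.
    dataset = XViewDataset(pairs, mode="cls")
    # batch_size / num_workers mirror the CLI defaults from the README table.
    loader = DataLoader(dataset, batch_size=2, num_workers=4, shuffle=False)

    model = model.to(device).eval()
    results = []
    with torch.no_grad():
        for batch in loader:
            # Each dataset item stacks 4 flip variants: (batch, 4, C, H, W).
            imgs = batch["img"].to(device)
            b, flips, c, h, w = imgs.shape
            preds = model(imgs.reshape(b * flips, c, h, w))
            # The dataset also hands back the output path for each chip.
            results.append((preds.cpu(), list(batch["out_cls_path"])))
    return results
```

The four flipped copies stacked per chip look like built-in test-time augmentation; downstream code would typically un-flip and average the corresponding predictions, though that step is outside this sketch.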
