Commit 8d0da86
Mention DALI proxy in the readme
Signed-off-by: Joaquin Anton Guirao <janton@nvidia.com>
1 parent: f370cc7

2 files changed: +43 −2 lines
docs/examples/use_cases/pytorch/efficientnet/readme.rst

+31 −2
@@ -89,11 +89,26 @@ You may need to adjust ``--batch-size`` parameter for your machine.
 
 You can change the data loader and automatic augmentation scheme that are used by adding:
 
-* ``--data-backend``: ``dali`` | ``pytorch`` | ``synthetic``,
+* ``--data-backend``: ``dali`` | ``dali_proxy`` | ``pytorch`` | ``synthetic``,
 * ``--automatic-augmentation``: ``disabled`` | ``autoaugment`` | ``trivialaugment`` (the last one only for DALI),
 * ``--dali-device``: ``cpu`` | ``gpu`` (only for DALI).
 
-By default DALI GPU-variant with AutoAugment is used.
+By default, the DALI GPU variant with AutoAugment is used (``dali`` and ``dali_proxy`` backends).
+
+Data Backends
+-------------
+
+- **dali**:
+  Leverages a DALI pipeline along with DALI's PyTorch iterator for data loading, preprocessing, and augmentation.
+
+- **dali_proxy**:
+  Uses a DALI pipeline for preprocessing and augmentation while relying on PyTorch's data loader. DALI Proxy handles transferring the data to DALI for processing.
+
+- **pytorch**:
+  Uses the native PyTorch data loader for data preprocessing and augmentation.
+
+- **synthetic**:
+  Generates synthetic data on the fly, which is useful for testing and benchmarking. This backend eliminates the need for an actual dataset, providing a convenient way to simulate data loading.
 
 For example, to run EfficientNet with AMP on a batch size of 128 with DALI using TrivialAugment, invoke:
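Conceptually, the ``dali_proxy`` backend added above decouples sample loading (PyTorch data-loader workers) from batch preprocessing (a DALI pipeline running elsewhere). The hand-off pattern can be sketched with the standard library alone; all names here are hypothetical and this is not the DALI Proxy API, just an illustration of the design:

```python
# Conceptual sketch of a "proxy" hand-off (NOT the DALI Proxy API):
# loader workers submit raw samples, a background "pipeline" thread
# batches and preprocesses them, mimicking how DALI Proxy lets a
# PyTorch-style data loader feed a DALI pipeline.
import queue
import threading


class ProxyPipelineServer:
    """Background worker that turns raw samples into preprocessed batches."""

    def __init__(self, preprocess, batch_size):
        self.preprocess = preprocess   # stand-in for a DALI pipeline
        self.batch_size = batch_size
        self.inbox = queue.Queue()     # raw samples from loader workers
        self.outbox = queue.Queue()    # preprocessed batches
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        batch = []
        while True:
            sample = self.inbox.get()
            if sample is None:         # sentinel: stop serving
                break
            batch.append(self.preprocess(sample))
            if len(batch) == self.batch_size:
                self.outbox.put(batch)
                batch = []

    def put(self, sample):
        self.inbox.put(sample)

    def stop(self):
        self.inbox.put(None)
        self.thread.join()


# Usage: a "loader" submits raw samples; batches come out preprocessed.
server = ProxyPipelineServer(preprocess=lambda x: x * 2, batch_size=4)
for sample in range(8):
    server.put(sample)
first_batch = server.outbox.get()
second_batch = server.outbox.get()
server.stop()
print(first_batch, second_batch)  # → [0, 2, 4, 6] [8, 10, 12, 14]
```

The real backend gains over plain ``pytorch`` by running the heavy preprocessing in DALI (optionally on the GPU) while keeping the familiar loader front end.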

@@ -161,6 +176,20 @@ To run training benchmarks with different data loaders and automatic augmentations
     --workspace $RESULT_WORKSPACE
     --report-file bench_report_dali_ta.json $PATH_TO_IMAGENET
 
+# DALI proxy with AutoAugment
+python multiproc.py --nproc_per_node 8 ./main.py --amp --static-loss-scale 128
+    --batch-size 128 --epochs 4 --no-checkpoints --training-only
+    --data-backend dali_proxy --automatic-augmentation autoaugment
+    --workspace $RESULT_WORKSPACE
+    --report-file bench_report_dali_proxy_aa.json $PATH_TO_IMAGENET
+
+# DALI proxy with TrivialAugment
+python multiproc.py --nproc_per_node 8 ./main.py --amp --static-loss-scale 128
+    --batch-size 128 --epochs 4 --no-checkpoints --training-only
+    --data-backend dali_proxy --automatic-augmentation trivialaugment
+    --workspace $RESULT_WORKSPACE
+    --report-file bench_report_dali_proxy_ta.json $PATH_TO_IMAGENET
 
 # PyTorch without automatic augmentations
 python multiproc.py --nproc_per_node 8 ./main.py --amp --static-loss-scale 128
     --batch-size 128 --epochs 4 --no-checkpoints --training-only
docs/examples/use_cases/pytorch/resnet50/pytorch-resnet50.rst

+12
@@ -44,6 +44,18 @@ The default learning rate schedule starts at 0.1 and decays by a factor of 10 every 30 epochs.
 
     python main.py -a alexnet --lr 0.01 [imagenet-folder with train and val folders]
 
+Data loaders
+------------
+
+- **dali**:
+  Leverages a DALI pipeline along with DALI's PyTorch iterator for data loading, preprocessing, and augmentation.
+
+- **dali_proxy**:
+  Uses a DALI pipeline for preprocessing and augmentation while relying on PyTorch's data loader. DALI Proxy handles transferring the data to DALI for processing.
+
+- **pytorch**:
+  Uses the native PyTorch data loader for data preprocessing and augmentation.
+
 Usage
 -----
 
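Both readmes select a loader through the same backend flag. As a hedged illustration of that dispatch (the ``build_*_loader`` names are hypothetical, not the scripts' actual code), a ``--data-backend`` style option can map directly onto loader constructors:

```python
# Hypothetical sketch of dispatching on a --data-backend flag; the real
# scripts would construct DALI, DALI-proxy, or PyTorch loaders here.
import argparse


def build_dali_loader():
    return "dali loader"


def build_dali_proxy_loader():
    return "dali_proxy loader"


def build_pytorch_loader():
    return "pytorch loader"


BACKENDS = {
    "dali": build_dali_loader,
    "dali_proxy": build_dali_proxy_loader,
    "pytorch": build_pytorch_loader,
}


def make_loader(argv):
    parser = argparse.ArgumentParser()
    # argparse turns --data-backend into the data_backend attribute
    parser.add_argument("--data-backend", choices=sorted(BACKENDS),
                        default="dali")
    args = parser.parse_args(argv)
    return BACKENDS[args.data_backend]()


print(make_loader(["--data-backend", "dali_proxy"]))  # → dali_proxy loader
```

Keeping the backends behind one flag is what lets the benchmark commands above swap ``dali``, ``dali_proxy``, and ``pytorch`` without touching the training loop.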