docs/examples/use_cases/pytorch/resnet50/pytorch-resnet50.rst (+39 −26)
@@ -54,29 +54,42 @@ Usage
 PyTorch ImageNet Training

 positional arguments:
-  DIR                   path(s) to dataset (if one path is provided, it is assumed to have subdirectories named "train" and "val"; alternatively, train and val paths can be specified directly by providing both paths as arguments)
-
-optional arguments (for the full list please check `Apex ImageNet example
-  -j N, --workers N     number of data loading workers (default: 4)
-  --epochs N            number of total epochs to run
-  --start-epoch N       manual epoch number (useful on restarts)
-  -b N, --batch-size N  mini-batch size (default: 256)
-  --lr LR, --learning-rate LR
-                        initial learning rate
-  --momentum M          momentum
-  --weight-decay W, --wd W
-                        weight decay (default: 1e-4)
-  --print-freq N, -p N  print frequency (default: 10)
-  --resume PATH         path to latest checkpoint (default: none)
-  -e, --evaluate        evaluate model on validation set
-  --pretrained          use pre-trained model
-  --dali_cpu            use CPU based pipeline for DALI; for heavy GPU networks it may work better, while for an IO-bottlenecked one like RN18 the GPU default should be faster
-  --data_loader         Select data loader: "pytorch" for native PyTorch data loader, "dali" for DALI data loader, or "dali_proxy" for PyTorch dataloader with DALI proxy preprocessing.
-  --fp16-mode           enables mixed precision mode
+  DIR                   path(s) to dataset (if one path is provided, it is assumed to have subdirectories named "train" and "val"; alternatively, train and val paths can be specified directly by providing both paths as arguments)
+  -j N, --workers N     number of data loading workers (default: 4)
+  --epochs N            number of total epochs to run
+  --start-epoch N       manual epoch number (useful on restarts)
+  -b N, --batch-size N  mini-batch size per process (default: 256)
+  --lr LR, --learning-rate LR
+                        Initial learning rate. Will be scaled by <global batch size>/256: args.lr = args.lr*float(args.batch_size*args.world_size)/256. A warmup schedule will also be applied over the first 5 epochs.
+  --momentum M          momentum
+  --weight-decay W, --wd W
+                        weight decay (default: 1e-4)
+  --print-freq N, -p N  print frequency (default: 10)
+  --resume PATH         path to latest checkpoint (default: none)
+  -e, --evaluate        evaluate model on validation set
+  --pretrained          use pre-trained model
+  --dali_cpu            Runs CPU based version of DALI pipeline.
+  --data_loader {pytorch,dali,dali_proxy}
+                        Select data loader: "pytorch" for native PyTorch data loader, "dali" for DALI data loader, or "dali_proxy" for PyTorch dataloader with DALI proxy preprocessing.
+  --prof PROF           Only run 10 iterations for profiling.
+  --deterministic       If enabled, random seeds are fixed to ensure deterministic results for reproducibility.
+  --fp16-mode           Enable half precision mode.
+  --loss-scale LOSS_SCALE
+                        Loss scaling factor for mixed precision training. Default is 1.
+  --channels-last CHANNELS_LAST
+                        Use channels-last memory format for model and data. Default is False.
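The new `--lr` help text describes a linear-scaling rule plus a 5-epoch warmup. A minimal sketch of that rule in plain Python (function names here are illustrative, not the script's actual API; only the scaling formula is taken from the help text above):

```python
# Sketch of the learning-rate schedule described in the --lr help text:
# the base LR is scaled by <global batch size>/256, and a warmup is
# applied over the first 5 epochs (assumed here to be linear).

def scaled_lr(base_lr, batch_size, world_size):
    """Scale the base LR by the global batch size / 256,
    mirroring: args.lr = args.lr*float(args.batch_size*args.world_size)/256."""
    return base_lr * float(batch_size * world_size) / 256

def warmup_lr(lr, epoch, step, steps_per_epoch, warmup_epochs=5):
    """Ramp the LR linearly from 0 up to `lr` over the first warmup epochs."""
    if epoch >= warmup_epochs:
        return lr
    progress = (epoch * steps_per_epoch + step + 1) / (warmup_epochs * steps_per_epoch)
    return lr * progress

# 256 images per process on 4 GPUs -> global batch 1024, so the LR is scaled 4x.
print(scaled_lr(0.1, batch_size=256, world_size=4))  # 0.4
```

With these defaults, passing `--lr 0.1 -b 256` on 4 processes trains at an effective LR of 0.4 once warmup completes.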
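The `--loss-scale` option refers to static loss scaling for mixed precision training. As a hedged illustration of the general technique (the helper name and structure below are hypothetical, not the example script's implementation): FP16 gradients can underflow to zero, so the loss is multiplied by a constant before backpropagation and the resulting gradients are divided by the same constant before the optimizer step.

```python
# Illustrative sketch of static loss scaling (assumed semantics of
# --loss-scale; the training script's actual code may differ).

def apply_loss_scale(loss, grads_from_scaled_loss, loss_scale=1.0):
    """Return the scaled loss (what backward() would see) and the
    gradients unscaled back down for the optimizer step.

    `grads_from_scaled_loss` are gradients computed from the scaled loss,
    i.e. already loss_scale times larger than the true gradients.
    """
    scaled_loss = loss * loss_scale
    true_grads = [g / loss_scale for g in grads_from_scaled_loss]
    return scaled_loss, true_grads

# With --loss-scale 128, a loss of 0.5 is backpropagated as 64.0,
# and a gradient of 128.0 is restored to its true value of 1.0.
scaled, grads = apply_loss_scale(0.5, [128.0], loss_scale=128.0)
print(scaled, grads)  # 64.0 [1.0]
```

The default of 1 (no scaling) matches the help text above; larger powers of two are the usual choice when `--fp16-mode` is enabled.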