remove gpu's description in md #401

Merged 1 commit on Dec 23, 2024
13 changes: 5 additions & 8 deletions GETTING_STARTED.md
@@ -13,9 +13,6 @@ This document provides a brief introduction to the usage of built-in command-line
```
# Run with Ascend (By default)
python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg

- # Run with GPU
- python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg --device_target=GPU
```
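Since this change drops the GPU example, CPU inference should still be reachable through the same flag; a minimal sketch, assuming `predict.py` accepts `--device_target=CPU` as indicated by the notes below:
```
# Run with CPU (assumed flag value; confirm with demo/predict.py -h)
python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg --device_target=CPU
```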


@@ -48,23 +45,23 @@ to understand their behavior. Some common arguments are:
```
</details>

- * To train a model on 1 NPU/GPU/CPU:
+ * To train a model on 1 NPU/CPU:
```
python train.py --config ./configs/yolov7/yolov7.yaml
```
- * To train a model on 8 NPUs/GPUs:
+ * To train a model on 8 NPUs:
```
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --is_parallel True
```
- * To evaluate a model's performance on 1 NPU/GPU/CPU:
+ * To evaluate a model's performance on 1 NPU/CPU:
```
python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt
```
- * To evaluate a model's performance on 8 NPUs/GPUs:
+ * To evaluate a model's performance on 8 NPUs:
```
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt --is_parallel True
```
- *Notes: (1) The default hyper-parameters are used for 8-card training, and some parameters need to be adjusted for single-card training. (2) The default device is Ascend, and you can modify it by specifying 'device_target' as Ascend/GPU/CPU, as these are currently supported.*
+ *Notes: (1) The default hyper-parameters are used for 8-card training, and some parameters need to be adjusted for single-card training. (2) The default device is Ascend, and you can modify it by specifying 'device_target' as Ascend/CPU, as these are currently supported.*
* For more options, see `train/test.py -h`.

* Notice that if you are using `msrun` startup with 2 devices, please add `--bind_core=True` to improve performance. For example:
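A sketch of the 2-device launch, assuming the 8-device command above simply scales down (worker counts and log directory are illustrative):
```
# 2-device distributed training (assumed variant of the 8-device command)
msrun --worker_num=2 --local_worker_num=2 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --is_parallel True
```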
13 changes: 5 additions & 8 deletions GETTING_STARTED_CN.md
@@ -11,9 +11,6 @@
```shell
# NPU (default)
python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg

- # GPU
- python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_ckpt/WEIGHT.ckpt --image_path /path_to_image/IMAGE.jpg --device_target=GPU
```

For details on the command-line arguments, see `demo/predict.py -h`, or check its [source code](https://github.com/mindspore-lab/mindyolo/blob/master/deploy/predict.py).
@@ -45,24 +42,24 @@ python demo/predict.py --config ./configs/yolov7/yolov7.yaml --weight=/path_to_c
```
</details>

- * To train a model on a single NPU/GPU/CPU card:
+ * To train a model on a single NPU/CPU card:
```shell
python train.py --config ./configs/yolov7/yolov7.yaml
```
- * To run distributed training on multiple NPU/GPU cards, e.g. 8 cards:
+ * To run distributed training on multiple NPU cards, e.g. 8 cards:
```shell
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --is_parallel True
```
- * To evaluate a model's accuracy on a single NPU/GPU/CPU card:
+ * To evaluate a model's accuracy on a single NPU/CPU card:
```shell
python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt
```
- * To run distributed evaluation of a model's accuracy on multiple NPU/GPU cards:
+ * To run distributed evaluation of a model's accuracy on multiple NPU cards:
```shell
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python test.py --config ./configs/yolov7/yolov7.yaml --weight /path_to_ckpt/WEIGHT.ckpt --is_parallel True
```

- *Note: The default hyper-parameters are for 8-card training; some parameters need to be adjusted for single-card training. The default device is Ascend, and you can set 'device_target' to Ascend/GPU/CPU.*
+ *Note: The default hyper-parameters are for 8-card training; some parameters need to be adjusted for single-card training. The default device is Ascend, and you can set 'device_target' to Ascend/CPU.*
* For more options, see `train/test.py -h`.
* For training on the cloud platform, see [here](./tutorials/cloud/modelarts_CN.md)

5 changes: 2 additions & 3 deletions configs/yolov10/README.md
@@ -48,11 +48,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov10_log python train.py --config ./configs/yolov10/yolov10n.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -64,7 +63,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov10/yolov10n.yaml --device_target Ascend
```
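
To finetune from an existing checkpoint instead of training from scratch, a hedged sketch (assuming `train.py` accepts a `--weight` argument for initial weights; check `train.py -h` to confirm):
```shell
# Hypothetical finetuning command; loading a pretrained checkpoint via --weight is an assumption
python train.py --config ./configs/yolov10/yolov10n.yaml --device_target Ascend --weight /path_to_ckpt/PRETRAINED.ckpt
```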

5 changes: 2 additions & 3 deletions configs/yolov3/README.md
@@ -37,11 +37,10 @@ python mindyolo/utils/convert_weight_darknet53.py

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov3_log python train.py --config ./configs/yolov3/yolov3.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -53,7 +52,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov3/yolov3.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov4/README.md
@@ -51,11 +51,10 @@ python mindyolo/utils/convert_weight_cspdarknet53.py

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov4_log python train.py --config ./configs/yolov4/yolov4-silu.yaml --device_target Ascend --is_parallel True --epochs 320
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -72,7 +71,7 @@ multiprocessing/semaphore_tracker.py: 144 UserWarning: semaphore_tracker: There
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov4/yolov4-silu.yaml --device_target Ascend --epochs 320
```

5 changes: 2 additions & 3 deletions configs/yolov5/README.md
@@ -25,11 +25,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov5_log python train.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -41,7 +40,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov5/yolov5n.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov7/README.md
@@ -28,11 +28,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov7_log python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -44,7 +43,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov7/yolov7.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov8/README.md
@@ -26,11 +26,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov8_log python train.py --config ./configs/yolov8/yolov8n.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -42,7 +41,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov8/yolov8n.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolov9/README.md
@@ -56,11 +56,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolov9_log python train.py --config ./configs/yolov9/yolov9-t.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -72,7 +71,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
- # standalone training on a CPU/GPU/Ascend device
+ # standalone training on a CPU/Ascend device
python train.py --config ./configs/yolov9/yolov9-t.yaml --device_target Ascend
```

5 changes: 2 additions & 3 deletions configs/yolox/README.md
@@ -25,11 +25,10 @@ Please refer to the [GETTING_STARTED](https://github.com/mindspore-lab/mindyolo/

It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run
```shell
- # distributed training on multiple GPU/Ascend devices
+ # distributed training on multiple Ascend devices
msrun --worker_num=8 --local_worker_num=8 --bind_core=True --log_dir=./yolox_log python train.py --config ./configs/yolox/yolox-s.yaml --device_target Ascend --is_parallel True
```

- Similarly, you can train the model on multiple GPU devices with the above msrun command.
**Note:** For more information about msrun configuration, please refer to [here](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.3.1/parallel/msrun_launcher.html).

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindyolo/blob/master/mindyolo/utils/config.py).
@@ -41,7 +40,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please firstly run:

```shell
- # standalone 1st stage training on a CPU/GPU/Ascend device
+ # standalone 1st stage training on a CPU/Ascend device
python train.py --config ./configs/yolox/yolox-s.yaml --device_target Ascend
```

3 changes: 3 additions & 0 deletions demo/__init__.py
@@ -0,0 +1,3 @@
+ from .predict import detect
+
+ __all__ = ['detect']
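
This new `demo/__init__.py` simply re-exports `detect` from `demo/predict.py`. A hypothetical usage sketch follows; the real signature of `detect` is not shown in this PR, so the argument names below are assumptions mirrored from the CLI flags of `demo/predict.py`:
```python
# Hypothetical call into the re-exported helper; argument names are assumptions
# based on predict.py's CLI flags (--config, --weight, --image_path).
from demo import detect

results = detect(
    config="./configs/yolov7/yolov7.yaml",   # model config (assumed parameter name)
    weight="/path_to_ckpt/WEIGHT.ckpt",      # checkpoint path (assumed parameter name)
    image_path="/path_to_image/IMAGE.jpg",   # input image (assumed parameter name)
)
print(results)  # output format depends on detect's actual implementation
```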