diff --git a/docs/en/advanced_tutorials/data_element.md b/docs/en/advanced_tutorials/data_element.md index 38965e8177..8a204c27f2 100644 --- a/docs/en/advanced_tutorials/data_element.md +++ b/docs/en/advanced_tutorials/data_element.md @@ -1008,7 +1008,7 @@ In this section, we use MMDetection to demonstrate how to migrate the abstract d ### 1. Simplify the module interface -Detector's external interfaces can be significantly simplified and unified. In the training process of a single-stage detection and segmentation algorithm in MMDet 2.X, `SingleStageDetector` requires `img`, `img_metas`, `gt_bboxes`, `gt_labels` and `gt_bboxes_ignore` as the inputs, but `SingleStageInstanceSegmentor` requires `gt_masks` as well. This causes inconsistency in the training interface and affects flexibility. +Detector's external interfaces can be significantly simplified and unified. In the training process of a single-stage detection and segmentation algorithm in MMDet 2.X, `SingleStageDetector` requires `img`, `img_metas`, `gt_bboxes`, `gt_labels` and `gt_bboxes_ignore` as the inputs, but `SingleStageInstanceSegmentor` requires `gt_masks` as well. This causes inconsistency in the training interface and affects flexibility. ```python class SingleStageDetector(BaseDetector): diff --git a/docs/en/common_usage/better_optimizers.md b/docs/en/common_usage/better_optimizers.md index c66b5e949c..23f4f075bf 100644 --- a/docs/en/common_usage/better_optimizers.md +++ b/docs/en/common_usage/better_optimizers.md @@ -4,7 +4,7 @@ This document provides some third-party optimizers supported by MMEngine, which ## D-Adaptation -[D-Adaptation](https://github.com/facebookresearch/dadaptation) provides `DAdaptAdaGrad`, `DAdaptAdam` and `DAdaptSGD` optimziers。 +[D-Adaptation](https://github.com/facebookresearch/dadaptation) provides `DAdaptAdaGrad`, `DAdaptAdam` and `DAdaptSGD` optimizers. ```{note} If you use the optimizer provided by D-Adaptation, you need to upgrade mmengine to `0.6.0`. 
@@ -35,7 +35,7 @@ runner.train() ## Lion-Pytorch -[lion-pytorch](https://github.com/lucidrains/lion-pytorch) provides the `Lion` optimizer。 +[lion-pytorch](https://github.com/lucidrains/lion-pytorch) provides the `Lion` optimizer. ```{note} If you use the optimizer provided by Lion-Pytorch, you need to upgrade mmengine to `0.6.0`. @@ -93,7 +93,7 @@ runner.train() ## bitsandbytes -[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) provides `AdamW8bit`, `Adam8bit`, `Adagrad8bit`, `PagedAdam8bit`, `PagedAdamW8bit`, `LAMB8bit`, `LARS8bit`, `RMSprop8bit`, `Lion8bit`, `PagedLion8bit` and `SGD8bit` optimziers。 +[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) provides `AdamW8bit`, `Adam8bit`, `Adagrad8bit`, `PagedAdam8bit`, `PagedAdamW8bit`, `LAMB8bit`, `LARS8bit`, `RMSprop8bit`, `Lion8bit`, `PagedLion8bit` and `SGD8bit` optimizers. ```{note} If you use the optimizer provided by bitsandbytes, you need to upgrade mmengine to `0.9.0`. @@ -124,7 +124,7 @@ runner.train() ## transformers -[transformers](https://github.com/huggingface/transformers) provides `Adafactor` optimzier。 +[transformers](https://github.com/huggingface/transformers) provides the `Adafactor` optimizer. ```{note} If you use the optimizer provided by transformers, you need to upgrade mmengine to `0.9.0`. 
diff --git a/docs/en/common_usage/debug_tricks.md b/docs/en/common_usage/debug_tricks.md index 641077f260..6df05055ea 100644 --- a/docs/en/common_usage/debug_tricks.md +++ b/docs/en/common_usage/debug_tricks.md @@ -30,7 +30,7 @@ train_dataloader = dict( type=dataset_type, data_prefix='data/cifar10', test_mode=False, - indices=5000, # set indices=5000,represent every epoch only iterator 5000 samples + indices=5000, # set indices=5000 so that each epoch only iterates over 5000 samples pipeline=train_pipeline), sampler=dict(type='DefaultSampler', shuffle=True), ) diff --git a/docs/en/design/infer.md b/docs/en/design/infer.md index f0f426e03d..c340a10f9f 100644 --- a/docs/en/design/infer.md +++ b/docs/en/design/infer.md @@ -87,10 +87,10 @@ OpenMMLab requires the `inferencer(img)` to output a `dict` containing two field When performing inference, the following steps are typically executed: -1. preprocess:Input data preprocessing, including data reading, data preprocessing, data format conversion, etc. +1. preprocess: Input data preprocessing, including data reading, data preprocessing, data format conversion, etc. -2. forward: Execute `model.forwward` +2. forward: Execute `model.forward` -3. visualize:Visualization of predicted results. -4. postprocess:Post-processing of predicted results, including result format conversion, exporting predicted results, etc. +3. visualize: Visualization of predicted results. +4. postprocess: Post-processing of predicted results, including result format conversion, exporting predicted results, etc. To improve the user experience of the inferencer, we do not want users to have to configure parameters for each step when performing inference. In other words, we hope that users can simply configure parameters for the `__call__` interface without being aware of the above process and complete the inference. 
@@ -173,8 +173,8 @@ Initializes and returns the `visualizer` required by the inferencer, which is eq Input arguments: -- inputs:Input data, passed into `__call__`, usually a list of image paths or image data. -- batch_size:batch size, passed in by the user when calling `__call__`. +- inputs: Input data, passed into `__call__`, usually a list of image paths or image data. +- batch_size: batch size, passed in by the user when calling `__call__`. - Other parameters: Passed in by the user and specified in `preprocess_kwargs`. Return: @@ -187,7 +187,7 @@ The `preprocess` function is a generator function by default, which applies the Input arguments: -- inputs:The batch data processed by `preprocess` function. +- inputs: The batch data processed by `preprocess` function. - Other parameters: Passed in by the user and specified in `forward_kwargs`. Return: @@ -204,9 +204,9 @@ This is an abstract method that must be implemented by the subclass. Input arguments: -- inputs:The input data, which is the raw data without preprocessing. -- preds:Predicted results of the model. -- show:Whether to visualize. +- inputs: The input data, which is the raw data without preprocessing. +- preds: Predicted results of the model. +- show: Whether to visualize. - Other parameters: Passed in by the user and specified in `visualize_kwargs`. Return: @@ -221,12 +221,12 @@ This is an abstract method that must be implemented by the subclass. Input arguments: -- preds:The predicted results of the model, which is a `list` type. Each element in the list represents the prediction result for a single data item. In the OpenMMLab series of algorithm libraries, the type of each element in the prediction result is `BaseDataElement`. -- visualization:Visualization results +- preds: The predicted results of the model, which is a `list` type. Each element in the list represents the prediction result for a single data item. 
In the OpenMMLab series of algorithm libraries, the type of each element in the prediction result is `BaseDataElement`. +- visualization: Visualization results. - return_datasample: Whether to maintain datasample for return. When set to `False`, the returned result is converted to a `dict`. - Other parameters: Passed in by the user and specified in `postprocess_kwargs`. -Return: +Return: - The type of the returned value is a dictionary containing both the visualization and prediction results. OpenMMLab requires the returned dictionary to have two keys: `predictions` and `visualization`. @@ -234,9 +234,9 @@ Return: Input arguments: -- inputs:The input data, usually a list of image paths or image data. Each element in `inputs` can also be other types of data as long as it can be processed by the `pipeline` returned by [init_pipeline](#_init_pipeline). When there is only one inference data in `inputs`, it does not have to be a `list`, `__call__` will internally wrap it into a list for further processing. +- inputs: The input data, usually a list of image paths or image data. Each element in `inputs` can also be other types of data as long as it can be processed by the `pipeline` returned by [init_pipeline](#_init_pipeline). When there is only one inference data in `inputs`, it does not have to be a `list`; `__call__` will internally wrap it into a list for further processing. - return_datasample: Whether to convert datasample to dict for return. -- batch_size:Batch size for inference, which will be further passed to the `preprocess` function. +- batch_size: Batch size for inference, which will be further passed to the `preprocess` function. - Other parameters: Additional parameters assigned to `preprocess`, `forward`, `visualize`, and `postprocess` methods. 
Return: diff --git a/docs/en/design/logging.md b/docs/en/design/logging.md index 8110ef8579..68a976bfc1 100644 --- a/docs/en/design/logging.md +++ b/docs/en/design/logging.md @@ -74,11 +74,11 @@ history_buffer.min() # 1, the global minimum history_buffer.max(2) -# 3,the maximum in [2, 3] +# 3, the maximum in [2, 3] -history_buffer.min() +history_buffer.max() # 3, the global maximum history_buffer.mean(2) -# 2.5,the mean value in [2, 3], (2 + 3) / (1 + 1) +# 2.5, the mean value in [2, 3], (2 + 3) / (1 + 1) history_buffer.mean() # 2, the global mean, (1 + 2 + 3) / (1 + 1 + 1) history_buffer = HistoryBuffer([1, 2, 3], [2, 2, 2]) # Cases when counts are not 1 @@ -431,7 +431,7 @@ In the case of multiple processes in multiple nodes without storage, logs are or ```text # without shared storage -# node 0: +# node 0: work_dir/20230228_141908 ├── 20230306_183634_${hostname}_device0_rank0.log ├── 20230306_183634_${hostname}_device1_rank1.log @@ -442,7 +442,7 @@ work_dir/20230228_141908 ├── 20230306_183634_${hostname}_device6_rank6.log ├── 20230306_183634_${hostname}_device7_rank7.log -# node 7: +# node 7: work_dir/20230228_141908 ├── 20230306_183634_${hostname}_device0_rank56.log ├── 20230306_183634_${hostname}_device1_rank57.log diff --git a/docs/en/get_started/15_minutes.md b/docs/en/get_started/15_minutes.md index 7902c5dbee..7ec7ed09d7 100644 --- a/docs/en/get_started/15_minutes.md +++ b/docs/en/get_started/15_minutes.md @@ -1,6 +1,6 @@ # 15 minutes to get started with MMEngine -In this tutorial, we'll take training a ResNet-50 model on CIFAR-10 dataset as an example. We will build a complete and configurable pipeline for both training and validation in only 80 lines of code with `MMEgnine`. +In this tutorial, we'll take training a ResNet-50 model on the CIFAR-10 dataset as an example. We will build a complete and configurable pipeline for both training and validation in only 80 lines of code with `MMEngine`. 
The whole process includes the following steps: - [15 minutes to get started with MMEngine](#15-minutes-to-get-started-with-mmengine) diff --git a/docs/en/migration/hook.md b/docs/en/migration/hook.md index 0d4ac06dd2..a6276a2dd4 100644 --- a/docs/en/migration/hook.md +++ b/docs/en/migration/hook.md @@ -156,7 +156,7 @@ This tutorial compares the difference in function, mount point, usage and implem