fix docs and links
Lupin1998 committed Apr 27, 2023
1 parent f9367cc commit 1d06d87
Showing 14 changed files with 50 additions and 47 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -27,7 +27,7 @@ The main branch works with **PyTorch 1.8** (required by some self-supervised met
<summary>Major Features</summary>

- **Modular Design.**
-OpenMixup follows a similar code architecture of OpenMMLab projects, which decompose the framework into various components, and users can easily build a customized model by combining different modules. OpenMixup is also transplantable to OpenMMLab projects (e.g., [MMSelfSup](https://github.com/open-mmlab/mmselfsup)).
+OpenMixup follows a similar code architecture of OpenMMLab projects, which decomposes the framework into various components, and users can easily build a customized model by combining different modules. OpenMixup is also transplantable to OpenMMLab projects (e.g., [MMPreTrain](https://github.com/open-mmlab/mmpretrain)).

- **All in One.**
OpenMixup provides popular backbones, mixup methods, semi-supervised, and self-supervised algorithms. Users can perform image classification (CNN & Transformer) and self-supervised pre-training (contrastive and autoregressive) under the same framework.
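The modular composition described above can be sketched as an OpenMMLab-style Python config, where a model is assembled from interchangeable components. The component names below are illustrative placeholders, not the exact OpenMixup registry keys:

```python
# Hypothetical OpenMixup-style config sketch: a classification model is
# declared as nested dicts, with each `type` key naming a registered module.
model = dict(
    type='MixUpClassification',              # algorithm wrapper (assumed name)
    backbone=dict(type='ResNet', depth=50),  # feature extractor
    head=dict(type='ClsHead', num_classes=1000),
    alpha=0.2,                               # Beta(alpha, alpha) for mixup sampling
)

# Swapping the backbone is a one-line change; the rest of the config is reused.
model['backbone'] = dict(type='SwinTransformer', arch='tiny')
```

This dict-based style is what makes components easy to mix and match: the framework builds objects from the `type` fields, so changing one entry swaps an entire module.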
@@ -246,8 +246,8 @@ This project is released under the [Apache 2.0 license](LICENSE). See `LICENSE`

## Acknowledgement

-- OpenMixup is an open-source project for mixup methods created by researchers in **CAIRI AI Lab**. We encourage researchers interested in visual representation learning and mixup methods to contribute to OpenMixup!
-- This repo borrows the architecture design and part of the code from [MMSelfSup](https://github.com/open-mmlab/mmselfsup) and [MMClassification](https://github.com/open-mmlab/mmclassification).
+- OpenMixup is an open-source project for mixup methods and visual representation learning created by researchers in **CAIRI AI Lab**. We encourage researchers interested in backbone architectures, mixup augmentations, and self-supervised learning methods to contribute to OpenMixup!
+- This project borrows the architecture design and part of the code from [MMPreTrain](https://github.com/open-mmlab/mmpretrain) and the official implementations of supported algorithms.

<p align="right">(<a href="#top">back to top</a>)</p>

@@ -269,7 +269,7 @@ If you find this project useful in your research, please consider star `OpenMixu

## Contributors and Contact

-For help, new features, or reporting bugs associated with OpenMixup, please open a [GitHub issue](https://github.com/Westlake-AI/openmixup/issues) and [pull request](https://github.com/Westlake-AI/openmixup/pulls) with the tag "help wanted" or "enhancement". For now, the direct contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)), and Zicheng Liu ([@pone7](https://github.com/pone7)). We thank all public contributors and contributors from MMSelfSup and MMClassification!
+For help, new features, or reporting bugs associated with OpenMixup, please open a [GitHub issue](https://github.com/Westlake-AI/openmixup/issues) and [pull request](https://github.com/Westlake-AI/openmixup/pulls) with the tag "help wanted" or "enhancement". For now, the direct contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)), and Zicheng Liu ([@pone7](https://github.com/pone7)). We thank all public contributors and contributors from MMPreTrain (MMSelfSup and MMClassification)!

This repo is currently maintained by:

3 changes: 1 addition & 2 deletions configs/classification/imagenet/automix/README.md
@@ -30,8 +30,7 @@ Data mixing augmentation has proved to be effective in improving the generaliza
| Swin-T | AutoMix | 224x224 | 28.29 | 300 | 81.80 | [config](./swin/swin_t_l2_a2_near_lam_cat_switch0_8_8x128_ep300.py) | model / log |
| ConvNeXt-T | AutoMix | 224x224 | 28.59 | 300 | 82.28 | [config](./convnext/convnext_t_l2_a2_near_lam_cat_switch0_8_8x128_accu4_ep300.py) | model / log |


-We will update configs and models for AutoMix soon. Please refer to [Model Zoo](https://github.com/Westlake-AI/openmixup/tree/main/docs/en/model_zoos/Model_Zoo_sup.md) for image classification results.
+We will update configs and models (ResNets, ViTs, Swin-T, and ConvNeXt-T) for AutoMix soon (please contact us if you want the models right now). Please refer to [Model Zoo](https://github.com/Westlake-AI/openmixup/tree/main/docs/en/model_zoos/Model_Zoo_sup.md) for image classification results.
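For context on the mixup family that AutoMix extends, the hand-crafted input-mixup baseline (Zhang et al., 2018) is a simple convex combination of two samples and their labels. AutoMix learns the mixing policy end-to-end instead, so the sketch below only illustrates the underlying interpolation, not AutoMix itself:

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.2, rng=None):
    """Classic input mixup: blend two samples and their one-hot labels
    with a ratio lam drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # mixed input
    y = lam * y1 + (1.0 - lam) * y2   # mixed (soft) label
    return x, y, lam

# Toy example: mix an all-ones "image" with an all-zeros one.
x, y, lam = mixup(np.ones((3, 3)), np.zeros((3, 3)),
                  np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Here the mixed image equals `lam` everywhere and the soft label is `[lam, 1 - lam]`, which is exactly the linear-interpolation prior that learned methods such as AutoMix and SAMix replace with a trainable mix block.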


## Citation
15 changes: 9 additions & 6 deletions configs/classification/imagenet/context_cluster/README.md
@@ -18,12 +18,15 @@ This page is based on the [official repo](https://github.com/ma-xu/Context-Clust

| Model | Params(M) | Flops(G) | Top-1 (%) | Throughputs | Config | Download |
| :---: | :-------: | :------: | :-------: | :---------: | :----: | :------: |
-| ContextCluster-tiny\* | 5.3 | 1.0 | 71.8 | 518.4 | [config](coc_tiny_8xb256_ep300.py) | [model](https://drive.google.com/drive/folders/1Q_6W3xKMX63aQOBaqiwX5y1fCj4hVOIA?usp=sharing) |
-| ContextCluster-tiny_plain\* | 5.3 | 1.0 | 72.9 | - | [config](coc_tiny_plain_8xb256_ep300.py) | [model](https://web.northeastern.edu/smilelab/xuma/ContextCluster/checkpoints/coc_tiny_plain/coc_tiny_plain.pth.tar) |
-| ContextCluster-small\* | 5.3 | 1.0 | 71.8 | 518.4 | [config](coc_small_8xb256_ep300.py) | [model](https://drive.google.com/drive/folders/1WSmnbSgy1I1HOTTTAQgOKEzXSvd3Kmh-?usp=sharing) |
-| ContextCluster-medium\* | 5.3 | 1.0 | 71.8 | 518.4 | [config](coc_medium_8xb256_ep300.py) | [model](https://drive.google.com/drive/folders/1sPxnEHb2AHDD9bCQh6MA0I_-7EBrvlT5?usp=sharing) |
+| ContextCluster-tiny\* | 5.6 | 1.10 | 71.8 | 518.4 | [config](coc_tiny_8xb256_ep300.py) | [model](https://drive.google.com/drive/folders/1Q_6W3xKMX63aQOBaqiwX5y1fCj4hVOIA?usp=sharing) |
+| ContextCluster-tiny_plain\* (w/o region partition) | 5.6 | 1.10 | 72.9 | - | [config](coc_tiny_plain_8xb256_ep300.py) | [model](https://web.northeastern.edu/smilelab/xuma/ContextCluster/checkpoints/coc_tiny_plain/coc_tiny_plain.pth.tar) |
+| ContextCluster-small\* | 14.7 | 2.78 | 77.5 | 513.0 | [config](coc_small_8xb256_ep300.py) | [model](https://drive.google.com/drive/folders/1WSmnbSgy1I1HOTTTAQgOKEzXSvd3Kmh-?usp=sharing) |
+| ContextCluster-medium\* | 29.3 | 5.90 | 81.0 | 325.2 | [config](coc_medium_8xb256_ep300.py) | [model](https://drive.google.com/drive/folders/1sPxnEHb2AHDD9bCQh6MA0I_-7EBrvlT5?usp=sharing) |
+| ContextCluster-tiny | 5.6 | 1.10 | 72.7 | 518.4 | [config](coc_tiny_8xb256_ep300.py) | [model](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/coc_tiny_8xb256_ep300.pth) \| [log](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/) |
+| ContextCluster-tiny_plain (w/o region partition) | 5.6 | 1.10 | 73.2 | - | [config](coc_tiny_plain_8xb256_ep300.py) | [model](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/coc_tiny_plain_8xb256_ep300.pth) \| [log](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/coc_tiny_plain_8xb256_ep300.log.json) |
+| ContextCluster-small | 14.7 | 2.78 | 77.7 | 513.0 | [config](coc_small_8xb256_ep300.py) | [model](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/coc_small_8xb256_ep300.pth) \| [log](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/coc_small_8xb256_ep300.log.json) |

-We follow the original training setting provided by the [official repo](https://github.com/ma-xu/Context-Cluster). *Models with * are converted from the [official repo](https://github.com/ma-xu/Context-Cluster).*
+We follow the original training setting provided by the [official repo](https://github.com/ma-xu/Context-Cluster) to reproduce better performance of ContextCluster variants. *Models with * are converted from the [official repo](https://github.com/ma-xu/Context-Cluster).*

## Citation

2 changes: 1 addition & 1 deletion configs/classification/imagenet/convnext/README.md
@@ -18,7 +18,7 @@ This page is based on documents in [MMClassification](https://github.com/open-mm

| Model | Pretrain | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :-----------: | :----------: | :-------: | :------: | :-------: | :-------: | :-------------------------------------------------------------------: | :---------------------------------------------------------------------: |
-| ConvNeXt-T | From scratch | 28.59 | 4.46 | 82.16 | 95.91 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext/convnext_tiny_8xb256_accu2_fp16_ep300.py) | model | log |
+| ConvNeXt-T | From scratch | 28.59 | 4.46 | 82.16 | 95.91 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext/convnext_tiny_8xb256_accu2_fp16_ep300.py) | [model](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/convnext_small_8xb128_accu4_fp16_ep300.pth) \| [log](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/convnext_small_8xb128_accu4_fp16_ep300.log.json) |
| ConvNeXt-T\* | From scratch | 28.59 | 4.46 | 82.05 | 95.86 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext/convnext_tiny_8xb256_accu2_fp16_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-tiny_3rdparty_32xb128_in1k_20220124-18abde00.pth) |
| ConvNeXt-S\* | From scratch | 50.22 | 8.69 | 83.13 | 96.44 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext/convnext_small_8xb128_accu4_fp16_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-small_3rdparty_32xb128_in1k_20220124-d39b5192.pth) |
| ConvNeXt-B\* | From scratch | 88.59 | 15.36 | 83.85 | 96.74 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext/convnext_base_8xb128_accu4_fp16_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/convnext/convnext-base_3rdparty_32xb128_in1k_20220124-d0915162.pth) |
2 changes: 1 addition & 1 deletion configs/classification/imagenet/deit/README.md
@@ -20,7 +20,7 @@ This page is based on documents in [MMClassification](https://github.com/open-mm
| :-------: | :----------: | :-------: | :------: | :-------: | :-------: | :-------------------------------------------------------------------: | :---------------------------------------------------------------------: |
| DeiT-T | From scratch | 5.72 | 1.08 | 73.56 | 91.16 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_tiny_8xb128_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny_pt-4xb256_in1k_20220218-13b382a0.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny_pt-4xb256_in1k_20220218-13b382a0.log.json) |
| DeiT-T\* | From scratch | 5.72 | 1.08 | 72.20 | 91.10 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_tiny_8xb128_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-tiny-distilled_3rdparty_pt-4xb256_in1k_20211216-c429839a.pth) |
-| DeiT-S | From scratch | 22.05 | 4.24 | 79.93 | 95.14 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_small_8xb128_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-small_pt-4xb256_in1k_20220218-9425b9bb.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/deit/deit-small_pt-4xb256_in1k_20220218-9425b9bb.log.json) |
+| DeiT-S | From scratch | 22.05 | 4.24 | 79.93 | 95.14 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_small_8xb128_ep300.py) | [model](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/deit_small_8xb128_ep300.pth) \| [log](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/deit_small_8xb128_ep300.log.json) |
| DeiT-S\* | From scratch | 22.05 | 4.24 | 79.90 | 95.10 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_small_8xb128_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-small-distilled_3rdparty_pt-4xb256_in1k_20211216-4de1d725.pth) |
| DeiT-B | From scratch | 86.57 | 16.86 | 81.82 | 95.57 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_base_8xb128_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_pt-16xb64_in1k_20220216-db63c16c.pth) \| [log](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_pt-16xb64_in1k_20220216-db63c16c.log.json) |
| DeiT-B\* | From scratch | 86.57 | 16.86 | 81.80 | 95.60 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/deit_base_8xb128_ep300.py) | [model](https://download.openmmlab.com/mmclassification/v0/deit/deit-base_3rdparty_pt-16xb64_in1k_20211124-6f40c188.pth) |
9 changes: 4 additions & 5 deletions configs/classification/imagenet/lit_v2/README.md
@@ -16,11 +16,10 @@ Vision Transformers (ViTs) have triggered the most recent and significant breakt

| Model | Pretrain | resolution | Params(M) | Flops(G) | Throughput (imgs/s) | Top-1 (%) | Config | Download |
| :-------: | :----------: | :--------: | :-------: | :------: | :-------: | :-------: | :-----------------------------------------------------------------: | :-------------------------------------------------------------------: |
-| LITv2-S | From scratch | 224x224 | 28 | 3.7 | 1,471 | 81.7 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_small_8xb128_cos_fp16_ep300.py) | model / log |
-| LITv2-S\* | From scratch | 224x224 | 28 | 3.7 | 1,471 | 82.0 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_small_8xb128_cos_fp16_ep300.py) | [model](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_s.pth) / [log](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_m_log.txt) |
-| LITv2-M\* | From scratch | 224x224 | 49 | 7.5 | 812 | 83.3 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_medium_8xb128_cos_fp16_ep300.py) | [model](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_m.pth) / [log](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_m_log.txt) |
-| LITv2-B\* | From scratch | 224x224 | 87 | 13.2 | 602 | 84.7 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_base_8xb128_cos_fp16_ep300.py) | [model](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_b.pth) / [log](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_b_log.txt) |
-
+| LITv2-S | From scratch | 224x224 | 28 | 3.7 | 1,471 | 81.7 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_small_8xb128_cos_fp16_ep300.py) | [model](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/lit_v2_small_8xb128_cos_fp16_ep300.pth) \| [log](https://github.com/Westlake-AI/openmixup/releases/download/open-in1k-weights/lit_v2_small_8xb128_cos_fp16_ep300.log.json) |
+| LITv2-S\* | From scratch | 224x224 | 28 | 3.7 | 1,471 | 82.0 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_small_8xb128_cos_fp16_ep300.py) | [model](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_s.pth) / [log](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_m_log.txt) |
+| LITv2-M\* | From scratch | 224x224 | 49 | 7.5 | 812 | 83.3 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_medium_8xb128_cos_fp16_ep300.py) | [model](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_m.pth) / [log](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_m_log.txt) |
+| LITv2-B\* | From scratch | 224x224 | 87 | 13.2 | 602 | 84.7 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/lit_v2_base_8xb128_cos_fp16_ep300.py) | [model](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_b.pth) / [log](https://github.com/ziplab/LITv2/releases/download/v1.0/litv2_b_log.txt) |

We follow the original training setting provided by the [official repo](https://github.com/ziplab/LITv2), and throughput is averaged over 30 runs. *Note that models with \* are converted from the [official repo](https://github.com/ziplab/LITv2).* We reproduce LITv2-S training for 300 epochs.
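Throughput figures like those above are typically measured by timing repeated forward passes and averaging, as in the 30-run protocol mentioned above. The sketch below illustrates that idea with a NumPy matmul standing in for the model; the warm-up count and helper names are assumptions for illustration, not the official benchmark script:

```python
import time
import numpy as np

def throughput(model_fn, batch, n_runs=30, warmup=3):
    """Average images/second of `model_fn` on `batch` over n_runs timed
    passes, after a few untimed warm-up passes (which would absorb e.g.
    CUDA kernel compilation on a real GPU setup)."""
    for _ in range(warmup):
        model_fn(batch)                       # warm-up, not timed
    rates = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        model_fn(batch)
        rates.append(len(batch) / (time.perf_counter() - t0))
    return float(np.mean(rates))

# Stand-in "model": a linear layer on a batch of 64 flattened 32x32x3 images.
w = np.random.rand(3072, 1000)
imgs_per_sec = throughput(lambda b: b @ w, np.random.rand(64, 3072), n_runs=5)
```

On real hardware the batch size, input resolution, and device synchronization all affect the number, which is why the table reports the measurement setup alongside the result.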

2 changes: 1 addition & 1 deletion configs/classification/imagenet/samix/README.md
@@ -26,7 +26,7 @@ Mixup is a popular data-dependent augmentation technique for deep neural network
| ResNet-101 | SAMix | 224x224 | 42.51 | 300 | 80.98 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/samix/basic/r101_l2_a2_bili_val_dp01_mul_mb_mlr1e_3_bb_mlr0_4xb64.py) | model / log |
| ResNeXt-101 | SAMix | 224x224 | 44.18 | 100 | 80.89 | [config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/samix/basic/rx101_l2_a2_bili_val_dp01_mul_mb_mlr1e_3_bb_mlr0_4xb64.py) | model / log |

-We will update configs and models for SAMix soon. Please refer to [Model Zoo](https://github.com/Westlake-AI/openmixup/tree/main/docs/en/model_zoos/Model_Zoo_sup.md) for image classification results.
+We will update configs and models (ResNets, ViTs, Swin-T, and ConvNeXt-T) for SAMix soon (please contact us if you want the models right now). Please refer to [Model Zoo](https://github.com/Westlake-AI/openmixup/tree/main/docs/en/model_zoos/Model_Zoo_sup.md) for image classification results.

## Citation

