We provide installation instructions for pre-training and fine-tuning experiments here.
Install OpenMixup>=0.2.7 for A2MIM experiments. The following steps set up a new conda virtual environment; adjust the PyTorch version to match your own CUDA environment.
conda create -n a2mim python=3.8 -y
conda activate a2mim
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py install
cd ..
rm -r openmixup # you can keep the source code to view implementation details
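As an optional sanity check (assuming the steps above completed without errors), you can confirm that PyTorch sees CUDA and that openmixup imports cleanly:
# optional sanity check: prints the PyTorch version and whether CUDA is visible
python -c "import torch, openmixup; print(torch.__version__, torch.cuda.is_available())"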
Then, you can set up MMDetection and MMSegmentation for downstream tasks.
pip install mmdet
pip install mmsegmentation
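You can likewise confirm that both downstream toolboxes are importable (a minimal check; the version attributes follow the usual MM-series convention):
# optional check: both packages should import and report their versions
python -c "import mmdet, mmseg; print(mmdet.__version__, mmseg.__version__)"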
It is recommended to symlink your dataset root (assumed to be $DATA_ROOT) to $A2MIM/data by running ln -s $DATA_ROOT ./data. If your folder structure is different, you may need to change the corresponding paths in the config files.
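For example, if your datasets live under /path/to/datasets (an illustrative placeholder, not a required location), the symlink can be created from the repository root:
# run from the repository root ($A2MIM); /path/to/datasets is a placeholder for your own dataset root
ln -s /path/to/datasets ./data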
Prepare the meta files of ImageNet from OpenMixup with the following scripts:
mkdir data/meta
cd data/meta
wget https://github.com/Westlake-AI/openmixup/releases/download/dataset/meta.zip
unzip meta.zip
rm meta.zip
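After unzipping, you can list the extracted files to confirm the 'ImageList' annotations are in place and then return to the repository root (the exact file names depend on the downloaded meta.zip):
ls        # still inside data/meta; the ImageList annotation files should be listed here
cd ../..  # return to the repository root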
Download the ImageNet-1K classification dataset (train and val splits) and organize it as shown in the folder tree below.
Download COCO2017 and prepare COCO experiments according to the guidelines in MMDetection.
Prepare ADE20K according to the guidelines in MMSegmentation. Please use the 2016 version of the ADE20K dataset, which can be downloaded from ADEChallengeData2016 or Baidu Cloud (7ycz).
In the end, the folder structure should look like this:
root
├── configs
├── data
│   ├── ade
│   ├── coco
│   ├── meta [used for 'ImageList' dataset]
│   ├── ImageNet
│   │   ├── train
│   │   │   ├── n01440764
│   │   │   ├── n01443537
│   │   │   ...
│   │   │   ├── n15075141
│   │   ├── val
│   │   │   ├── ILSVRC2012_val_00000001.JPEG
│   │   │   ...
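As a final optional check from the repository root, you can verify the layout (the expected counts below are the standard ImageNet-1K figures, and the folder names match the tree above):
# quick structural check: ImageNet-1K has 1,000 train class folders and 50,000 val images
ls data/ImageNet/train | wc -l    # expect 1000
ls data/ImageNet/val | wc -l      # expect 50000
ls data/ade data/coco data/meta   # the downstream datasets and meta files should all be present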