
Commit f70609a: Update README.md (1 parent: bae3c36)

1 file changed: README.md (+34 -19 lines)

# A continual learning survey: Defying forgetting in classification tasks

This is the original source code for the Continual Learning survey paper _"A continual learning survey: Defying forgetting in classification tasks"_, published at TPAMI [[TPAMI paper]](https://ieeexplore.ieee.org/abstract/document/9349197) [[Open-Access paper]](https://arxiv.org/abs/1909.08383).

This work allows comparing the state-of-the-art in a fair fashion using the **Continual Hyperparameter Framework**, which sets the hyperparameters dynamically based on the stability-plasticity dilemma.
This addresses the longstanding problem in the literature of setting hyperparameters fairly for the different methods, using ONLY the current task data (hence without iid validation data, which is not available in continual learning).
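
Roughly speaking, the framework first establishes a plain-finetuning reference accuracy on the new task (maximal plasticity), then decays the method's stability-related hyperparameter until the new task reaches an acceptable fraction of that reference. The sketch below only illustrates this loop; the helper callables, default values, and function name are assumptions for illustration, not the actual interface of this repository.

```python
# Minimal sketch of the dynamic hyperparameter selection loop.
# finetune(), train_with_method() and the default values are hypothetical
# placeholders, NOT this repository's actual API.

def select_stability_hyperparam(finetune, train_with_method,
                                init_value=400.0, decay=0.5, margin=0.8):
    """Pick a stability hyperparameter using ONLY the current task's data.

    finetune():               returns the new-task accuracy of plain finetuning
                              (the maximal-plasticity reference).
    train_with_method(value): trains with the continual-learning method at the
                              given hyperparameter value and returns
                              (trained_model, new_task_accuracy).
    """
    ref_acc = finetune()              # phase 1: maximal plasticity reference
    value = init_value                # phase 2: start from a stable setting
    while True:
        model, acc = train_with_method(value)
        if acc >= margin * ref_acc:   # enough plasticity on the current task
            return model, value
        value *= decay                # relax stability and retrain; as value
                                      # shrinks, training approaches finetuning
```

Re-running such a loop for every incoming task is what makes the hyperparameters dynamic: the stability-plasticity trade-off is re-balanced as the task sequence evolves.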

The code contains a generalizing framework for 11 SOTA methods and 4 baselines in PyTorch.<br>
Implemented task-incremental methods are:

<div align="center">
<p align="center"><b>
SI | EWC | MAS | mean/mode-IMM | LWF | EBLL | PackNet | HAT | GEM | iCaRL
</b></p>
</div>

These are compared with 4 baselines:

<div align="center">
<p align="center"><b>
Joint | Finetuning | Finetuning-FM | Finetuning-PM
</b></p>
</div>

- **Joint**: Learn from all task data at once with a single head (multi-task learning baseline).
- **Finetuning**: standard SGD (no mechanism against forgetting).
- **Finetuning with Full Memory replay (Finetuning-FM)**: Allocate replay memory dynamically to incoming tasks.
- **Finetuning with Partial Memory replay (Finetuning-PM)**: Divide replay memory a priori over all tasks (see the sketch below for the difference).
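
The two replay baselines differ only in how a fixed exemplar budget is split over tasks. The sketch below shows one natural reading of that difference; the equal-split policy and the numbers are illustrative assumptions, not necessarily how this repository distributes exemplars.

```python
# Illustration of the two replay-memory policies above (equal splits assumed).

def full_memory_quota(capacity, tasks_seen):
    """Finetuning-FM: the whole buffer is re-divided over the tasks seen so far,
    so per-task quotas shrink as new tasks arrive."""
    return capacity // tasks_seen

def partial_memory_quota(capacity, total_tasks):
    """Finetuning-PM: the buffer is divided a priori over ALL tasks,
    so every task keeps the same fixed quota from the start."""
    return capacity // total_tasks

# Example: a 1000-exemplar buffer and a 10-task sequence.
# After task 3: FM keeps 1000 // 3 = 333 exemplars per seen task,
#               PM keeps 1000 // 10 = 100 exemplars per task.
print(full_memory_quota(1000, 3), partial_memory_quota(1000, 10))  # 333 100
```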

This source code is released under an Attribution-NonCommercial 4.0 International
license, find out more about it in the [LICENSE file](LICENSE).

## Pipeline
**Reproducibility**: Results from the paper can be obtained from [src/main_'dataset'.sh](src/main_tinyimagenet.sh).
Full pipeline example in [src/main_tinyimagenet.sh](src/main_tinyimagenet.sh).

**Pipeline**: Constructing a custom pipeline typically requires the following steps.

@@ -79,6 +86,14 @@ The class "YourMethod" will call this code for training/eval/processing of a sin
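
As indicated above, a "YourMethod" class handles training/eval/processing of a single task. A hypothetical sketch of such a per-task wrapper is given below; the class name, method names, and signatures are illustrative assumptions, not this repository's actual interface.

```python
# Hypothetical per-task wrapper in the spirit of the "YourMethod" class mentioned
# above. All names and signatures here are illustrative, not the repo's API.

class YourMethodSketch:
    """One continual-learning method; the pipeline invokes it once per task."""

    def __init__(self, hyperparams):
        self.hyperparams = hyperparams  # e.g. the stability value chosen by the framework

    def train_task(self, model, train_loader):
        """Train on a single task, applying the method's anti-forgetting mechanism."""
        raise NotImplementedError

    def eval_task(self, model, test_loader):
        """Evaluate the task-specific head (task-incremental setting)."""
        raise NotImplementedError

    def process_after_task(self, model, train_loader):
        """Post-processing before the next task, e.g. estimating parameter
        importance or storing exemplars, depending on the method."""
        raise NotImplementedError
```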

## Credits
- Consider citing our work upon using this repo.

```
@ARTICLE{delange2021clsurvey,
  author={M. {Delange} and R. {Aljundi} and M. {Masana} and S. {Parisot} and X. {Jia} and A. {Leonardis} and G. {Slabaugh} and T. {Tuytelaars}},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={A continual learning survey: Defying forgetting in classification tasks},
  year={2021},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TPAMI.2021.3057446}}
```
- Thanks to Huawei for funding this project.
- Thanks to the following repositories:
  - https://github.com/rahafaljundi/MAS-Memory-Aware-Synapses
