# A continual learning survey: Defying forgetting in classification tasks

This is the original source code for the Continual Learning survey paper _"A continual learning survey: Defying forgetting in classification tasks"_, published in TPAMI [[TPAMI paper]](https://ieeexplore.ieee.org/abstract/document/9349197) [[Open-Access paper]](https://arxiv.org/abs/1909.08383):

```
@article{de2019continual,
  title={A continual learning survey: Defying forgetting in classification tasks},
  author={De Lange, Matthias and Aljundi, Rahaf and Masana, Marc and Parisot, Sarah and Jia, Xu and Leonardis, Ale{\v{s}} and Slabaugh, Gregory and Tuytelaars, Tinne},
  journal={arXiv preprint arXiv:1909.08383},
  year={2019}
}
```

This work allows comparing the state-of-the-art in a fair fashion using the **Continual Hyperparameter Framework**, which sets the hyperparameters dynamically based on the stability-plasticity dilemma.
This addresses the longstanding problem in the literature of setting hyperparameters for different methods in a fair way, using ONLY the current task data (hence without iid validation data, which is not available in continual learning).
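
A minimal sketch of this two-phase scheme, assuming user-supplied training and evaluation callables; the names `set_stability_hyperparam`, `train_plastic`, `train_stable`, `h_init`, `decay`, and `accept` are illustrative, not the repository's actual API:

```python
def set_stability_hyperparam(train_plastic, train_stable, evaluate,
                             h_init=1.0, decay=0.5, accept=0.8):
    """Set a CL method's stability hyperparameter using ONLY current-task data.

    train_plastic(): finetune on the new task without CL regularization,
                     giving the maximal-plasticity reference model.
    train_stable(h): train the CL method with stability strength h
                     (e.g. the regularization weight of EWC/MAS/SI).
    evaluate(model): accuracy on held-out current-task data.
    """
    # Phase 1 -- maximal plasticity search: the best the model can do on
    # the new task when it is fully plastic (plain finetuning).
    ref_acc = evaluate(train_plastic())

    # Phase 2 -- stability decay: start highly stable, then relax until the
    # new task reaches an acceptable fraction of the finetuning accuracy.
    h = h_init
    while True:
        model = train_stable(h)
        if evaluate(model) >= accept * ref_acc:
            return model, h  # stable enough, with sufficient plasticity
        h *= decay  # too rigid on the new task: lower the stability strength
```

Since decaying `h` towards 0 reduces the method to plain finetuning, the loop regains enough plasticity to terminate in practice.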

The code contains a generalizing framework for 11 SOTA methods and 4 baselines in PyTorch.<br>
Implemented task-incremental methods are:

<div align="center">
<p align="center"><b>
SI | EWC | MAS | mean/mode-IMM | LWF | EBLL | PackNet | HAT | GEM | iCaRL
</b></p>
</div>

The 4 baselines are:
- Joint: Learn from all task data at once with a single head (multi-task learning baseline).
- Finetuning: Standard SGD on each new task (no forgetting mechanism).
- Finetuning with Full Memory replay: Allocate memory dynamically to incoming tasks.
- Finetuning with Partial Memory replay: Divide memory a priori over all tasks (both memory schemes are sketched below).
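
The difference between the two replay baselines is only in how the fixed exemplar budget is split over tasks. A minimal sketch, assuming even integer division; the function names and division policy are illustrative, not the repository's exact bookkeeping:

```python
def partial_memory_quota(capacity: int, n_total_tasks: int) -> int:
    """Partial Memory replay: the exemplar budget is divided a priori over
    ALL tasks, so each task's share is fixed before training starts."""
    return capacity // n_total_tasks

def full_memory_quota(capacity: int, n_seen_tasks: int) -> int:
    """Full Memory replay: the whole budget is re-divided dynamically over
    the tasks seen so far, shrinking older shares as new tasks arrive."""
    return capacity // n_seen_tasks

# e.g. with capacity=1000 and 10 tasks in total:
#   partial: every task always stores 1000 // 10 = 100 exemplars
#   full:    task 1 starts with 1000, drops to 500 after task 2, and so on
```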