add parameter --max-epoch #165
Conversation
Hi Luca! It seems in this PR that you consider that if … However, that's not normally the case: we want to train with early stopping until we reach a max epoch. I usually use it as a max-epoch threshold in early-stop training too (to avoid training for too long). The training methods of the different model implementations actually use early stopping combined with max-epoch.
Yes, I thought that the current default values with earlyStop=True are good enough, because training is going to stop anyway after 20-30 iterations.
Well not necessarily, especially if patience is set higher.
Personally I rely on max-epoch with early stop to avoid training too long.
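To make the combination concrete, here is a minimal sketch of a training loop that uses both mechanisms together, as described above. All names (`train`, `model_step`, `eval_step`) are hypothetical and not the project's actual API; it only illustrates how a patience-based early stop sits inside a hard `max_epoch` cap:

```python
# Hypothetical sketch (function and parameter names assumed):
# patience-based early stopping, bounded by a hard max_epoch cap
# so training never runs longer than the configured limit.
def train(model_step, eval_step, max_epoch=50, patience=5):
    best_score = float("-inf")
    epochs_without_improvement = 0
    for epoch in range(max_epoch):        # hard cap on total epochs
        model_step(epoch)                 # one epoch of training
        score = eval_step(epoch)          # evaluate on dev set
        if score > best_score:
            best_score = score
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                     # early stop before the cap
    return best_score
```

Whichever limit is hit first ends the run: the patience counter when the dev score plateaus, or `max_epoch` otherwise.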
Personally I am not a fan of all these command-line parameters (I prefer a config file or simply a dict somewhere; it's more readable), but yes, that would be consistent with the training implementation.
OK, then I will decouple early stop and max_epoch.
I agree that the number of parameters is getting a bit hard to follow.
So far I've implemented option 1, because it seems less confusing; I can change it if requested.
This PR adds the parameter --max-epoch, which overrides the default value and disables early stopping.
I've implemented it only for sequence labelling.
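For illustration, wiring such a flag up with `argparse` might look like the following sketch. The flag names and defaults here are assumptions, not the project's actual CLI; it just shows one way to accept `--max-epoch` while leaving early stopping controlled by a separate, decoupled flag:

```python
# Hypothetical CLI sketch (flag names and defaults assumed):
# --max-epoch caps training length; early stopping is a separate
# switch, so the two can be combined or used independently.
import argparse

parser = argparse.ArgumentParser(description="training options (sketch)")
parser.add_argument("--max-epoch", type=int, default=50,
                    help="maximum number of training epochs")
parser.add_argument("--no-early-stop", action="store_true",
                    help="disable patience-based early stopping")

args = parser.parse_args(["--max-epoch", "100"])
# args.max_epoch == 100; early stopping remains enabled by default
```

With this shape, passing only `--max-epoch` raises the cap without turning early stopping off, matching the decoupling discussed above.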