rephrase improving versus changing
kermitt2 committed Nov 29, 2023
1 parent f4c9d89 commit 194767d
Showing 5 changed files with 9 additions and 9 deletions.
4 changes: 2 additions & 2 deletions delft/applications/datasetTagger.py
@@ -306,8 +306,8 @@ def annotate_text(texts, output_format, architecture='BidLSTM_CRF', features=None
parser.add_argument("--max-epoch", type=int, default=-1,
help="Maximum number of epochs.")
parser.add_argument("--early-stop", type=t_or_f, default=None,
help="Force training early termination when evaluation scores at the end of "
"n epochs are not changing.")
help="Force early training termination when evaluation metrics are not improving " +
"for a number of epochs equal to the patience parameter.")
parser.add_argument("--multi-gpu", default=False,
help="Enable support for distributed computing (the batch size needs to be set accordingly using --batch-size)",
action="store_true")
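The rewritten help text describes patience-based early stopping: training halts once the monitored evaluation metric has failed to improve for a number of epochs equal to the patience. As a rough sketch (an illustration, not DeLFT's actual training loop), the stopping check could look like:

```python
def should_stop(scores, patience):
    # scores: evaluation metric per epoch (higher is better).
    # Stop when the last `patience` epochs brought no improvement
    # over the best score seen before them.
    if len(scores) <= patience:
        return False
    best_before = max(scores[:-patience])
    return max(scores[-patience:]) <= best_before
```

For example, with `patience=2`, the score sequence `[0.85, 0.90, 0.90, 0.89]` triggers a stop, while `[0.85, 0.90, 0.91]` does not.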
4 changes: 2 additions & 2 deletions delft/applications/grobidTagger.py
@@ -422,8 +422,8 @@ class Tasks:
parser.add_argument("--max-epoch", type=int, default=-1,
help="Maximum number of epochs for training.")
parser.add_argument("--early-stop", type=t_or_f, default=None,
help="Force training early termination when evaluation scores at the end of "
"n epochs are not changing.")
help="Force early training termination when evaluation metrics are not improving " +
"for a number of epochs equal to the patience parameter.")

parser.add_argument("--multi-gpu", default=False,
help="Enable support for distributed computing (the batch size needs to be set accordingly using --batch-size)",
4 changes: 2 additions & 2 deletions delft/applications/insultTagger.py
@@ -137,8 +137,8 @@ def annotate(texts, output_format, architecture='BidLSTM_CRF', transformer=None,
help="Maximum number of epochs.")
parser.add_argument("--batch-size", type=int, default=-1, help="batch-size parameter to be used.")
parser.add_argument("--early-stop", type=t_or_f, default=None,
help="Force training early termination when evaluation scores at the end of "
"n epochs are not changing.")
help="Force early training termination when evaluation metrics are not improving " +
"for a number of epochs equal to the patience parameter.")

parser.add_argument("--multi-gpu", default=False,
help="Enable support for distributed computing (the batch size needs to be set accordingly using --batch-size)",
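Each `--early-stop` flag above passes `type=t_or_f`, a helper defined elsewhere in DeLFT and not shown in this diff. A plausible sketch of such a string-to-boolean converter for argparse (the exact accepted spellings here are an assumption):

```python
import argparse

def t_or_f(value):
    # Map common truthy/falsy spellings to booleans; reject anything else
    # so argparse reports a clear error instead of silently coercing.
    v = str(value).strip().lower()
    if v in ("true", "t", "yes", "y", "1"):
        return True
    if v in ("false", "f", "no", "n", "0"):
        return False
    raise argparse.ArgumentTypeError(f"boolean value expected, got {value!r}")
```

With a converter like this, `--early-stop True` and `--early-stop f` both parse cleanly, while `--early-stop maybe` fails with a usage error.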
4 changes: 2 additions & 2 deletions delft/applications/nerTagger.py
@@ -635,8 +635,8 @@ def annotate(output_format,
parser.add_argument("--max-epoch", type=int, default=-1,
help="Maximum number of epochs.")
parser.add_argument("--early-stop", type=t_or_f, default=None,
help="Force training early termination when evaluation scores at the end of "
"n epochs are not changing.")
help="Force early training termination when evaluation metrics are not improving " +
"for a number of epochs equal to the patience parameter.")

parser.add_argument("--multi-gpu", default=False,
help="Enable support for distributed computing (the batch size needs to be set accordingly using --batch-size)",
2 changes: 1 addition & 1 deletion doc/ner.md
@@ -34,7 +34,7 @@ Results with transformer fine-tuning for CoNLL-2003 NER dataset, including a fin
| --- | --- | --- | --- |
| BERT | bert-base-cased | DeLFT | 91.19 |
| BERT_CRF | bert-base-cased +CRF| DeLFT | 91.25 |
| BERT_ChainCRF | bert-base-cased +CRF| DeLFT | |
| BERT_ChainCRF | bert-base-cased +CRF| DeLFT | 91.22 |
| BERT | roberta-base | DeLFT | 91.64 |

Note: DeLFT uses `BERT` as the architecture name for transformers in general, but the transformer model can in principle be any transformer variant present on the HuggingFace Hub. DeLFT supports two implementations of a CRF layer that can be combined with RNN and transformer architectures: `CRF`, based on TensorFlow Addons, and `ChainCRF`, a custom implementation. Both should produce similar accuracy results, but `ChainCRF` is significantly faster while remaining robust.
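A CRF layer on top of BERT scores entire tag sequences rather than independent token labels, and prediction reduces to finding the highest-scoring path with Viterbi decoding. A minimal NumPy sketch of that decode step (illustrative only, not the `CRF` or `ChainCRF` implementation):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Most-likely tag sequence for a linear-chain CRF (raw scores, not probabilities)."""
    n_steps, n_tags = emissions.shape
    score = emissions[0].copy()  # best score of a path ending in each tag at step 0
    backpointers = []
    for t in range(1, n_steps):
        # total[i, j]: best path through tag i at t-1, then transition i->j, plus emission for j
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))
        score = total.max(axis=0)
    best_tag = int(score.argmax())
    path = [best_tag]
    for bp in reversed(backpointers):
        best_tag = int(bp[best_tag])
        path.append(best_tag)
    return path[::-1]
```

Here `emissions[t, j]` is the model's score for tag `j` at token `t`, and `transitions[i, j]` is the learned score for moving from tag `i` to tag `j`; the returned list is the jointly best tag sequence.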
