
Fix pair classification inconsistency #945

Merged · 6 commits · Jun 18, 2024

Conversation

@dokato (Collaborator) commented on Jun 17, 2024

Addresses the issue reported in #582. Brings consistency with the standard followed in the STS and BiText Mining tasks: 'sentence1', 'sentence2', and 'label'.
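For illustration, a minimal sketch of a single pair under the standardized column names (the values here are invented, not taken from any dataset):

```python
# Hypothetical sample showing the standardized column names; values invented.
sample = {
    "sentence1": "A man is playing a guitar.",
    "sentence2": "Someone is playing an instrument.",
    "label": 1,  # 1 = positive pair, 0 = negative pair
}
```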

Checklist

  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.

Adding datasets checklist

Reason for dataset addition:

Added unfinished work from #581. It's a good dataset, as it expands coverage of Indic languages and adds two new languages to this task (mal, tel). I think its addition was blocked by this issue.

  • I have run the following models on the task (adding the results to the PR). These can be run using the mteb -m {model_name} -t {task_name} command.
    • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
    • intfloat/multilingual-e5-small
  • I have checked that the performance is neither trivial (both models gain close to perfect scores) nor random (both models gain close to random scores).
  • If the dataset is too big (e.g. >2048 examples), consider using self.stratified_subsampling() under dataset_transform() (see the sketch after this list).
  • I have filled out the metadata object in the dataset file (find documentation on it here).
  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.
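A minimal sketch of the subsampling step referenced in the checklist above, assuming stratified_subsampling takes the dataset dict, a seed, and the splits to subsample; verify the exact signature against the linked documentation:

```python
def dataset_transform(self):
    # Hedged sketch: cap each evaluation split at a manageable size while
    # preserving the label distribution. The argument names are assumed
    # from the checklist item above, not confirmed against the mteb source.
    self.dataset = self.stratified_subsampling(
        self.dataset, seed=self.seed, splits=["test"]
    )
```

And a concrete form of the evaluation command from the checklist, using one of the models above and the task name referenced later in this thread:

```
mteb -m intfloat/multilingual-e5-small -t IndicXnliPairClassification
```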


@KennethEnevoldsen (Contributor) left a comment


Thanks for this @dokato! In the future, will you leave the start of the checklist there as well? (If you haven't run tests/lint, just leave it unchecked.)

From my reading of the issue, it pertains to the structure of the tasks as a "one-line" dataset, which contains all the samples, not the naming of the columns.

So, while it is a good fix (for other reasons), it doesn't address the stated issue.

@dokato (Collaborator, Author) commented on Jun 17, 2024

Thanks @KennethEnevoldsen for taking a look.

From my reading of the issue, the issue pertains to the structure of the tasks as a "one-line" dataset, which contains all the samples, not the naming of the column.

I think the remark about the "one-liner" applied only to one specific dataset; the others were formatted accordingly, e.g. by creating lists such as _dataset[lang]["test"] = [...], and the evaluation function was adapted. We could change it and modify PairClassificationEvaluator, but as long as we remain consistent I don't think it's worth it. Here I focus on the second thing mentioned there, which is the inconsistency with STS and BiText Mining. And then we can close it!
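For reference, a rough sketch of the list structure described above, with column names following this PR's convention; hf_split is a hypothetical name for a loaded Hugging Face split, and the actual transform in the repo may differ:

```python
# Hypothetical loaded split (e.g. from datasets.load_dataset); names invented.
hf_split = {"sentence1": [...], "sentence2": [...], "label": [...]}

# Each split becomes a single-element list whose one dict carries all samples
# as parallel lists (the "one-line" layout discussed in the issue).
_dataset[lang]["test"] = [
    {
        "sentence1": hf_split["sentence1"],  # list[str]
        "sentence2": hf_split["sentence2"],  # list[str]
        "label": hf_split["label"],          # list[int]
    }
]
```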

In the future, will you leave the start of the checklist there as well? (If you haven't run tests/lint, just leave it unchecked.)

Sorry for that... I forgot to mark it as WIP. I considered IndicXnliPairClassification from #581 a good dataset whose addition might have been blocked by this issue, so I added it to this PR to make it less trivial.

@KennethEnevoldsen merged commit 6660f43 into embeddings-benchmark:main on Jun 18, 2024
7 checks passed