Dataset/update climate fever #1873

Open
wants to merge 3 commits into main

Conversation

@mina-parham commented Jan 25, 2025

This PR updates the ClimateFEVER dataset and addresses the following issue:

Closes #1498 (comment)

I tried to reuse the metadata from the original ClimateFEVER class, but while running the tests I got an error about some metadata fields that need to be filled in. We should review these fields to make sure they are correct.

I ran the tests locally and nothing related to the code I added is broken. However, 7 test failures already existed before my changes, in other parts of the codebase; those failures appear unrelated to this PR.

The following are the results for sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2:

{'test': [{'ndcg_at_1': 0.21368,
   'ndcg_at_3': 0.18424,
   'ndcg_at_5': 0.18343,
   'ndcg_at_10': 0.2273,
   'ndcg_at_20': 0.26617,
   'ndcg_at_100': 0.33216,
   'ndcg_at_1000': 0.38438,
   'map_at_1': 0.05607,
   'map_at_3': 0.10247,
   'map_at_5': 0.12562,
   'map_at_10': 0.1504,
   'map_at_20': 0.16521,
   'map_at_100': 0.1794,
   'map_at_1000': 0.18304,
   'recall_at_1': 0.05607,
   'recall_at_3': 0.13024,
   'recall_at_5': 0.18484,
   'recall_at_10': 0.27878,
   'recall_at_20': 0.38471,
   'recall_at_100': 0.62575,
   'recall_at_1000': 0.90718,
   'precision_at_1': 0.22997,
   'precision_at_3': 0.17416,
   'precision_at_5': 0.1458,
   'precision_at_10': 0.10697,
   'precision_at_20': 0.07244,
   'precision_at_100': 0.02403,
   'precision_at_1000': 0.00353,
   'mrr_at_1': 0.22996742671009773,
   'mrr_at_3': 0.29391965255157404,
   'mrr_at_5': 0.3111834961997822,
   'mrr_at_10': 0.32628612791479217,
   'mrr_at_20': 0.3352755299726383,
   'mrr_at_100': 0.33967423313252554,
   'mrr_at_1000': 0.3401334410242214,
   'nauc_ndcg_at_1_max': np.float64(-0.0019234851874547624),
   'nauc_ndcg_at_1_std': np.float64(0.015845014332759703),
   'nauc_ndcg_at_1_diff1': np.float64(0.1602888604282906),
   'nauc_ndcg_at_3_max': np.float64(0.006504475396564762),
   'nauc_ndcg_at_3_std': np.float64(0.019252832669093754),
   'nauc_ndcg_at_3_diff1': np.float64(0.15930909924117406),
   'nauc_ndcg_at_5_max': np.float64(0.02789195207464064),
   'nauc_ndcg_at_5_std': np.float64(0.04525851173767934),
   'nauc_ndcg_at_5_diff1': np.float64(0.14963391837900944),
   'nauc_ndcg_at_10_max': np.float64(0.04165608690052415),
   'nauc_ndcg_at_10_std': np.float64(0.06497707335457095),
   'nauc_ndcg_at_10_diff1': np.float64(0.14367742699605052),
   'nauc_ndcg_at_20_max': np.float64(0.0499609558950235),
   'nauc_ndcg_at_20_std': np.float64(0.0930850566174989),
   'nauc_ndcg_at_20_diff1': np.float64(0.14062762711781499),
   'nauc_ndcg_at_100_max': np.float64(0.03679505342422458),
   'nauc_ndcg_at_100_std': np.float64(0.10921307979333825),
   'nauc_ndcg_at_100_diff1': np.float64(0.13739073361698195),
   'nauc_ndcg_at_1000_max': np.float64(-0.0029603215313893996),
   'nauc_ndcg_at_1000_std': np.float64(0.06220708428717415),
   'nauc_ndcg_at_1000_diff1': np.float64(0.12703641622493242),
   'nauc_map_at_1_max': np.float64(0.06585674375394879),
   'nauc_map_at_1_std': np.float64(0.03387621145079985),
   'nauc_map_at_1_diff1': np.float64(0.13870026707847363),
   'nauc_map_at_3_max': np.float64(0.05209072027992851),
   'nauc_map_at_3_std': np.float64(0.02500661099875313),
   'nauc_map_at_3_diff1': np.float64(0.15204324685496393),
   'nauc_map_at_5_max': np.float64(0.041590262268235),
   'nauc_map_at_5_std': np.float64(0.04323790717333005),
   'nauc_map_at_5_diff1': np.float64(0.14869449966766685),
   'nauc_map_at_10_max': np.float64(0.04966941967456139),
   'nauc_map_at_10_std': np.float64(0.06206853068958281),
   'nauc_map_at_10_diff1': np.float64(0.14548773934093587),
   'nauc_map_at_20_max': np.float64(0.05081575210496686),
   'nauc_map_at_20_std': np.float64(0.07478139737770217),
   'nauc_map_at_20_diff1': np.float64(0.14463361708594288),
   'nauc_map_at_100_max': np.float64(0.0498499256284044),
   'nauc_map_at_100_std': np.float64(0.08403376312172636),
   'nauc_map_at_100_diff1': np.float64(0.1436296737376477),
   'nauc_map_at_1000_max': np.float64(0.04728458570811444),
   'nauc_map_at_1000_std': np.float64(0.08139368738857587),
   'nauc_map_at_1000_diff1': np.float64(0.14265262401070744),
   'nauc_recall_at_1_max': np.float64(0.06585674375394879),
   'nauc_recall_at_1_std': np.float64(0.03387621145079985),
   'nauc_recall_at_1_diff1': np.float64(0.13870026707847363),
   'nauc_recall_at_3_max': np.float64(0.059476696920827195),
   'nauc_recall_at_3_std': np.float64(0.034443636834688846),
   'nauc_recall_at_3_diff1': np.float64(0.1558645713498114),
   'nauc_recall_at_5_max': np.float64(0.05746914402960373),
   'nauc_recall_at_5_std': np.float64(0.06894288259734185),
   'nauc_recall_at_5_diff1': np.float64(0.1416913575737667),
   'nauc_recall_at_10_max': np.float64(0.08738775270123154),
   'nauc_recall_at_10_std': np.float64(0.10191968500526576),
   'nauc_recall_at_10_diff1': np.float64(0.11678198599293008),
   'nauc_recall_at_20_max': np.float64(0.11122722651805075),
   'nauc_recall_at_20_std': np.float64(0.1668044754001878),
   'nauc_recall_at_20_diff1': np.float64(0.1048694737943809),
   'nauc_recall_at_100_max': np.float64(0.10468943557246045),
   'nauc_recall_at_100_std': np.float64(0.23738724902466038),
   'nauc_recall_at_100_diff1': np.float64(0.0991202980837292),
   'nauc_recall_at_1000_max': np.float64(-0.1459649841610364),
   'nauc_recall_at_1000_std': np.float64(0.022700530929968727),
   'nauc_recall_at_1000_diff1': np.float64(-0.030751585189625525),
   'nauc_precision_at_1_max': np.float64(0.008642308953872736),
   'nauc_precision_at_1_std': np.float64(0.022085659624854568),
   'nauc_precision_at_1_diff1': np.float64(0.15871292733793976),
   'nauc_precision_at_3_max': np.float64(9.266309504084029e-05),
   'nauc_precision_at_3_std': np.float64(0.030200788846920713),
   'nauc_precision_at_3_diff1': np.float64(0.15154340257956853),
   'nauc_precision_at_5_max': np.float64(-0.015248230715225491),
   'nauc_precision_at_5_std': np.float64(0.04899655942318418),
   'nauc_precision_at_5_diff1': np.float64(0.13680027114060297),
   'nauc_precision_at_10_max': np.float64(-0.010456492896417638),
   'nauc_precision_at_10_std': np.float64(0.06916217658440739),
   'nauc_precision_at_10_diff1': np.float64(0.12333331677082864),
   'nauc_precision_at_20_max': np.float64(-0.005590771090966164),
   'nauc_precision_at_20_std': np.float64(0.10436389158796387),
   'nauc_precision_at_20_diff1': np.float64(0.11178770693212199),
   'nauc_precision_at_100_max': np.float64(-0.08343459183959545),
   'nauc_precision_at_100_std': np.float64(0.09685864745479719),
   'nauc_precision_at_100_diff1': np.float64(0.08680619210270112),
   'nauc_precision_at_1000_max': np.float64(-0.33324270531865396),
   'nauc_precision_at_1000_std': np.float64(-0.1538931809392369),
   'nauc_precision_at_1000_diff1': np.float64(-0.015206965982458742),
   'nauc_mrr_at_1_max': np.float64(0.008642308953872736),
   'nauc_mrr_at_1_std': np.float64(0.022085659624854568),
   'nauc_mrr_at_1_diff1': np.float64(0.15871292733793976),
   'nauc_mrr_at_3_max': np.float64(0.011133929851962262),
   'nauc_mrr_at_3_std': np.float64(0.03964160402051868),
   'nauc_mrr_at_3_diff1': np.float64(0.15374210241883887),
   'nauc_mrr_at_5_max': np.float64(0.010706625350049776),
   'nauc_mrr_at_5_std': np.float64(0.042068832957823064),
   'nauc_mrr_at_5_diff1': np.float64(0.14946749699691358),
   'nauc_mrr_at_10_max': np.float64(0.010918895262789627),
   'nauc_mrr_at_10_std': np.float64(0.0407928429615466),
   'nauc_mrr_at_10_diff1': np.float64(0.1449392386351715),
   'nauc_mrr_at_20_max': np.float64(0.011238584254940794),
   'nauc_mrr_at_20_std': np.float64(0.04312959809739289),
   'nauc_mrr_at_20_diff1': np.float64(0.14513578582729145),
   'nauc_mrr_at_100_max': np.float64(0.009403376648689244),
   'nauc_mrr_at_100_std': np.float64(0.041305620748898826),
   'nauc_mrr_at_100_diff1': np.float64(0.145356417117423),
   'nauc_mrr_at_1000_max': np.float64(0.009220267666528224),
   'nauc_mrr_at_1000_std': np.float64(0.041034758410318506),
   'nauc_mrr_at_1000_diff1': np.float64(0.1452834935310141),
   'main_score': 0.2273,
   'hf_subset': 'default',
   'languages': ['eng-Latn']}]}

model: intfloat/multilingual-e5-small

{'test': [{'ndcg_at_1': 0.20521,
   'ndcg_at_3': 0.18213,
   'ndcg_at_5': 0.18539,
   'ndcg_at_10': 0.22614,
   'ndcg_at_20': 0.26334,
   'ndcg_at_100': 0.32934,
   'ndcg_at_1000': 0.38059,
   'map_at_1': 0.0573,
   'map_at_3': 0.10438,
   'map_at_5': 0.12824,
   'map_at_10': 0.15073,
   'map_at_20': 0.16549,
   'map_at_100': 0.17949,
   'map_at_1000': 0.18305,
   'recall_at_1': 0.0573,
   'recall_at_3': 0.13216,
   'recall_at_5': 0.19063,
   'recall_at_10': 0.27633,
   'recall_at_20': 0.37623,
   'recall_at_100': 0.62208,
   'recall_at_1000': 0.89304,
   'precision_at_1': 0.22215,
   'precision_at_3': 0.17112,
   'precision_at_5': 0.14736,
   'precision_at_10': 0.10638,
   'precision_at_20': 0.07179,
   'precision_at_100': 0.02357,
   'precision_at_1000': 0.00347,
   'mrr_at_1': 0.2221498371335505,
   'mrr_at_3': 0.2854505971769811,
   'mrr_at_5': 0.30717698154180156,
   'mrr_at_10': 0.3221725867328468,
   'mrr_at_20': 0.3301374406338547,
   'mrr_at_100': 0.33493775099686235,
   'mrr_at_1000': 0.3353161636491783,
   'nauc_ndcg_at_1_max': np.float64(0.036197356402233455),
   'nauc_ndcg_at_1_std': np.float64(0.04306869469336829),
   'nauc_ndcg_at_1_diff1': np.float64(0.06354828563153381),
   'nauc_ndcg_at_3_max': np.float64(0.05744633906772305),
   'nauc_ndcg_at_3_std': np.float64(0.05356132400288305),
   'nauc_ndcg_at_3_diff1': np.float64(0.06652974968759166),
   'nauc_ndcg_at_5_max': np.float64(0.09032225896135936),
   'nauc_ndcg_at_5_std': np.float64(0.08197675899288988),
   'nauc_ndcg_at_5_diff1': np.float64(0.07322651282330493),
   'nauc_ndcg_at_10_max': np.float64(0.10250939800228867),
   'nauc_ndcg_at_10_std': np.float64(0.10704476459144112),
   'nauc_ndcg_at_10_diff1': np.float64(0.08699029288022454),
   'nauc_ndcg_at_20_max': np.float64(0.11424453190796985),
   'nauc_ndcg_at_20_std': np.float64(0.1303291218541926),
   'nauc_ndcg_at_20_diff1': np.float64(0.08618226712016512),
   'nauc_ndcg_at_100_max': np.float64(0.11747650492629069),
   'nauc_ndcg_at_100_std': np.float64(0.15105782405090826),
   'nauc_ndcg_at_100_diff1': np.float64(0.08089023982075633),
   'nauc_ndcg_at_1000_max': np.float64(0.06437973732216404),
   'nauc_ndcg_at_1000_std': np.float64(0.09880612768668198),
   'nauc_ndcg_at_1000_diff1': np.float64(0.06802567754850719),
   'nauc_map_at_1_max': np.float64(0.06027785225283198),
   'nauc_map_at_1_std': np.float64(0.05495794539106963),
   'nauc_map_at_1_diff1': np.float64(0.06050059257000751),
   'nauc_map_at_3_max': np.float64(0.09022855263038915),
   'nauc_map_at_3_std': np.float64(0.07431817390646686),
   'nauc_map_at_3_diff1': np.float64(0.07472514116361355),
   'nauc_map_at_5_max': np.float64(0.10304203687640019),
   'nauc_map_at_5_std': np.float64(0.08508387421619124),
   'nauc_map_at_5_diff1': np.float64(0.08219406594334867),
   'nauc_map_at_10_max': np.float64(0.11361626422942277),
   'nauc_map_at_10_std': np.float64(0.108433143501649),
   'nauc_map_at_10_diff1': np.float64(0.0938199344474812),
   'nauc_map_at_20_max': np.float64(0.11694883735377425),
   'nauc_map_at_20_std': np.float64(0.12357791141880775),
   'nauc_map_at_20_diff1': np.float64(0.09140405808287454),
   'nauc_map_at_100_max': np.float64(0.11887266872456236),
   'nauc_map_at_100_std': np.float64(0.1331045335054451),
   'nauc_map_at_100_diff1': np.float64(0.08956314186533343),
   'nauc_map_at_1000_max': np.float64(0.11512001385020777),
   'nauc_map_at_1000_std': np.float64(0.12993879110043752),
   'nauc_map_at_1000_diff1': np.float64(0.08862737171868612),
   'nauc_recall_at_1_max': np.float64(0.06027785225283198),
   'nauc_recall_at_1_std': np.float64(0.05495794539106963),
   'nauc_recall_at_1_diff1': np.float64(0.06050059257000751),
   'nauc_recall_at_3_max': np.float64(0.1080100751086333),
   'nauc_recall_at_3_std': np.float64(0.08488275402041916),
   'nauc_recall_at_3_diff1': np.float64(0.07132204337540524),
   'nauc_recall_at_5_max': np.float64(0.13798833890249398),
   'nauc_recall_at_5_std': np.float64(0.11099970870969147),
   'nauc_recall_at_5_diff1': np.float64(0.0815563473939471),
   'nauc_recall_at_10_max': np.float64(0.15707044398340275),
   'nauc_recall_at_10_std': np.float64(0.14984486583696252),
   'nauc_recall_at_10_diff1': np.float64(0.1054729250200018),
   'nauc_recall_at_20_max': np.float64(0.186436730437142),
   'nauc_recall_at_20_std': np.float64(0.19991341052425027),
   'nauc_recall_at_20_diff1': np.float64(0.10275699317641636),
   'nauc_recall_at_100_max': np.float64(0.23123173243694364),
   'nauc_recall_at_100_std': np.float64(0.28284580915512664),
   'nauc_recall_at_100_diff1': np.float64(0.08923044738404862),
   'nauc_recall_at_1000_max': np.float64(-0.03548465676751121),
   'nauc_recall_at_1000_std': np.float64(0.06090303693913847),
   'nauc_recall_at_1000_diff1': np.float64(0.01258392812478064),
   'nauc_precision_at_1_max': np.float64(0.034595588449677805),
   'nauc_precision_at_1_std': np.float64(0.04660977944123649),
   'nauc_precision_at_1_diff1': np.float64(0.06358950960459754),
   'nauc_precision_at_3_max': np.float64(0.05312879891269882),
   'nauc_precision_at_3_std': np.float64(0.05147445243648432),
   'nauc_precision_at_3_diff1': np.float64(0.07246658202046051),
   'nauc_precision_at_5_max': np.float64(0.056131850923910646),
   'nauc_precision_at_5_std': np.float64(0.0720250181043127),
   'nauc_precision_at_5_diff1': np.float64(0.06614849098085035),
   'nauc_precision_at_10_max': np.float64(0.056207028892006336),
   'nauc_precision_at_10_std': np.float64(0.11227590211304779),
   'nauc_precision_at_10_diff1': np.float64(0.07297911681668362),
   'nauc_precision_at_20_max': np.float64(0.04877820864839892),
   'nauc_precision_at_20_std': np.float64(0.13326655971187326),
   'nauc_precision_at_20_diff1': np.float64(0.06034685456352835),
   'nauc_precision_at_100_max': np.float64(0.004128751821027512),
   'nauc_precision_at_100_std': np.float64(0.13046795252943222),
   'nauc_precision_at_100_diff1': np.float64(0.027258512610983304),
   'nauc_precision_at_1000_max': np.float64(-0.2950027510827786),
   'nauc_precision_at_1000_std': np.float64(-0.13827913584450302),
   'nauc_precision_at_1000_diff1': np.float64(-0.06353493578423805),
   'nauc_mrr_at_1_max': np.float64(0.034595588449677805),
   'nauc_mrr_at_1_std': np.float64(0.04660977944123649),
   'nauc_mrr_at_1_diff1': np.float64(0.06358950960459754),
   'nauc_mrr_at_3_max': np.float64(0.041772779737535994),
   'nauc_mrr_at_3_std': np.float64(0.05352988767326582),
   'nauc_mrr_at_3_diff1': np.float64(0.06272347806616227),
   'nauc_mrr_at_5_max': np.float64(0.043263442270575665),
   'nauc_mrr_at_5_std': np.float64(0.061024579036635204),
   'nauc_mrr_at_5_diff1': np.float64(0.059956072655078886),
   'nauc_mrr_at_10_max': np.float64(0.03946858686167938),
   'nauc_mrr_at_10_std': np.float64(0.060853714901655157),
   'nauc_mrr_at_10_diff1': np.float64(0.05970475432202936),
   'nauc_mrr_at_20_max': np.float64(0.041461076284007754),
   'nauc_mrr_at_20_std': np.float64(0.060231765065109424),
   'nauc_mrr_at_20_diff1': np.float64(0.06037500455982566),
   'nauc_mrr_at_100_max': np.float64(0.040206382196548526),
   'nauc_mrr_at_100_std': np.float64(0.05942815516881157),
   'nauc_mrr_at_100_diff1': np.float64(0.06036567216562132),
   'nauc_mrr_at_1000_max': np.float64(0.039907615693334675),
   'nauc_mrr_at_1000_std': np.float64(0.05911620014933184),
   'nauc_mrr_at_1000_diff1': np.float64(0.06027514387810212),
   'main_score': 0.22614,
   'hf_subset': 'default',
   'languages': ['eng-Latn']}]}
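
For quick comparison between the two runs above, the headline number in each dictionary is main_score, which here mirrors ndcg_at_10. A small sketch of pulling it out of such a result dict (using an abridged copy of the second run; only the keys the snippet touches are kept):

```python
# Abridged copy of the intfloat/multilingual-e5-small result dict pasted above;
# only the keys used below are kept.
results = {
    "test": [
        {"ndcg_at_10": 0.22614, "main_score": 0.22614, "hf_subset": "default"}
    ]
}

split_scores = results["test"][0]  # one entry per hf_subset
print(f"main_score (ndcg@10): {split_scores['main_score']:.5f}")
```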

Checklist

  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.

Adding datasets checklist

Reason for dataset addition: ...

The reason for updating this dataset is explained here:

#1498 (comment)

  • I have run the following models on the task (adding the results to the PR). These can be run using the mteb -m {model_name} -t {task_name} command; a Python equivalent is sketched just after this checklist.
    • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
    • intfloat/multilingual-e5-small
  • I have checked that the performance is neither trivial (both models achieve close to perfect scores) nor random (both models achieve close to random scores).
  • If the dataset is too big (e.g. >2048 examples), consider using self.stratified_subsampling() under dataset_transform().
  • I have filled out the metadata object in the dataset file (find documentation on it here).
  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.
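
As referenced in the checklist above, a minimal Python sketch of the equivalent evaluation for the two models. It assumes the new task registers as ClimateFEVERv2, the name used in this PR's diff; the review below suggests ClimateFEVERRetrievalv2 instead, so substitute whichever name is merged:

```python
import mteb
from sentence_transformers import SentenceTransformer

# Task name is an assumption: the diff defines ClimateFEVERv2, while the review
# suggests renaming it to ClimateFEVERRetrievalv2.
tasks = mteb.get_tasks(tasks=["ClimateFEVERv2"])

for model_name in [
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    "intfloat/multilingual-e5-small",
]:
    model = SentenceTransformer(model_name)
    evaluation = mteb.MTEB(tasks=tasks)
    results = evaluation.run(model, output_folder=f"results/{model_name}")
```

Swapping the task name to "ClimateFEVER" would give the old-task numbers asked for in the comment below.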

@Muennighoff (Contributor) commented:

Looks amazing!! Can you maybe share the results on the old ClimateFEVER for one of those models? Overall, do you think this is a net improvement over ClimateFEVER? If so, it may be worth incorporating into some benchmarks (or future benchmarks). cc @KennethEnevoldsen

@KennethEnevoldsen (Contributor) left a comment:

Wonderful addition! A few minor changes.

We sadly can't update the actual benchmark (this will break backward compatibility and require us to rerun all models on the leaderboard).

However, future versions of the benchmark will likely use this updated version.

@@ -72,3 +72,39 @@ class ClimateFEVERHardNegatives(AbsTaskRetrieval):
primaryClass={cs.CL}
}""",
)


class ClimateFEVERv2(AbsTaskRetrieval):
@KennethEnevoldsen (Contributor) commented Feb 2, 2025:

You will need to add superseded_by to ClimateFEVER.

If we want to name tasks consistently, we should probably call this:

Suggested change
class ClimateFEVERv2(AbsTaskRetrieval):
class ClimateFEVERRetrievalv2(AbsTaskRetrieval):

The same applies to the name field.
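
Once the pointer is in place, a quick sanity check could look like the sketch below (assumptions: mteb exposes get_task(), and the metadata field is spelled superseded_by in the installed version; verify both against the codebase):

```python
import mteb

# Assumption: the deprecation pointer lives on TaskMetadata under `superseded_by`;
# double-check the exact field name in the installed mteb version.
old_task = mteb.get_task("ClimateFEVER")
print(old_task.metadata.superseded_by)  # expected: the new task's registered name
```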

description="CLIMATE-FEVER is a dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change. ",
reference="https://www.sustainablefinance.uzh.ch/en/research/climate-fever.html",
dataset={
"path": "Mina76/climate-fever",
Contributor comment:

Would love to move this over to the mteb org to ensure that it doesn't get taken down (I have sent you an invite to the org).

Not to say that you would do it, but it has happened before (often it's just people cleaning up their datasets).

domains=["Academic"],
task_subtypes=["Question answering"],
license="cc-by-sa-4.0",
annotations_creators="human-annotated",
Contributor comment:

If the metadata is not filled out in the old task, could you add it there as well?

main_score="ndcg_at_10",
date=("2020-12-11", "2020-12-11"),
domains=["Academic"],
task_subtypes=["Question answering"],
Contributor comment:

Isn't it Claim Verification?

eval_langs=["eng-Latn"],
main_score="ndcg_at_10",
date=("2020-12-11", "2020-12-11"),
domains=["Academic"],
Contributor comment:

Suggested change
domains=["Academic"],
domains=["Academic", "Written"],

What is the source data of ClimateFEVER? Research articles? (It would be great to update the description to make this clearer.)

eval_splits=["test"],
eval_langs=["eng-Latn"],
main_score="ndcg_at_10",
date=("2020-12-11", "2020-12-11"),
Contributor comment:

The date should refer to when the source data was written, e.g. articles from the period 2014-2018.
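
As a purely hypothetical illustration using the example range above (not the dataset's verified coverage), the metadata field would then hold a span rather than the release date:

```python
# Hypothetical values taken from the example range above; the actual period covered
# by CLIMATE-FEVER's claims and Wikipedia evidence still needs to be verified.
date = ("2014-01-01", "2018-12-31")
```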

@KennethEnevoldsen (Contributor) commented:

@Samoed shouldn't the test fail due to missing descriptive stats?

@Samoed (Collaborator) commented Feb 2, 2025:

I've added these tests only for the v2 branch, since we have an updated format there. After the merge, I'll add the descriptive stats.


Successfully merging this pull request may close these issues.

Investigate/Fix ClimateFever