From 9dd1672f8a264dd2057ae51fafc214ff21f72100 Mon Sep 17 00:00:00 2001
From: Li Bo
Date: Sat, 27 Jul 2024 13:43:24 +1000
Subject: [PATCH] Dev/ov evals (#147)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* fix doc

* [WIP] adding mmbench dev evaluation (#75)

* WIP

* Update GPT evaluation model name and sys prompt

* 🛠️ Scale accuracy to percentage

The accuracy value is now multiplied by 100 in the aggregation function so that it is reported as a percentage. On the evaluation side, the `math` module is imported and the progress logging refactored to log every 100 evaluations instead of every 10, preventing excessive log output. Handling of NaN values is added to ensure 'default_value' is used in case of missing data, avoiding errors in split, category, and l2-category assignments. Finally, reporting of categorical and l2-categorical accuracies is streamlined through a new `calculate_hit_rates` function, improving code readability and maintainability.

Issue refs: #1427, #1533

* Update GPT evaluation model name and API configuration

* Refactor MMBench_Evaluator class to handle missing columns

* Add print statements for detailed results in MMBench-CN(CC), MMBench-CN(Dev), and MMBench-EN(Dev) evaluations

* Refactor MMBench-CN and MMBench-EN evaluation functions

* 🔄 Refactor result processing and logging logic

- Simplified the result processing functions across the utility modules (`cc_utils.py`, `cn_utils.py`, `en_utils.py`) to unify the handling of multiple-choice options. All options ("A" to "E") are now added dynamically to the result data and default to "nan" if not provided in the document.
- Removed redundant keys from the process-results dict creation to avoid clutter and align with the new dynamic addition of options.
- In `mmbench_evals.py`, removed the unnecessary check that all splits are 'dev' and streamlined the evaluation loop by dropping the progress bar (tqdm) for cleaner log output.
- Removed commented-out code and verbose logging during evaluation, which may have interfered with performance, for a more efficient and less intrusive logging experience.

This cleanup reduces redundancy in the codebase and improves evaluation performance.
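A minimal sketch of the result-processing and aggregation behaviour described above, assuming illustrative names (`process_doc_result`, `calculate_hit_rates` signature, and the `answer` field) rather than the exact lmms_eval code:

```python
import math
from collections import defaultdict

OPTIONS = ["A", "B", "C", "D", "E"]

def process_doc_result(doc, prediction):
    # All options are added dynamically and default to "nan" when absent from the doc.
    result = {opt: doc.get(opt, "nan") for opt in OPTIONS}
    # Guard against NaN metadata so split/category/l2-category assignments never fail.
    default_value = "unknown"
    for key in ("split", "category", "l2-category"):
        value = doc.get(key, default_value)
        if isinstance(value, float) and math.isnan(value):
            value = default_value
        result[key] = value
    result["answer"] = doc.get("answer", "nan")
    result["prediction"] = prediction
    return result

def calculate_hit_rates(results):
    # Per-category accuracy, scaled by 100 so it is reported as a percentage.
    hits, totals = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["category"]] += 1
        hits[r["category"]] += int(r["prediction"] == r["answer"])
    return {cat: 100.0 * hits[cat] / totals[cat] for cat in totals}
```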
Refs #2045

---------

Co-authored-by: Bo Li
(cherry picked from commit a19278c2ea6ddcbca64d3cc7f4efec7fe5775121)

* Create README.md
* Add files via upload
* Add MathVerse
* Fix typo in qwen_vl that was causing "reference before assignment"
* convert contexts to list if necessary and remove unnecessary construction of `questions`
* refactor query construction for clarity
* Create ScreenSpot on clean branch
* Update README to reflect new tasks
* Add README file specific to ScreenSpot
* slight update
* Init webSRC
* Draft README for WebSRC
* Update main README with new task names
* Draft and validate websrc eval on dev split
* Add code to enable compilation of submission for WebSRC test split
* Bugfix: WebSRC should be token-level F1 NOT character-level
* Add qwen vl api
* Fix llava conv template for llama3
* Fix llava_hf generation for 1.6
* Parse result for llava_hf 1.6
* Add model_name parameter to Llava constructor
* Fix endless warning for llava_hf generation
* Fix llava_hf image tokens number issue
* Create LICENSE
* Update LICENSE
* Update LICENSE
* Better task list_with_num
* Fix idefics2 llava in the wild bugs
* Remove redundant code in fuyu
* Fix instructblip qformer size mismatch and multi-images problem
* Comment out parse result in xcomposer
* Comment out Spice in caption task so that the Stanford NLP model does not need to be downloaded
* Update gitignore
* Add separated pope tasks by category
* Fix pope random name in pope full
* Set printing info for llava_hf to debug level
* Adding Phi3v model.
* Adding prompt arguments for Phi3v on MathVista-TestMini
* Adding documentation of Phi3v class.
* [Fix] import issues of multilingual llava and olympiadbench
* fix compatibility issue of older version llava
* add upd
* add upd
* add upd
* add upd
* add upd
* add upd
* Group MMMU images into one image (#83)

  * update
  * update font
  * Add matplotlib.font_manager import in utils.py
  * Refactor font handling in add_order_label function in utils.py
  * group mmmu

  ---------

  Co-authored-by: Li Bo

* merge model_specific_prompt_kwargs and dataset_name into each task yaml
* Add MathVerse in README.md
* slightly change query_prompt for the reproduction
* update utils.py for leaderboard submission
* add conbench
* update README
* Update README.md
* init include vcr
* modify the form of VCR
* switch logic
* add crossed_text to vcr_wiki output
* include the try-except logic for spacy
* update vcr_wiki tasks
* update vcr_wiki tasks in README.md
* include std and confidence interval
* update gpt-3.5-turbo version
* update gpt-3.5-turbo version
* chore: Remove unnecessary files and code related to live_bench and sft_eval tasks
* Bump version to 0.2.0.dev0
* chore: Update lmms-eval to support video evaluations for LLaVA models
* Update llava conv_template in lmms_eval/models/llava.py
* Update image alignment in README.md
* chore: Update lmms-eval to support video evaluations for LLaVA models
* chore: Update lmms-eval to support video evaluations for LLaVA models
* Update README.md
* Update README.md
* update aggregation function for vcr_wiki
* update README.md
* Update README.md
* update version
* add II-Bench
* fix dataset_path
* Add qbench, qbench2, abench; fix phi3v as its current implementation does not support multi-image
* add tinyllava
* LongVideoBench support: image LMMs (idefics2, phi3) and video LMMs (LLaVA-Next-Video-34B)
* fix #117, allow auto download with tar format videos
* fix #117, allow auto download with tar format videos
* fix typo
* feat: Add support for auto downloading tar format videos
* Release
llava-wilder * chore: Update dependencies to fix potential risks and improve compatibility * tutorial * docs * update preparation * small fix * small fix * lint * to sh script * update readme * Remove handling non-visual loop in llava * Add llava_hf back to registry * Update README.md * Update README.md * update ablation for videomme datasets * chore: Handle ImportError when importing models Handle the ImportError exception when importing models in the lmms_eval package. This change adds a try-except block to catch the ImportError and print an error message indicating the failed import. This will help with troubleshooting and identifying any issues with the model imports. * chore: Remove unused models from lmms_eval package * feat: Allow loading model configurations from other packages * feat: Allow including external tasks from plugins * chore: Add loguru for logging in lmms_eval package * Remove unnecessary lines since use batched visuals now in llava * Add longva * Revise model registry for llava_hf and longva * Delete unnecessary lines * Remove unnecessary lines for video llava * Update pyproject.toml * Update activitynetqa_generation.yaml * Fix vid mme post prompt issue * new task gqa-ru * add mmbench_ru_dev * change prompt to ru * create new task vitatecs * Update README.md * Add wild vision 0617 * Hardcode to keep image for wild vision * Fixing scoring logic * Fixing dataset name * Fixing handling None filtered score * Add detailcaps * Add install capture_metric in env * Add files via upload * feat: Add tie_weights parameter to Llava model initialization * Upgrade lmms-eval to support more models and evaluation tasks * Upgrade lmms-eval to version 0.2.1 * Rename xcomposer 4KHD * chore: Update lmms_eval/models/vila.py and lmms_eval/tasks/__init__.py * Update utils.py * Update _default_template_vcr_yaml * add process sync via temp file in lmms_eval/evaluator.py * Update utils.py * Update _default_template_vcr_yaml * Add muirbench * Squashed commit of the following: commit dfdba507b5fbe985b0030ffec575f9f2638bc1ed Author: Li Bo Date: Tue Jul 16 11:13:52 2024 +0800 merge ov evals (#144) * chore: Update gpt_eval_model_name to "gpt-3.5-turbo" in mathvista.yaml * Squashed commit of the following: commit 994c9f97a2f8db3e9b7d7933d1e1680acde5b70b Author: Yan Shu <570533048@qq.com> Date: Mon Jul 8 17:21:23 2024 +0800 Add files via upload * Squashed commit of the following: commit e31cd7883d4555c7530795c7f102b8d78cbd372f Author: Bo Li Date: Wed Jul 10 12:08:08 2024 +1000 chore: Update lmms_eval/models/vila.py and lmms_eval/tasks/__init__.py commit 1d8c980d1089f9d7702c3b92d5c85039f2809c6d Author: kcz358 Date: Tue Jul 9 02:08:52 2024 +0000 Rename xcomposer 4KHD commit 6da76f36ecf5f9aa73057e767a4fcb60c99ff896 Author: Bo Li Date: Tue Jul 9 11:55:56 2024 +1000 Upgrade lmms-eval to version 0.2.1 commit cd1858523fcd8630082cbefba8710e0de3ee8805 Author: Bo Li Date: Tue Jul 9 11:52:23 2024 +1000 Upgrade lmms-eval to support more models and evaluation tasks commit 672d7e5bb49dcb34e1b2fdeb09f3f4588dc583a6 Author: Bo Li Date: Tue Jul 9 11:43:41 2024 +1000 feat: Add tie_weights parameter to Llava model initialization commit 2037a86261b55fa42b8ba3a04eab192b3e69d6ea Merge: e6844db1 a5c18692 Author: Bo Li Date: Tue Jul 9 11:37:12 2024 +1000 Fix gen kwargs image aspect ratio in internvl2 commit a5c186925de989b616f58a35ece36065a32b4594 Merge: 2ebec77f 557083a1 Author: Li Bo Date: Tue Jul 9 09:15:56 2024 +0800 Merge pull request #137 from shuyansy/main add MLVU task commit 557083a156c3dd67ac79e22b4202e9b69b6b00f4 
Author: Yan Shu <570533048@qq.com> Date: Mon Jul 8 16:56:50 2024 +0800 Add files via upload commit 2ebec77f5606d79e9a7b995970e32792050606a1 Merge: 211bfede b23d349e Author: Li Bo Date: Mon Jul 8 11:53:06 2024 +0800 Merge pull request #136 from Dousia/main Add detailcaps commit b23d349e46d60dc149ffaa54d6e019f4996ed92d Author: ByteDance Date: Sun Jul 7 23:24:19 2024 +0800 Add install capture_metric in env commit c6e211d5f9dbb7572d3a141b6504cb1ca2007c33 Author: ByteDance Date: Sun Jul 7 23:04:13 2024 +0800 Add detailcaps commit 211bfedebad243ef82a8b0be36c3b5a9b9cb2f72 Merge: 7c208b76 79514eee Author: Li Bo Date: Tue Jul 2 23:05:12 2024 +0800 Merge pull request #133 from EvolvingLMMs-Lab/dev/wild_vision Add wild vision bench commit 79514eeebcfd6f655be2a10c776037d12a7b7214 Author: kcz358 Date: Mon Jul 1 15:10:02 2024 +0000 Fixing handling None filtered score commit 725fac2781446958b905e1e6c6eb3c0a8e582e49 Author: kcz358 Date: Mon Jul 1 08:25:42 2024 +0000 Fixing dataset name commit 8d963e132ac03fc0d835d480cfcfcabe72af143c Author: kcz358 Date: Mon Jul 1 08:24:51 2024 +0000 Fixing scoring logic commit e2990d0a69e876721256fdf946c68ba7ae0cbdc1 Author: kcz358 Date: Mon Jul 1 06:06:57 2024 +0000 Hardcode to keep image for wild vision commit ed381736730d8fb785b4ee919fdb751734ecef25 Author: kcz358 Date: Mon Jul 1 06:06:38 2024 +0000 Add wild vision 0617 commit 7c208b76640c986cfe94233dce735c3ca4ad4319 Author: Li Bo Date: Mon Jul 1 11:53:31 2024 +0800 Update README.md commit 39d40dea47bc59ff04e8b0cbc445345098debc9a Merge: e19b43a3 ba7081c0 Author: Li Bo Date: Mon Jul 1 11:47:09 2024 +0800 Merge pull request #129 from Dannoopsy/mmbench_ru add task MMBench-ru commit e19b43a3a1e7212e623061b164b0419cc0dda689 Merge: 11fd7e3f a0de8970 Author: Li Bo Date: Mon Jul 1 11:46:58 2024 +0800 Merge pull request #128 from Dannoopsy/gqa-ru add task gqa-ru commit 11fd7e3fc05908aeb01e4a6161a7b55cd38b3122 Merge: 383e7fea a7522592 Author: Li Bo Date: Mon Jul 1 11:46:16 2024 +0800 Merge pull request #130 from lscpku/vitatecs Add task VITATECS commit a75225926e5954f85466d257f99acf0163fde596 Author: lscpku Date: Fri Jun 28 20:37:06 2024 +0800 create new task vitatecs commit ba7081c0abac840002d320e30733e891298dfa11 Author: Dannoopsy <63581325+Dannoopsy@users.noreply.github.com> Date: Fri Jun 28 12:21:05 2024 +0300 change prompt to ru commit 27ea9c0055a8abf3a8198829b8617018479918e2 Author: Dannoopsy Date: Thu Jun 27 17:17:29 2024 +0000 add mmbench_ru_dev commit 383e7fead3138aedf62e9c0ec48303835ef26e2a Merge: 06fa000f ed2e7f79 Author: Li Bo Date: Fri Jun 28 00:14:10 2024 +0800 Merge pull request #126 from lorenzomammana/feature/external-package-integration External package integration using plugins commit ed2e7f792151d21bce8f1c498270b9391e1d5c85 Merge: 03947e14 06fa000f Author: Lorenzo Mammana Date: Thu Jun 27 15:38:10 2024 +0000 Merge branch 'main' into feature/external-package-integration commit a0de89708d5e6f259bb17f0eaace3c5b901b275c Author: Dannoopsy Date: Tue Jun 25 11:11:37 2024 +0000 new task gqa-ru commit 06fa000f60d3e4d160fac8ceb9959ae92a98f752 Author: kcz358 Date: Tue Jun 25 06:41:13 2024 +0000 Fix vid mme post prompt issue commit b388d79e0df6f60068196cb7047453ebd22d6ef1 Author: Li Bo Date: Sun Jun 23 22:31:16 2024 +0800 Update activitynetqa_generation.yaml commit 8f9d620fcd9d0a0742ee6bcf51ea63bd6b088a36 Author: Li Bo Date: Sun Jun 23 14:02:25 2024 +0800 Update pyproject.toml commit 6341b7c15ce9fb28eb06b067ddb299d6cf2e16c3 Merge: fce85f1b 903b042b Author: Li Bo Date: Sun Jun 23 14:02:02 2024 +0800 Merge pull request 
#125 from EvolvingLMMs-Lab/dev/interleave [Model] aligned llava-interleave model results on video tasks commit 903b042be016016d4ebeecb07701f3076a2d323c Author: kcz358 Date: Sat Jun 22 12:07:13 2024 +0000 Remove unnecessary lines for video llava commit d78ec86407b729a964906a8c2e50704b4bc74d06 Merge: ebe7217a fce85f1b Author: Li Bo Date: Sat Jun 22 13:57:31 2024 +0800 Merge branch 'main' into dev/interleave commit ebe7217a486c1e754e42c2cbdb834e09fbbcc9b0 Author: kcz358 Date: Sat Jun 22 02:57:08 2024 +0000 Delete unnecessary lines commit 120c474b056f9177c74e1fd9691d59e2f234b785 Author: kcz358 Date: Fri Jun 21 08:38:41 2024 +0000 Revise model registry for llava_hf and longva commit 7d6201f921088afd3f52a35076e3c6fcc9aa518c Author: kcz358 Date: Fri Jun 21 08:38:24 2024 +0000 Add longva commit 12f480699c71a12a24d4349d9b0681933201a3a6 Author: kcz358 Date: Fri Jun 21 08:35:39 2024 +0000 Remove unnecessary lines since use batched visuals now in llava commit 12cea76f1f0f14b1fd1007c9d39a9b0557368637 Author: Bo Li Date: Thu Jun 20 18:15:32 2024 +0000 chore: Add loguru for logging in lmms_eval package commit 03947e14a46fd25b412931f7c9c25f4a2971d0b4 Author: Lorenzo Mammana Date: Wed Jun 5 13:40:41 2024 +0000 feat: Allow including external tasks from plugins commit b80a91f73e15ddd0b0ce1322d7d121fa14030eed Author: Lorenzo Mammana Date: Wed Jun 5 13:04:55 2024 +0000 feat: Allow loading model configurations from other packages commit 8ef24740dd48a11c97eb627f2fff4aca107fef0d Author: Bo Li Date: Thu Jun 20 12:11:03 2024 +0000 chore: Remove unused models from lmms_eval package commit af38885fc2e066f5ea44388f33e07176f836fe28 Author: Bo Li Date: Thu Jun 20 12:07:09 2024 +0000 chore: Handle ImportError when importing models Handle the ImportError exception when importing models in the lmms_eval package. This change adds a try-except block to catch the ImportError and print an error message indicating the failed import. This will help with troubleshooting and identifying any issues with the model imports. 
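A minimal sketch of the guarded import described in that commit; the module path and class name below are placeholders rather than the exact registry entries:

```python
try:
    from lmms_eval.models.llava import Llava  # placeholder model import
except ImportError as e:
    # Report which model failed to import instead of silently ignoring the error.
    print(f"Failed to import model 'llava': {e}")
    Llava = None
```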
commit fce85f1b03ff7043b29dee787c5d17a08dd2687a Merge: dbe63293 d94f83cb Author: Li Bo Date: Thu Jun 20 20:02:12 2024 +0800 Merge pull request #120 from EvolvingLMMs-Lab/pufanyi/hf_dataset_docs Add docs for datasets upload to HF commit dbe63293245a5141fdfd80bda7657c304f6bd32f Author: choiszt Date: Thu Jun 20 15:14:21 2024 +0800 update ablation for videomme datasets commit d94f83cb3f08b61a2c75cc4326e58792100605b3 Author: Li Bo Date: Thu Jun 20 13:30:59 2024 +0800 Update README.md commit cab8159ff35db330536c0b6dfb4b0a3b24142209 Author: Li Bo Date: Thu Jun 20 13:30:29 2024 +0800 Update README.md commit 45876652a877a8006b828f32f5cc4660629f9190 Author: kcz358 Date: Thu Jun 20 03:55:30 2024 +0000 Add llava_hf back to registry commit 3463651b8c54d36cd94169e3d376f5ed225a195a Author: kcz358 Date: Thu Jun 20 03:54:33 2024 +0000 Remove handling non-visual loop in llava commit cb0d3f49b72790b081f981e0e6147131542f7f68 Author: Fanyi Pu Date: Thu Jun 20 02:11:18 2024 +0800 update readme commit 813877bfe5ac590cdbe92dd74d18f83a2091f748 Author: Fanyi Pu Date: Wed Jun 19 15:37:52 2024 +0800 to sh script commit a14684b8557d5894976448a5c559ed7a66a6cf16 Author: Fanyi Pu Date: Wed Jun 19 15:37:04 2024 +0800 lint commit d0f8851d42ba31f5da2a7a65e91499db45174dbc Author: Fanyi Pu Date: Wed Jun 19 15:36:48 2024 +0800 small fix commit 63748e9718f287ad433afc90e340b5e17a89c1ed Author: Fanyi Pu Date: Wed Jun 19 15:36:43 2024 +0800 small fix commit 7f1159a1fe04cfb783dc31d4fbdef3bda0ce19e4 Author: Fanyi Pu Date: Wed Jun 19 15:35:05 2024 +0800 update preparation commit 19f9bd621c76a483ff98f8c7eb78f64753da683a Author: Fanyi Pu Date: Wed Jun 19 15:23:24 2024 +0800 docs commit ce6f889ba02d819979c7922f6336cf4f1f718f65 Author: Fanyi Pu Date: Wed Jun 19 15:04:16 2024 +0800 tutorial commit f513c520c2a3dad26d2b2ca5c4ed4db05a493c73 Author: Bo Li Date: Wed Jun 19 06:51:19 2024 +0000 chore: Update dependencies to fix potential risks and improve compatibility commit efb529552c5e4ba039a4cba8e9aa5cb7ba65bf90 Author: kcz358 Date: Wed Jun 19 10:25:58 2024 +0800 Release llava-wilder commit 742651fc9daf97e2f57831ed6e6e7ee7ead7d555 Author: Fanyi Pu Date: Wed Jun 19 07:44:26 2024 +0800 feat: Add support for auto downloading tar format videos commit 511b6259828212fcba954cdeb8cf90d6e5daabf8 Merge: 22a4958e 050b2c37 Author: Bo Li Date: Tue Jun 18 17:01:03 2024 +0000 Merge branch 'main' of https://github.com/EvolvingLMMs-Lab/lmms-eval commit 050b2c370017e9b97475dd6cf01fd051b5ca5c86 Merge: 74facb41 ef306512 Author: Li Bo Date: Tue Jun 18 13:13:38 2024 +0800 Merge pull request #114 from zjysteven/add-tinyllava add tinyllava commit ef306512e5135f76dffa383f600b8733015836e8 Author: Jingyang Zhang Date: Mon Jun 17 17:57:02 2024 -0400 fix typo commit 9bab67732a4238097725deddf867fb1946ffee40 Merge: dbfb2387 74facb41 Author: Jingyang Zhang Date: Sun Jun 16 10:56:05 2024 -0400 Merge branch 'EvolvingLMMs-Lab:main' into add-tinyllava commit 74facb41a826691dfce4458cf1d8659b34fc5bf5 Merge: 8ba192f9 d5df72de Author: Li Bo Date: Sun Jun 16 17:59:19 2024 +0800 Merge pull request #118 from teowu/main Fix the potential risk by PR #117 commit d5df72de2d03108d6b365818ecc3551ac9aa6302 Merge: 5bf59ed2 8ba192f9 Author: Teo (Timothy) Wu Haoning <38696372+teowu@users.noreply.github.com> Date: Sun Jun 16 15:32:13 2024 +0800 Merge branch 'EvolvingLMMs-Lab:main' into main commit 5bf59ed250da98a408a94e214a73caa400cba842 Author: teowu Date: Sun Jun 16 07:27:28 2024 +0000 fix #117, allow auto download with tar format videos commit 98b3955cb808e36303c030aea78eb037d1ec59ce Merge: 
a056f118 be9dada8 Author: teowu Date: Sun Jun 16 07:25:07 2024 +0000 Merge branch 'main' of https://github.com/teowu/lmms-eval into main commit a056f118704eccec86ce32ab86981ce4bc1e1deb Author: teowu Date: Sun Jun 16 07:23:54 2024 +0000 fix #117, allow auto download with tar format videos commit 8ba192f94edf5d99598983445d5faa4f8807c49f Merge: 7cc28907 be9dada8 Author: Li Bo Date: Sat Jun 15 17:30:59 2024 +0800 Merge pull request #117 from teowu/main LongVideoBench for LMMs-Eval commit be9dada8b4189c53c08e1674ab273242cf2f80a0 Merge: 62ea8ceb 7cc28907 Author: Teo (Timothy) Wu Haoning <38696372+teowu@users.noreply.github.com> Date: Sat Jun 15 16:39:20 2024 +0800 Merge pull request #1 from EvolvingLMMs-Lab/main Merge pull request #113 from teowu/main commit 62ea8ceb223ef2b51ebab2bcd50d5cf339c35cfe Author: teowu Date: Sat Jun 15 08:30:11 2024 +0000 LongVideoBench support: image LMMs (idefics2, phi3) and video LMMs (LLaVA-Next-Video-34B) commit 7cc28907edbb4eb58ee1398772a48110ea35dd96 Merge: 4bc7224d ea14cd4b Author: Li Bo Date: Sat Jun 15 14:10:22 2024 +0800 Merge pull request #113 from teowu/main Q-Bench, Q-Bench2, A-Bench commit dbfb23873979f789477f4797ee2d6071e0fd921e Author: Jingyang Date: Fri Jun 14 16:20:42 2024 -0400 add tinyllava commit ea14cd4b361f4c95b3665cbdb95bc51754090eb5 Author: teowu Date: Fri Jun 14 15:01:52 2024 +0000 Add qbench, qbench2, abench; fix phi3v as its current implementation does not support multi-image commit 4bc7224dcd27fe8b288bfc3fed4d7a9da9635658 Merge: 2797987f bf14cb85 Author: Li Bo Date: Fri Jun 14 02:14:43 2024 +0800 Merge pull request #111 from XinrunDu/main add II-Bench commit bf14cb8527b2b7ac438a36567a875168bc02d294 Author: XinrunDu Date: Thu Jun 13 09:37:02 2024 +0000 fix dataset_path commit 6248113f4e11a0ac396d31fa1b032a142fea8cb4 Author: XinrunDu Date: Thu Jun 13 09:32:06 2024 +0000 add II-Bench commit 2797987f5b88b87bd172714b678a75a1d8051826 Merge: 63d82f1f 66d4bb2d Author: Li Bo Date: Thu Jun 13 11:14:47 2024 +0800 Merge pull request #109 from EvolvingLMMs-Lab/pufanyi/update_version [Small Update] Update the version of LMMs-Eval commit 66d4bb2d9c9afbbdea40196d4ad80e214d0b14b6 Author: Fanyi Pu Date: Thu Jun 13 11:13:00 2024 +0800 update version commit 63d82f1ff11eb430d91a15d6788a1f0b4d596850 Author: Li Bo Date: Thu Jun 13 11:04:32 2024 +0800 Update README.md commit 44a33799671cb668f55366d5e5a4ddb051a3a1b4 Merge: 5ed00356 0ce46d08 Author: Li Bo Date: Thu Jun 13 04:00:12 2024 +0800 Merge pull request #105 from tianyu-z/main Include VCR commit 0ce46d088e473d12d63de44f17c67dceab25658c Author: Suyuchen Date: Wed Jun 12 15:56:34 2024 -0400 update README.md commit 46a88d8b0199ed44d2ff459fb372f2e006960cea Merge: 47b13b9b 5ed00356 Author: Suyuchen Date: Wed Jun 12 15:50:26 2024 -0400 merged readme.md commit 47b13b9b320d36ac53b3622557e31239f7c22621 Author: Suyuchen Date: Wed Jun 12 15:30:52 2024 -0400 update aggregation function for vcr_wiki commit 5ed00356676cf5d0ff056cf27d1b519b8e303ff7 Author: Li Bo Date: Thu Jun 13 03:21:42 2024 +0800 Update README.md commit ed8806839db5988ced672bd162b7b046edb4863a Author: Li Bo Date: Thu Jun 13 03:13:59 2024 +0800 Update README.md commit fea3806026932a6e2bd6e538bcc413e33abdf245 Merge: d99a24ab 05dc8e85 Author: Li Bo Date: Thu Jun 13 03:11:49 2024 +0800 Merge pull request #108 from EvolvingLMMs-Lab/internal_main_dev [Upgrade to v0.2] Embracing Video Evaluations with LMMs-Eval commit 05dc8e853eab7c6bc782a1e2662d2efe7422f767 Author: Bo Li Date: Wed Jun 12 15:56:04 2024 +0000 chore: Update lmms-eval to support video evaluations 
for LLaVA models commit cbeee20bc4ffb510a2b23d96cdaf4077be7c2a9e Author: Bo Li Date: Wed Jun 12 15:50:30 2024 +0000 chore: Update lmms-eval to support video evaluations for LLaVA models commit f00d5498b69dd4f7e54c907ac906abc7c128f000 Author: Bo Li Date: Wed Jun 12 15:46:33 2024 +0000 Update image alignment in README.md commit 34156335db74cef9e3f0915d7172fd6b22456c15 Author: Bo Li Date: Wed Jun 12 15:43:16 2024 +0000 Update llava conv_template in lmms_eval/models/llava.py commit 50575a950736bc8fc1e191310314cbb5fdff5720 Author: Bo Li Date: Wed Jun 12 15:39:03 2024 +0000 chore: Update lmms-eval to support video evaluations for LLaVA models commit c9b2252fb8a15dd04252af5e6b4613855afd6ada Author: Bo Li Date: Wed Jun 12 15:33:48 2024 +0000 Bump version to 0.2.0.dev0 commit 465bd4205e8097e9c037b24a3ed08dd6a7694efa Merge: e43bd840 d99a24ab Author: Bo Li Date: Wed Jun 12 15:04:25 2024 +0000 Merge branch 'main' of https://github.com/EvolvingLMMs-Lab/lmms-eval into internal_main_dev commit e43bd840b63eb499856e36d9d2ba45c924abcead Author: Bo Li Date: Wed Jun 12 14:54:06 2024 +0000 chore: Remove unnecessary files and code related to live_bench and sft_eval tasks commit d99a24abd06df10d07e5a4d0ad5030613f92f2e7 Merge: 374590be a66003be Author: Li Bo Date: Wed Jun 12 19:45:57 2024 +0800 Merge pull request #107 from AtsuMiyai/new_task/upd_update update gpt-3.5-turbo version commit a66003befe4175824a1be6ed59f5f5b88c15f792 Author: AtsuMiyai Date: Wed Jun 12 17:05:17 2024 +0900 update gpt-3.5-turbo version commit ee91f272985f32eeb9cd6faa41afdd8eb49cac30 Author: AtsuMiyai Date: Wed Jun 12 16:50:53 2024 +0900 update gpt-3.5-turbo version commit 326b9694fc77398592b8caf3ba0bc2e2bb903813 Author: tianyu-z Date: Mon Jun 10 20:07:40 2024 -0400 include std and confidence interval commit cd050d4a721d01a2ace0cd030cf7f8dc67eb8c4d Author: Suyuchen Date: Mon Jun 10 18:49:47 2024 -0400 update vcr_wiki tasks in README.md commit 205721e0aad76dde30255e56149bbed121883356 Author: Suyuchen Date: Mon Jun 10 18:43:15 2024 -0400 update vcr_wiki tasks commit db8e718b502469e8536ee359c5559de87635ffc7 Author: tianyu-z Date: Mon Jun 10 16:13:58 2024 -0400 include the try-except logic for spacy commit 427dabb790118f538b64e4e5bf6a7aab9689b3d9 Author: Suyuchen Date: Mon Jun 10 15:51:05 2024 -0400 add crossed_text to vcr_wiki output commit 043b483eb55f7be4fea75c9bc0b9b03d251b109b Author: tianyu-z Date: Mon Jun 10 15:47:00 2024 -0400 switch logic commit e1f04db8f58dd10591fde335ea13f74cda7c79bd Author: tianyu-z Date: Mon Jun 10 02:38:21 2024 -0400 modify the form of VCR commit 96e8d9867c9549ab7490f4b12cfeb6a06238e0aa Author: tianyu-z Date: Mon Jun 10 00:10:30 2024 -0400 init include vcr commit 374590be62f988a76cf6704cfe394cd8ae7d4cb6 Merge: 504685e2 cb3b9ce7 Author: Kaichen Zhang - NTU Date: Fri Jun 7 20:25:48 2024 +0800 Merge pull request #101 from Gumpest/main Update conbench in README commit 504685e20b17659b913cf46f3012c16bf429e09d Author: Li Bo Date: Thu Jun 6 15:42:15 2024 +0800 Update README.md commit cb3b9ce71411da862ff01342a9122a3c656ffbd1 Merge: c9793b38 67b64ea4 Author: Yuan Zhang <56063339+Gumpest@users.noreply.github.com> Date: Thu Jun 6 11:22:24 2024 +0800 Merge branch 'EvolvingLMMs-Lab:main' into main commit c9793b3883714f254a700230b7bee781d6110e73 Author: Yuan Zhang Date: Thu Jun 6 11:21:05 2024 +0800 update README commit 67b64ea44a5a39d96c7a196a8a8345a7486bd912 Merge: 8ee7848a 5fd68451 Author: Li Bo Date: Wed Jun 5 23:12:58 2024 +0800 Merge pull request #100 from Gumpest/main add Conbench commit 
5fd684515c55ef643726c1b6c720c7cbd2183ba1 Author: Yuan Zhang Date: Wed Jun 5 21:52:31 2024 +0800 add conbench commit 8ee7848aaa6383aa1f919c3f21199c81db3fff89 Merge: 747e1978 6fefaf7c Author: Li Bo Date: Tue Jun 4 17:09:33 2024 +0800 Merge pull request #95 from AtsuMiyai/new_task/upd add MM-UPD commit 747e19782996065cdce7157ee8c5e15beb5b6c59 Merge: 4854a34d 05843072 Author: Li Bo Date: Tue Jun 4 17:09:04 2024 +0800 Merge pull request #97 from CaraJ7/update Add MathVerse in README.md commit 6fefaf7cea504e35583ee7217449da290295a7a4 Author: AtsuMiyai Date: Tue Jun 4 17:36:39 2024 +0900 update utils.py for leaderboard submission commit 5f4fe360def1c48ea0cb1da6409d192784882308 Author: AtsuMiyai Date: Sun Jun 2 23:28:27 2024 +0900 slightly change query_prompt for the reproduction commit 05843072d608b970bcada1cd0db65a3c80864060 Author: CaraJ7 <1350074492@qq.com> Date: Sun Jun 2 17:05:28 2024 +0800 Add MathVerse in README.md commit 0581ab3cfb362e2024988b46fbbb00324f1233c9 Author: AtsuMiyai Date: Fri May 31 16:09:45 2024 +0900 merge model_specific_prompt_kwargs and dataset_name into each task yaml commit 4854a34d4d37efb5e201f2691ecdb054590cf20b Author: Pu Fanyi Date: Sat May 4 19:23:39 2024 +0800 Group MMMU images into one image (#83) * update * update font * Add matplotlib.font_manager import in utils.py * Refactor font handling in add_order_label function in utils.py * group mmmu --------- Co-authored-by: Li Bo commit d224794c49520f4d28a31862cf977198cd6cbc5e Author: AtsuMiyai Date: Wed May 29 15:15:59 2024 +0900 add upd commit 453e7936424220f02b99517059ca71babfbe5f5a Author: AtsuMiyai Date: Wed May 29 15:03:30 2024 +0900 add upd commit 909edd6769ddcf8a546be4fdd129416687516878 Author: AtsuMiyai Date: Wed May 29 12:52:21 2024 +0900 add upd commit 7c1ac9706cafc4801fa4da181d2f610b7838c7b8 Author: AtsuMiyai Date: Wed May 29 12:50:32 2024 +0900 add upd commit 811301c5280ddd74986645086f026ab730c8848c Author: AtsuMiyai Date: Wed May 29 12:46:58 2024 +0900 add upd commit 71401bafd1d515f704f86ab4817a758542bc4672 Author: AtsuMiyai Date: Wed May 29 12:41:21 2024 +0900 add upd commit 24dc435908d921e9f1a5706e3141b12e5d838d18 Author: Bo Li Date: Mon May 27 10:17:32 2024 +0000 fix compatibility issue of older version llava commit 616edf43731415b35f0f5e97748ed2e017a2891d Author: Bo Li Date: Mon May 27 09:32:26 2024 +0000 [Fix] import issues of multilingual llava and olympiadbench commit 4c5a99e21a63fb0ee1c7d15546d18066e1d9894b Merge: 45c05b2b b05c3e22 Author: Li Bo Date: Mon May 27 14:19:53 2024 +0800 Merge pull request #87 from vfragoso/vifragos/phi3v Adding microsoft/Phi-3-vision-128k-instruct model. commit b05c3e222fabd308dd7af4e04c1c6a0812962fe6 Author: Victor Fragoso Date: Fri May 24 16:36:37 2024 +0000 Adding documentation of Phi3v class. commit c2008971308ce8168d57c24d00b725832f099244 Author: Victor Fragoso Date: Fri May 24 16:25:02 2024 +0000 Adding prompt arguments for Phi3v on MathVista-TestMini commit 7f9fb6bcc6cd24a7b8011b8753d0ea98cc2451fd Author: Victor Fragoso Date: Fri May 24 13:24:16 2024 +0000 Adding Phi3v model. 
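The "Group MMMU images into one image (#83)" change noted a few commits above stitches a document's images onto a single canvas and marks each with an order label via a font resolved through matplotlib's font manager. A rough sketch of that idea, assuming hypothetical helper names (`add_order_label`, `group_images`) rather than the actual utils.py code:

```python
from matplotlib import font_manager
from PIL import Image, ImageDraw, ImageFont

def add_order_label(image, label, font_size=24):
    # Resolve a usable TrueType font path through matplotlib's font manager.
    font_path = font_manager.findfont("DejaVu Sans")
    font = ImageFont.truetype(font_path, font_size)
    draw = ImageDraw.Draw(image)
    draw.text((5, 5), label, fill="red", font=font)
    return image

def group_images(images):
    # Paste all images side by side on one white canvas, labelled <1>, <2>, ...
    labeled = [add_order_label(img.convert("RGB"), f"<{i + 1}>") for i, img in enumerate(images)]
    width = sum(img.width for img in labeled)
    height = max(img.height for img in labeled)
    canvas = Image.new("RGB", (width, height), "white")
    x = 0
    for img in labeled:
        canvas.paste(img, (x, 0))
        x += img.width
    return canvas
```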
commit 45c05b2b2bece76e06849a52a0d034f9c0ac2367 Author: kcz358 Date: Thu May 23 03:47:36 2024 +0000 Set printing info for llava_hf to debug level commit 53f013ed8278776551ca992562253387cc9968d2 Author: kcz358 Date: Thu May 23 03:41:39 2024 +0000 Fix pope random name in pope full commit 22520a95f13334b75eee0cf0387151067a6bf516 Author: kcz358 Date: Thu May 23 03:41:14 2024 +0000 Add separated pope tasks by category commit d1eefb1565014b47287ffa6b350229062f8f602f Author: kcz358 Date: Thu May 9 08:36:02 2024 +0000 Update gitignore commit b2b4dbd2dc13432c79208db35abf7f55c97f1790 Author: kcz358 Date: Mon May 20 07:45:11 2024 +0000 Comment out Spice in caption task so that don't need to download stanford nlp model commit 662f05ce4c62a46a83f819d3a5925a9bd20059b5 Author: kcz358 Date: Mon May 20 03:13:13 2024 +0000 Comment out parse result in xcomposer commit 09329322916bfbb604d72ddaf50441a0947f8805 Author: kcz358 Date: Thu May 16 03:55:39 2024 +0000 Fix instructblip qformer size mismatch and multi-images problem commit 557a6a3b15e07e506bc05e2cc76ff6a2f8c93964 Author: kcz358 Date: Thu May 16 03:11:41 2024 +0000 Remove redundant code in fuyu commit 6aeb5504e74ed1980b53700d8e4d4dcf7d1b38fc Author: kcz358 Date: Thu May 16 01:45:24 2024 +0000 Fix idefics2 llava in the wild bugs commit aea80e6a71f716951353e1e5d68380243396b4d6 Author: kcz358 Date: Wed May 15 11:07:35 2024 +0000 Better task list_with_num commit 3c12a080d66b9c38f615b961befca7c30f82fa39 Author: Li Bo Date: Sat May 18 02:35:52 2024 +0800 Update LICENSE commit 82317a635a4978b32e095a06cc295d0ae23661c2 Author: Li Bo Date: Sat May 18 02:29:09 2024 +0800 Update LICENSE commit a8bba1cdb51061a0d27bf9a98cca1505b5c58ea5 Author: Li Bo Date: Sat May 18 02:28:03 2024 +0800 Create LICENSE commit caa5893b5fd2c1d32c72b97f371ccd9a8d9ec3a0 Merge: c0944486 423b0060 Author: Li Bo Date: Mon May 13 11:45:26 2024 +0800 Merge pull request #73 from EvolvingLMMs-Lab/kc/qwen_vl_api [Feat] Add qwen vl api commit c09444860362a136f17641f8b2a1f91c2bbc3715 Author: kcz358 Date: Sat May 11 06:11:19 2024 +0000 Fix llava_hf image tokens number issue commit 64f07e497f53e5bcbe9e8fb5830cc7a1daaf7ff1 Author: kcz358 Date: Thu May 9 02:04:10 2024 +0000 Fix endless warning for llava_hf generation commit 8aaa828108da8514dd9cd23a9d6d83a8b67f2d65 Author: Bo Li Date: Thu May 2 06:13:56 2024 +0000 Add model_name parameter to Llava constructor commit 7847dc4d8efe60605102414bb071b1da9851228e Author: kcz358 Date: Tue May 7 03:15:59 2024 +0000 Parse result for llava_hf 1.6 commit 3e56b4f92db39a2ce92903b0c43a34f1d14d59ec Author: kcz358 Date: Tue May 7 03:09:56 2024 +0000 Fix llava_hf generation for 1.6 commit fa3ff92b07ea5aaa633a2039818c310744f84d07 Author: kcz358 Date: Mon May 6 08:32:57 2024 +0000 Fix llava conv template for llama3 commit 423b00606aa77fd6b324c19e3d480b73ab852db6 Author: kcz358 Date: Sun May 5 07:54:52 2024 +0000 Add qwen vl api commit b7fd7a9f7aa3c0e1e50374047dfffc46a7462b90 Merge: 986139a9 c5a130b6 Author: Li Bo Date: Sun May 5 13:19:48 2024 +0800 Merge pull request #59 from EvolvingLMMs-Lab/add_idefics2 add idefics2 commit 986139a9a31154679bdea029b09639f84712db27 Merge: b46239ca 8d3526c0 Author: Li Bo Date: Fri May 3 01:18:18 2024 +0800 Merge pull request #36 from cocoshe/main [Fix] repr llava doc commit b46239cabab7b545ec99d9eae6c851e531b18374 Merge: bc69a744 373265f2 Author: Li Bo Date: Fri May 3 01:17:34 2024 +0800 Merge pull request #56 from gagan3012/main Multilingual LLava bench commit bc69a744d2cffeb06eba62e843bcc7869e27613a Merge: eef3aeb6 626e8a91 Author: Li Bo 
Date: Fri May 3 01:12:14 2024 +0800 Merge pull request #70 from hunterheiden/hsh/new_task/WebSRC Bugfix: WebSRC should be token-level F1 NOT character-level commit 626e8a91a4af2dd5dd774fc130cc2f4d74b2bc37 Author: Hunter Heidenreich Date: Thu May 2 09:31:03 2024 -0400 Bugfix: WebSRC should be token-level F1 NOT character-level commit eef3aeb6ab589bb1d5045af5b5c1984a69402d19 Merge: c4e9dd9f 9bca4413 Author: Li Bo Date: Thu May 2 14:38:17 2024 +0800 Merge pull request #69 from hunterheiden/hsh/new_task/WebSRC [New Task] WebSRC (multimodal Q&A on web screenshots) commit 9bca441376325173128e5c50087f068e519c48da Author: Hunter Heidenreich Date: Wed May 1 11:07:29 2024 -0400 Add code to enable compilation of submission for WebSRC test split commit 7687495b1ed552eeba088cb9ad5aaf1170e7fff9 Author: Hunter Heidenreich Date: Wed May 1 10:47:32 2024 -0400 Draft and validate websrc eval on dev split commit 4eebd3e5d7ab3b8c3116eea57318db72d2ce32bb Author: Hunter Heidenreich Date: Wed May 1 10:46:54 2024 -0400 Update main README with new task names commit 35fe80b67656114a8824eb59574089663bdc4c9a Author: Hunter Heidenreich Date: Wed May 1 10:46:20 2024 -0400 Draft README for WebSRC commit 955bd0635cc6c14a96ad869f1002e6dbefdc5071 Author: Hunter Heidenreich Date: Tue Apr 30 10:16:21 2024 -0400 Init webSRC commit c4e9dd9f6e40e8586587c4a75987aa109a37f14b Merge: d8a3a99f 319afccb Author: Li Bo Date: Fri Apr 26 14:37:22 2024 +0800 Merge pull request #63 from hunterheiden/hsh/new_task/screenspot New Task: ScreenSpot - Grounding (REC) and instruction generation (REG) on screens commit 319afccbe713ddf40a8a6fa28501e64c0ad34725 Author: Hunter Heidenreich Date: Thu Apr 25 11:44:34 2024 -0400 slight update commit 2f3811ca1bbad6a441016b05fde09a571900fca8 Author: Hunter Heidenreich Date: Thu Apr 25 11:41:04 2024 -0400 Add README file specific to ScreenSpot commit 28962cbe83631ec5d6481aaea4907a7c96fec848 Author: Hunter Heidenreich Date: Wed Apr 24 11:52:33 2024 -0400 Update README to reflect new tasks commit e457cfb4f2d6869e8367d6d5b03ad25ee4acc363 Author: Hunter Heidenreich Date: Tue Apr 23 18:33:16 2024 -0400 Create ScreenSpot on clean branch commit d8a3a99ff6142fe101fa3c188cc7f29593c44345 Merge: 3dcd0158 ed171293 Author: Li Bo Date: Tue Apr 23 10:34:03 2024 +0800 Merge pull request #61 from tupini07/patch-1 Fix typo in Qwen-VL that was causing "reference before assignment" commit ed171293d1e82075c5c6a847fc91ecbfd45cf89f Author: Andrea Tupini Date: Mon Apr 22 14:56:41 2024 -0600 refactor query construction for clarity commit cd874201c46f32a2903ddffae85f9db73e14adfd Author: Andrea Tupini Date: Mon Apr 22 14:54:29 2024 -0600 convert contexts to list if necessary and remove unnecessary construction of `questions` commit 85573674e90c8d505312ba18c5102e0051255078 Author: Andrea Tupini Date: Mon Apr 22 14:47:33 2024 -0600 Fix typo in qwen_vl that was causing "reference before assignment" commit 3dcd01582b719555bcf8eb25d91cc5e42abd2c5f Merge: 95df9fee 743673a1 Author: Li Bo Date: Sat Apr 20 22:03:16 2024 +0800 Merge pull request #60 from CaraJ7/main Add MathVerse commit 743673a1419b6e729e18c96f148745cc739d4c71 Merge: c1a54721 95df9fee Author: CaraJ7 <1350074492@qq.com> Date: Sat Apr 20 21:49:02 2024 +0800 Merge branch 'main' of https://github.com/EvolvingLMMs-Lab/lmms-eval commit c1a5472135c3b84061b64d997ab50dda0412ba4f Author: CaraJ7 <1350074492@qq.com> Date: Sat Apr 20 21:45:34 2024 +0800 Add MathVerse commit 373265f24e7a89cbd49ab724a2e388cc0930be78 Author: Gagan Bhatia <49101362+gagan3012@users.noreply.github.com> Date: Fri 
Apr 12 17:21:39 2024 -0700 Add files via upload commit d8530514a5ef9378d2adeaceb228b60ec25a6718 Author: Gagan Bhatia <49101362+gagan3012@users.noreply.github.com> Date: Fri Apr 12 17:19:49 2024 -0700 Create README.md commit 22a4958e993463edff352ac033014f9a485706cc Author: Bo Li Date: Thu Apr 4 17:12:43 2024 +0000 [WIP] adding mmbench dev evaluation (#75) * WIP * Update GPT evaluation model name and sys prompt * 🛠️ Scale accuracy to percentage The accuracy value is now multiplied by 100 in the aggregation function to represent it as a percentage. Regarding the evaluation process, `math` module importation and refactoring reduce progress log verbosity by logging every 100 evaluations instead of 10. It prevents potential logging overflow. Handling of NaN values is added to ensure 'default_value' is set in case of missing data, avoiding errors in split, category, and l2-category assignments. Finally, reporting of categorical and l2-categorical accuracies is streamlined through a new `calculate_hit_rates` function, improving code readability and maintenance. Issue refs: #1427, #1533 * Update GPT evaluation model name and API configuration * Refactor MMBench_Evaluator class to handle missing columns * Add print statements for detailed results in MMBench-CN(CC), MMBench-CN(Dev), and MMBench-EN(Dev) evaluations * Refactor MMBench-CN and MMBench-EN evaluation functions * 🔄 Refactor result processing and logging logic - Simplified the result processing functions across different utility modules (`cc_utils.py`, `cn_utils.py`, `en_utils.py`) to unify the handling of multiple-choice options. Now, all options ("A" to "E") are dynamically added to the result data, and default to "nan" if not provided in the document. - Removed redundant keys directly from the process results dict creation to avoid clutter and align with the new dynamic addition of options. - In `mmbench_evals.py`, removed the unnecessary check for all splits being 'dev' and streamlined the evaluation loop by eliminating the progress bar (tqdm) for a cleaner log output. - Commented-out code and verbose logging during evaluation, which may have interfered with performance, has been removed for a more efficient and less intrusive logging experience. This cleanup reduces redundancy in the codebase and improves evaluation performance. Refs #2045 --------- Co-authored-by: Bo Li (cherry picked from commit a19278c2ea6ddcbca64d3cc7f4efec7fe5775121) commit 8d3526c0869f0ad7747ff6bb02441140792b461c Author: cocoshe <1228759711@qq.com> Date: Thu Mar 28 13:38:36 2024 +0800 fix doc * feat: Add LlavaOneVision model to available models chore: Update sqlitedict dependency to version 2.1.0 * Revert "Squashed commit of the following:" This reverts commit 11b00999df3c43cb225482e030b791b2d454124c. * Refactor available models in lmms_eval Remove duplicate entries for "llava_hf", "llava_onevision", and "longva" in the AVAILABLE_MODELS dictionary in lmms_eval/models/__init__.py. * fix: Handle import errors in lmms_eval models/__init__.py The code changes in this commit fix the handling of import errors in the lmms_eval/models/__init__.py file. Previously, when an import error occurred, the code simply ignored it. This commit updates the code to log an error message using the logger module when an import error occurs. This commit also removes duplicate entries for "llava_hf", "llava_onevision", and "longva" in the AVAILABLE_MODELS dictionary. 
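A rough sketch of the registry pattern this fix describes: duplicate entries appear only once, and a failed import is logged rather than ignored. The entries, class names, and module paths below are examples, not the full lmms_eval model registry:

```python
import importlib

from loguru import logger

# Each model name maps to its implementing class; duplicate keys such as
# "llava_hf", "llava_onevision", and "longva" now appear exactly once.
AVAILABLE_MODELS = {
    "llava": "Llava",
    "llava_hf": "LlavaHf",
    "llava_onevision": "Llava_OneVision",
    "longva": "LongVA",
}

for model_name, class_name in AVAILABLE_MODELS.items():
    try:
        module = importlib.import_module(f"lmms_eval.models.{model_name}")
        globals()[class_name] = getattr(module, class_name)
    except ImportError as e:
        # Log the failed import instead of silently swallowing it.
        logger.error(f"Failed to import {class_name} from {model_name}: {e}")
```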
* fix: Handle import errors in lmms_eval models/__init__.py

* chore: Remove unused imports in lmms_eval/models/__init__.py and lmms_eval/tasks/vcr_wiki/utils.py

* Remove unused imports in lmms_eval/tasks/vcr_wiki/utils.py

* chore: Update lmms_eval/tasks/vcr_wiki/utils.py

This commit updates the `lmms_eval/tasks/vcr_wiki/utils.py` file. It removes unused imports and fixes the condition for loading Spacy models based on the `load_package` value in the config file. Additionally, it adds a debug log message when the Spacy models are not loaded due to `load_package` being set to False.

Remove unused imports in `lmms_eval/tasks/vcr_wiki/utils.py`

* feat: Add new subtasks to overall score calculation

The code changes in this commit add new subtasks to the overall score calculation in the `overall_score` function. The subtasks "ScanQA", "BLINK", "MathVerse", "SciVerse", and "Mantis" are included in the `categories` dictionary. This ensures that the scores for these subtasks are calculated and included in the evaluation results.

Remove unused imports and update subtask categories in `utils.py`

* feat: Add new subtasks to overall score calculation

* chore: Update lmms_eval/tasks/llava_interleave_bench/_default_template_interleave_yaml

Update the image aspect ratio in the default template for the llava_interleave_bench task. Change the value of "image_aspect_ratio" from "original" to "pad". This ensures that the generated images have a padded aspect ratio.

* if no response directly return 0

* Squashed commit of the following: commit b2a009b6bbf8353172f5a1dd9c29ea1f67610c02 Author: Pu Fanyi Date: Mon Jul 15 19:12:25 2024 -0700 if no response directly return 0 (#142) commit 5fc5f2f5acf454fc99448b0d62eb52b4bffba0d5 Author: Kaichen Zhang - NTU Date: Tue Jul 16 10:12:11 2024 +0800 Add Muirbench (#143) * handle gen kwargs in internvl2 * Add muirbench * Add files via upload (cherry picked from commit 557083a156c3dd67ac79e22b4202e9b69b6b00f4) * update --------- Co-authored-by: Fanyi Pu Co-authored-by: Yan Shu <570533048@qq.com> commit b2a009b6bbf8353172f5a1dd9c29ea1f67610c02 Author: Pu Fanyi Date: Mon Jul 15 19:12:25 2024 -0700 if no response directly return 0 (#142) commit 5fc5f2f5acf454fc99448b0d62eb52b4bffba0d5 Author: Kaichen Zhang - NTU Date: Tue Jul 16 10:12:11 2024 +0800 Add Muirbench (#143) * handle gen kwargs in internvl2 * Add muirbench commit 4f8db1d37b1f824432927e74d6d82e06bb5aaed1 Author: Pu Fanyi Date: Fri Jul 12 17:26:50 2024 -0700 Upload live_bench results (#140) * upload results * add a readme * chore: Update upload_results.py script to use shell syntax * Update upload_results.py * Update upload_results.py commit 18f3812c4f9af2e49af6b50e8afe7f607b8a75d6 Author: Pu Fanyi Date: Wed Jul 10 18:13:43 2024 -0700 Load tasks only one time (#139) * chore: Initialize tasks only once to avoid re-initialization * chore: Initialize tasks only once to avoid re-initialization * chore: Refactor task initialization to avoid re-initialization * chore: Update task initialization to fix include_path issue * chore: Update task initialization to fix include_path issue * chore: Remove unnecessary line in muirbench.yaml * chore: Remove unnecessary line in muirbench.yaml and update gitignore * chore: Update lmms_eval to use correct variable name for world size * Update mmvet * chore: Update lmms_eval to
use correct variable name for world size * chore: Remove unused lmms_eval configuration file * refactor: Update lmms_eval to handle both image and video tasks This commit updates the `Llava_OneVision` class in `llava_onevision.py` to handle both image and video tasks. It introduces conditional logic to differentiate between the two types of tasks and process the input accordingly. Additionally, it sets the image aspect ratio based on the number of visual inputs and the configuration settings. Closes #123 * Fix llava onevision loglikelihood video bug (cherry picked from commit f96e3e69fe86dcd9cb33d2bc18cc4ff2003de8be) * refactor: Update mm_spatial_pool_mode to use bilinear interpolation This commit updates the `mm_spatial_pool_mode` parameter in the `Llava_OneVision` class of `llava_onevision.py` to use bilinear interpolation instead of the previous average pooling mode. This change improves the spatial pooling process for the model. Closes #456 * chore: Update pyproject.toml with protobuf dependency version 3.20 * Squashed commit of the following: commit e106f49ceeb295fd4c89a0877073bc01b4b77c5f Author: Fanyi Pu Date: Thu Jul 25 08:14:03 2024 +0800 livebench_july commit a16295653fdda20d5e8c41c549d731ec422013e3 Author: Fanyi Pu Date: Mon Jul 22 15:09:58 2024 +0800 websites commit 2cdc06ffe6ba53a4c707c1acf9fc5f2e7886b2b8 Author: Fanyi Pu Date: Sun Jul 21 15:34:39 2024 +0800 everything use gpt-4o commit e67538d65526c58903d9e62d1914ebd39924ab67 Author: Fanyi Pu Date: Sun Jul 21 14:29:55 2024 +0800 chore: Update dataset capture settings in create_dataset.py commit 0a3bb33d37cda05bb7bfba4ecf873c2860092a03 Author: Fanyi Pu Date: Sun Jul 21 01:58:14 2024 +0800 gpt-4-turbo => gpt-4o commit 837f8b0400f04f4367f8f8f954afd64666d62fc6 Author: Fanyi Pu Date: Sat Jul 20 16:48:04 2024 +0800 chore: Update dataset name and version for live_bench task commit fa58e730978b5536005c8bd0291abbeddd761205 Author: Fanyi Pu Date: Sat Jul 20 15:05:13 2024 +0800 generate data commit faa96227a7af7bd6546578b2db68dce2acbc2c0c Author: Fanyi Pu Date: Sat Jul 20 13:15:18 2024 +0800 fix commit 60ea7ddb4fcd9f08013cd0d5b9dd8090f7e6b83e Author: Fanyi Pu Date: Sat Jul 20 13:12:31 2024 +0800 fix bugs commit 827d69d0bf967f5d69bfbee9848b4d568ca853b1 Author: Fanyi Pu Date: Sat Jul 20 08:39:41 2024 +0800 use claude to generate commit b7e2619d1a51144cd434861ac151187aed82c8c4 Author: Fanyi Pu Date: Sat Jul 20 07:36:59 2024 +0800 extract information commit f87d55d47cb0d6653765e9e3f988f4bc186f7d4c Author: Fanyi Pu Date: Sat Jul 20 07:24:07 2024 +0800 claude auto detect json mode commit dfdba507b5fbe985b0030ffec575f9f2638bc1ed Author: Li Bo Date: Tue Jul 16 11:13:52 2024 +0800 merge ov evals (#144) * chore: Update gpt_eval_model_name to "gpt-3.5-turbo" in mathvista.yaml * Squashed commit of the following: commit 994c9f97a2f8db3e9b7d7933d1e1680acde5b70b Author: Yan Shu <570533048@qq.com> Date: Mon Jul 8 17:21:23 2024 +0800 Add files via upload * Squashed commit of the following: commit e31cd7883d4555c7530795c7f102b8d78cbd372f Author: Bo Li Date: Wed Jul 10 12:08:08 2024 +1000 chore: Update lmms_eval/models/vila.py and lmms_eval/tasks/__init__.py commit 1d8c980d1089f9d7702c3b92d5c85039f2809c6d Author: kcz358 Date: Tue Jul 9 02:08:52 2024 +0000 Rename xcomposer 4KHD commit 6da76f36ecf5f9aa73057e767a4fcb60c99ff896 Author: Bo Li Date: Tue Jul 9 11:55:56 2024 +1000 Upgrade lmms-eval to version 0.2.1 commit cd1858523fcd8630082cbefba8710e0de3ee8805 Author: Bo Li Date: Tue Jul 9 11:52:23 2024 +1000 Upgrade lmms-eval to support more models and 
evaluation tasks commit 672d7e5bb49dcb34e1b2fdeb09f3f4588dc583a6 Author: Bo Li Date: Tue Jul 9 11:43:41 2024 +1000 feat: Add tie_weights parameter to Llava model initialization commit 2037a86261b55fa42b8ba3a04eab192b3e69d6ea Merge: e6844db1 a5c18692 Author: Bo Li Date: Tue Jul 9 11:37:12 2024 +1000 Fix gen kwargs image aspect ratio in internvl2 commit a5c186925de989b616f58a35ece36065a32b4594 Merge: 2ebec77f 557083a1 Author: Li Bo Date: Tue Jul 9 09:15:56 2024 +0800 Merge pull request #137 from shuyansy/main add MLVU task commit 557083a156c3dd67ac79e22b4202e9b69b6b00f4 Author: Yan Shu <570533048@qq.com> Date: Mon Jul 8 16:56:50 2024 +0800 Add files via upload commit 2ebec77f5606d79e9a7b995970e32792050606a1 Merge: 211bfede b23d349e Author: Li Bo Date: Mon Jul 8 11:53:06 2024 +0800 Merge pull request #136 from Dousia/main Add detailcaps commit b23d349e46d60dc149ffaa54d6e019f4996ed92d Author: ByteDance Date: Sun Jul 7 23:24:19 2024 +0800 Add install capture_metric in env commit c6e211d5f9dbb7572d3a141b6504cb1ca2007c33 Author: ByteDance Date: Sun Jul 7 23:04:13 2024 +0800 Add detailcaps commit 211bfedebad243ef82a8b0be36c3b5a9b9cb2f72 Merge: 7c208b76 79514eee Author: Li Bo Date: Tue Jul 2 23:05:12 2024 +0800 Merge pull request #133 from EvolvingLMMs-Lab/dev/wild_vision Add wild vision bench commit 79514eeebcfd6f655be2a10c776037d12a7b7214 Author: kcz358 Date: Mon Jul 1 15:10:02 2024 +0000 Fixing handling None filtered score commit 725fac2781446958b905e1e6c6eb3c0a8e582e49 Author: kcz358 Date: Mon Jul 1 08:25:42 2024 +0000 Fixing dataset name commit 8d963e132ac03fc0d835d480cfcfcabe72af143c Author: kcz358 Date: Mon Jul 1 08:24:51 2024 +0000 Fixing scoring logic commit e2990d0a69e876721256fdf946c68ba7ae0cbdc1 Author: kcz358 Date: Mon Jul 1 06:06:57 2024 +0000 Hardcode to keep image for wild vision commit ed381736730d8fb785b4ee919fdb751734ecef25 Author: kcz358 Date: Mon Jul 1 06:06:38 2024 +0000 Add wild vision 0617 commit 7c208b76640c986cfe94233dce735c3ca4ad4319 Author: Li Bo Date: Mon Jul 1 11:53:31 2024 +0800 Update README.md commit 39d40dea47bc59ff04e8b0cbc445345098debc9a Merge: e19b43a3 ba7081c0 Author: Li Bo Date: Mon Jul 1 11:47:09 2024 +0800 Merge pull request #129 from Dannoopsy/mmbench_ru add task MMBench-ru commit e19b43a3a1e7212e623061b164b0419cc0dda689 Merge: 11fd7e3f a0de8970 Author: Li Bo Date: Mon Jul 1 11:46:58 2024 +0800 Merge pull request #128 from Dannoopsy/gqa-ru add task gqa-ru commit 11fd7e3fc05908aeb01e4a6161a7b55cd38b3122 Merge: 383e7fea a7522592 Author: Li Bo Date: Mon Jul 1 11:46:16 2024 +0800 Merge pull request #130 from lscpku/vitatecs Add task VITATECS commit a75225926e5954f85466d257f99acf0163fde596 Author: lscpku Date: Fri Jun 28 20:37:06 2024 +0800 create new task vitatecs commit ba7081c0abac840002d320e30733e891298dfa11 Author: Dannoopsy <63581325+Dannoopsy@users.noreply.github.com> Date: Fri Jun 28 12:21:05 2024 +0300 change prompt to ru commit 27ea9c0055a8abf3a8198829b8617018479918e2 Author: Dannoopsy Date: Thu Jun 27 17:17:29 2024 +0000 add mmbench_ru_dev commit 383e7fead3138aedf62e9c0ec48303835ef26e2a Merge: 06fa000f ed2e7f79 Author: Li Bo Date: Fri Jun 28 00:14:10 2024 +0800 Merge pull request #126 from lorenzomammana/feature/external-package-integration External package integration using plugins commit ed2e7f792151d21bce8f1c498270b9391e1d5c85 Merge: 03947e14 06fa000f Author: Lorenzo Mammana Date: Thu Jun 27 15:38:10 2024 +0000 Merge branch 'main' into feature/external-package-integration commit a0de89708d5e6f259bb17f0eaace3c5b901b275c Author: Dannoopsy Date: Tue 
Jun 25 11:11:37 2024 +0000 new task gqa-ru commit 06fa000f60d3e4d160fac8ceb9959ae92a98f752 Author: kcz358 Date: Tue Jun 25 06:41:13 2024 +0000 Fix vid mme post prompt issue commit b388d79e0df6f60068196cb7047453ebd22d6ef1 Author: Li Bo Date: Sun Jun 23 22:31:16 2024 +0800 Update activitynetqa_generation.yaml commit 8f9d620fcd9d0a0742ee6bcf51ea63bd6b088a36 Author: Li Bo Date: Sun Jun 23 14:02:25 2024 +0800 Update pyproject.toml commit 6341b7c15ce9fb28eb06b067ddb299d6cf2e16c3 Merge: fce85f1b 903b042b Author: Li Bo Date: Sun Jun 23 14:02:02 2024 +0800 Merge pull request #125 from EvolvingLMMs-Lab/dev/interleave [Model] aligned llava-interleave model results on video tasks commit 903b042be016016d4ebeecb07701f3076a2d323c Author: kcz358 Date: Sat Jun 22 12:07:13 2024 +0000 Remove unnecessary lines for video llava commit d78ec86407b729a964906a8c2e50704b4bc74d06 Merge: ebe7217a fce85f1b Author: Li Bo Date: Sat Jun 22 13:57:31 2024 +0800 Merge branch 'main' into dev/interleave commit ebe7217a486c1e754e42c2cbdb834e09fbbcc9b0 Author: kcz358 Date: Sat Jun 22 02:57:08 2024 +0000 Delete unnecessary lines commit 120c474b056f9177c74e1fd9691d59e2f234b785 Author: kcz358 Date: Fri Jun 21 08:38:41 2024 +0000 Revise model registry for llava_hf and longva commit 7d6201f921088afd3f52a35076e3c6fcc9aa518c Author: kcz358 Date: Fri Jun 21 08:38:24 2024 +0000 Add longva commit 12f480699c71a12a24d4349d9b0681933201a3a6 Author: kcz358 Date: Fri Jun 21 08:35:39 2024 +0000 Remove unnecessary lines since use batched visuals now in llava commit 12cea76f1f0f14b1fd1007c9d39a9b0557368637 Author: Bo Li Date: Thu Jun 20 18:15:32 2024 +0000 chore: Add loguru for logging in lmms_eval package commit 03947e14a46fd25b412931f7c9c25f4a2971d0b4 Author: Lorenzo Mammana Date: Wed Jun 5 13:40:41 2024 +0000 feat: Allow including external tasks from plugins commit b80a91f73e15ddd0b0ce1322d7d121fa14030eed Author: Lorenzo Mammana Date: Wed Jun 5 13:04:55 2024 +0000 feat: Allow loading model configurations from other packages commit 8ef24740dd48a11c97eb627f2fff4aca107fef0d Author: Bo Li Date: Thu Jun 20 12:11:03 2024 +0000 chore: Remove unused models from lmms_eval package commit af38885fc2e066f5ea44388f33e07176f836fe28 Author: Bo Li Date: Thu Jun 20 12:07:09 2024 +0000 chore: Handle ImportError when importing models Handle the ImportError exception when importing models in the lmms_eval package. This change adds a try-except block to catch the ImportError and print an error message indicating the failed import. This will help with troubleshooting and identifying any issues with the model imports. 
commit fce85f1b03ff7043b29dee787c5d17a08dd2687a Merge: dbe63293 d94f83cb Author: Li Bo Date: Thu Jun 20 20:02:12 2024 +0800 Merge pull request #120 from EvolvingLMMs-Lab/pufanyi/hf_dataset_docs Add docs for datasets upload to HF commit dbe63293245a5141fdfd80bda7657c304f6bd32f Author: choiszt Date: Thu Jun 20 15:14:21 2024 +0800 update ablation for videomme datasets commit d94f83cb3f08b61a2c75cc4326e58792100605b3 Author: Li Bo Date: Thu Jun 20 13:30:59 2024 +0800 Update README.md commit cab8159ff35db330536c0b6dfb4b0a3b24142209 Author: Li Bo Date: Thu Jun 20 13:30:29 2024 +0800 Update README.md commit 45876652a877a8006b828f32f5cc4660629f9190 Author: kcz358 Date: Thu Jun 20 03:55:30 2024 +0000 Add llava_hf back to registry commit 3463651b8c54d36cd94169e3d376f5ed225a195a Author: kcz358 Date: Thu Jun 20 03:54:33 2024 +0000 Remove handling non-visual loop in llava commit cb0d3f49b72790b081f981e0e6147131542f7f68 Author: Fanyi Pu Date: Thu Jun 20 02:11:18 2024 +0800 update readme commit 813877bfe5ac590cdbe92dd74d18f83a2091f748 Author: Fanyi Pu Date: Wed Jun 19 15:37:52 2024 +0800 to sh script commit a14684b8557d5894976448a5c559ed7a66a6cf16 Author: Fanyi Pu Date: Wed Jun 19 15:37:04 2024 +0800 lint commit d0f8851d42ba31f5da2a7a65e91499db45174dbc Author: Fanyi Pu Date: Wed Jun 19 15:36:48 2024 +0800 small fix commit 63748e9718f287ad433afc90e340b5e17a89c1ed Author: Fanyi Pu Date: Wed Jun 19 15:36:43 2024 +0800 small fix commit 7f1159a1fe04cfb783dc31d4fbdef3bda0ce19e4 Author: Fanyi Pu Date: Wed Jun 19 15:35:05 2024 +0800 update preparation commit 19f9bd621c76a483ff98f8c7eb78f64753da683a Author: Fanyi Pu Date: Wed Jun 19 15:23:24 2024 +0800 docs commit ce6f889ba02d819979c7922f6336cf4f1f718f65 Author: Fanyi Pu Date: Wed Jun 19 15:04:16 2024 +0800 tutorial commit f513c520c2a3dad26d2b2ca5c4ed4db05a493c73 Author: Bo Li Date: Wed Jun 19 06:51:19 2024 +0000 chore: Update dependencies to fix potential risks and improve compatibility commit efb529552c5e4ba039a4cba8e9aa5cb7ba65bf90 Author: kcz358 Date: Wed Jun 19 10:25:58 2024 +0800 Release llava-wilder commit 742651fc9daf97e2f57831ed6e6e7ee7ead7d555 Author: Fanyi Pu Date: Wed Jun 19 07:44:26 2024 +0800 feat: Add support for auto downloading tar format videos commit 511b6259828212fcba954cdeb8cf90d6e5daabf8 Merge: 22a4958e 050b2c37 Author: Bo Li Date: Tue Jun 18 17:01:03 2024 +0000 Merge branch 'main' of https://github.com/EvolvingLMMs-Lab/lmms-eval commit 050b2c370017e9b97475dd6cf01fd051b5ca5c86 Merge: 74facb41 ef306512 Author: Li Bo Date: Tue Jun 18 13:13:38 2024 +0800 Merge pull request #114 from zjysteven/add-tinyllava add tinyllava commit ef306512e5135f76dffa383f600b8733015836e8 Author: Jingyang Zhang Date: Mon Jun 17 17:57:02 2024 -0400 fix typo commit 9bab67732a4238097725deddf867fb1946ffee40 Merge: dbfb2387 74facb41 Author: Jingyang Zhang Date: Sun Jun 16 10:56:05 2024 -0400 Merge branch 'EvolvingLMMs-Lab:main' into add-tinyllava commit 74facb41a826691dfce4458cf1d8659b34fc5bf5 Merge: 8ba192f9 d5df72de Author: Li Bo Date: Sun Jun 16 17:59:19 2024 +0800 Merge pull request #118 from teowu/main Fix the potential risk by PR #117 commit d5df72de2d03108d6b365818ecc3551ac9aa6302 Merge: 5bf59ed2 8ba192f9 Author: Teo (Timothy) Wu Haoning <38696372+teowu@users.noreply.github.com> Date: Sun Jun 16 15:32:13 2024 +0800 Merge branch 'EvolvingLMMs-Lab:main' into main commit 5bf59ed250da98a408a94e214a73caa400cba842 Author: teowu Date: Sun Jun 16 07:27:28 2024 +0000 fix #117, allow auto download with tar format videos commit 98b3955cb808e36303c030aea78eb037d1ec59ce Merge: 
a056f118 be9dada8 Author: teowu Date: Sun Jun 16 07:25:07 2024 +0000 Merge branch 'main' of https://github.com/teowu/lmms-eval into main commit a056f118704eccec86ce32ab86981ce4bc1e1deb Author: teowu Date: Sun Jun 16 07:23:54 2024 +0000 fix #117, allow auto download with tar format videos commit 8ba192f94edf5d99598983445d5faa4f8807c49f Merge: 7cc28907 be9dada8 Author: Li Bo Date: Sat Jun 15 17:30:59 2024 +0800 Merge pull request #117 from teowu/main LongVideoBench for LMMs-Eval commit be9dada8b4189c53c08e1674ab273242cf2f80a0 Merge: 62ea8ceb 7cc28907 Author: Teo (Timothy) Wu Haoning <38696372+teowu@users.noreply.github.com> Date: Sat Jun 15 16:39:20 2024 +0800 Merge pull request #1 from EvolvingLMMs-Lab/main Merge pull request #113 from teowu/main commit 62ea8ceb223ef2b51ebab2bcd50d5cf339c35cfe Author: teowu Date: Sat Jun 15 08:30:11 2024 +0000 LongVideoBench support: image LMMs (idefics2, phi3) and video LMMs (LLaVA-Next-Video-34B) commit 7cc28907edbb4eb58ee1398772a48110ea35dd96 Merge: 4bc7224d ea14cd4b Author: Li Bo Date: Sat Jun 15 14:10:22 2024 +0800 Merge pull request #113 from teowu/main Q-Bench, Q-Bench2, A-Bench commit dbfb23873979f789477f4797ee2d6071e0fd921e Author: Jingyang Date: Fri Jun 14 16:20:42 2024 -0400 add tinyllava commit ea14cd4b361f4c95b3665cbdb95bc51754090eb5 Author: teowu Date: Fri Jun 14 15:01:52 2024 +0000 Add qbench, qbench2, abench; fix phi3v as its current implementation does not support multi-image commit 4bc7224dcd27fe8b288bfc3fed4d7a9da9635658 Merge: 2797987f bf14cb85 Author: Li Bo Date: Fri Jun 14 02:14:43 2024 +0800 Merge pull request #111 from XinrunDu/main add II-Bench commit bf14cb8527b2b7ac438a36567a875168bc02d294 Author: XinrunDu Date: Thu Jun 13 09:37:02 2024 +0000 fix dataset_path commit 6248113f4e11a0ac396d31fa1b032a142fea8cb4 Author: XinrunDu Date: Thu Jun 13 09:32:06 2024 +0000 add II-Bench commit 2797987f5b88b87bd172714b678a75a1d8051826 Merge: 63d82f1f 66d4bb2d Author: Li Bo Date: Thu Jun 13 11:14:47 2024 +0800 Merge pull request #109 from EvolvingLMMs-Lab/pufanyi/update_version [Small Update] Update the version of LMMs-Eval commit 66d4bb2d9c9afbbdea40196d4ad80e214d0b14b6 Author: Fanyi Pu Date: Thu Jun 13 11:13:00 2024 +0800 update version commit 63d82f1ff11eb430d91a15d6788a1f0b4d596850 Author: Li Bo Date: Thu Jun 13 11:04:32 2024 +0800 Update README.md commit 44a33799671cb668f55366d5e5a4ddb051a3a1b4 Merge: 5ed00356 0ce46d08 Author: Li Bo Date: Thu Jun 13 04:00:12 2024 +0800 Merge pull request #105 from tianyu-z/main Include VCR commit 0ce46d088e473d12d63de44f17c67dceab25658c Author: Suyuchen Date: Wed Jun 12 15:56:34 2024 -0400 update README.md commit 46a88d8b0199ed44d2ff459fb372f2e006960cea Merge: 47b13b9b 5ed00356 Author: Suyuchen Date: Wed Jun 12 15:50:26 2024 -0400 merged readme.md commit 47b13b9b320d36ac53b3622557e31239f7c22621 Author: Suyuchen Date: Wed Jun 12 15:30:52 2024 -0400 update aggregation function for vcr_wiki commit 5ed00356676cf5d0ff056cf27d1b519b8e303ff7 Author: Li Bo Date: Thu Jun 13 03:21:42 2024 +0800 Update README.md commit ed8806839db5988ced672bd162b7b046edb4863a Author: Li Bo Date: Thu Jun 13 03:13:59 2024 +0800 Update README.md commit fea3806026932a6e2bd6e538bcc413e33abdf245 Merge: d99a24ab 05dc8e85 Author: Li Bo Date: Thu Jun 13 03:11:49 2024 +0800 Merge pull request #108 from EvolvingLMMs-Lab/internal_main_dev [Upgrade to v0.2] Embracing Video Evaluations with LMMs-Eval commit 05dc8e853eab7c6bc782a1e2662d2efe7422f767 Author: Bo Li Date: Wed Jun 12 15:56:04 2024 +0000 chore: Update lmms-eval to support video evaluations 
for LLaVA models commit cbeee20bc4ffb510a2b23d96cdaf4077be7c2a9e Author: Bo Li Date: Wed Jun 12 15:50:30 2024 +0000 chore: Update lmms-eval to support video evaluations for LLaVA models commit f00d5498b69dd4f7e54c907ac906abc7c128f000 Author: Bo Li Date: Wed Jun 12 15:46:33 2024 +0000 Update image alignment in README.md commit 34156335db74cef9e3f0915d7172fd6b22456c15 Author: Bo Li Date: Wed Jun 12 15:43:16 2024 +0000 Update llava conv_template in lmms_eval/models/llava.py commit 50575a950736bc8fc1e191310314cbb5fdff5720 Author: Bo Li Date: Wed Jun 12 15:39:03 2024 +0000 chore: Update lmms-eval to support video evaluations for LLaVA models commit c9b2252fb8a15dd04252af5e6b4613855afd6ada Author: Bo Li Date: Wed Jun 12 15:33:48 2024 +0000 Bump version to 0.2.0.dev0 commit 465bd4205e8097e9c037b24a3ed08dd6a7694efa Merge: e43bd840 d99a24ab Author: Bo Li Date: Wed Jun 12 15:04:25 2024 +0000 Merge branch 'main' of https://github.com/EvolvingLMMs-Lab/lmms-eval into internal_main_dev commit e43bd840b63eb499856e36d9d2ba45c924abcead Author: Bo Li Date: Wed Jun 12 14:54:06 2024 +0000 chore: Remove unnecessary files and code related to live_bench and sft_eval tasks commit d99a24abd06df10d07e5a4d0ad5030613f92f2e7 Merge: 374590be a66003be Author: Li Bo Date: Wed Jun 12 19:45:57 2024 +0800 Merge pull request #107 from AtsuMiyai/new_task/upd_update update gpt-3.5-turbo version commit a66003befe4175824a1be6ed59f5f5b88c15f792 Author: AtsuMiyai Date: Wed Jun 12 17:05:17 2024 +0900 update gpt-3.5-turbo version commit ee91f272985f32eeb9cd6faa41afdd8eb49cac30 Author: AtsuMiyai Date: Wed Jun 12 16:50:53 2024 +0900 update gpt-3.5-turbo version commit 326b9694fc77398592b8caf3ba0bc2e2bb903813 Author: tianyu-z Date: Mon Jun 10 20:07:40 2024 -0400 include std and confidence interval commit cd050d4a721d01a2ace0cd030cf7f8dc67eb8c4d Author: Suyuchen Date: Mon Jun 10 18:49:47 2024 -0400 update vcr_wiki tasks in README.md commit 205721e0aad76dde30255e56149bbed121883356 Author: Suyuchen Date: Mon Jun 10 18:43:15 2024 -0400 update vcr_wiki tasks commit db8e718b502469e8536ee359c5559de87635ffc7 Author: tianyu-z Date: Mon Jun 10 16:13:58 2024 -0400 include the try-except logic for spacy commit 427dabb790118f538b64e4e5bf6a7aab9689b3d9 Author: Suyuchen Date: Mon Jun 10 15:51:05 2024 -0400 add crossed_text to vcr_wiki output commit 043b483eb55f7be4fea75c9bc0b9b03d251b109b Author: tianyu-z Date: Mon Jun 10 15:47:00 2024 -0400 switch logic commit e1f04db8f58dd10591fde335ea13f74cda7c79bd Author: tianyu-z Date: Mon Jun 10 02:38:21 2024 -0400 modify the form of VCR commit 96e8d9867c9549ab7490f4b12cfeb6a06238e0aa Author: tianyu-z Date: Mon Jun 10 00:10:30 2024 -0400 init include vcr commit 374590be62f988a76cf6704cfe394cd8ae7d4cb6 Merge: 504685e2 cb3b9ce7 Author: Kaichen Zhang - NTU Date: Fri Jun 7 20:25:48 2024 +0800 Merge pull request #101 from Gumpest/main Update conbench in README commit 504685e20b17659b913cf46f3012c16bf429e09d Author: Li Bo Date: Thu Jun 6 15:42:15 2024 +0800 Update README.md commit cb3b9ce71411da862ff01342a9122a3c656ffbd1 Merge: c9793b38 67b64ea4 Author: Yuan Zhang <56063339+Gumpest@users.noreply.github.com> Date: Thu Jun 6 11:22:24 2024 +0800 Merge branch 'EvolvingLMMs-Lab:main' into main commit c9793b3883714f254a700230b7bee781d6110e73 Author: Yuan Zhang Date: Thu Jun 6 11:21:05 2024 +0800 update README commit 67b64ea44a5a39d96c7a196a8a8345a7486bd912 Merge: 8ee7848a 5fd68451 Author: Li Bo Date: Wed Jun 5 23:12:58 2024 +0800 Merge pull request #100 from Gumpest/main add Conbench commit 
5fd684515c55ef643726c1b6c720c7cbd2183ba1 Author: Yuan Zhang Date: Wed Jun 5 21:52:31 2024 +0800 add conbench commit 8ee7848aaa6383aa1f919c3f21199c81db3fff89 Merge: 747e1978 6fefaf7c Author: Li Bo Date: Tue Jun 4 17:09:33 2024 +0800 Merge pull request #95 from AtsuMiyai/new_task/upd add MM-UPD commit 747e19782996065cdce7157ee8c5e15beb5b6c59 Merge: 4854a34d 05843072 Author: Li Bo Date: Tue Jun 4 17:09:04 2024 +0800 Merge pull request #97 from CaraJ7/update Add MathVerse in README.md commit 6fefaf7cea504e35583ee7217449da290295a7a4 Author: AtsuMiyai Date: Tue Jun 4 17:36:39 2024 +0900 update utils.py for leaderboard submission commit 5f4fe360def1c48ea0cb1da6409d192784882308 Author: AtsuMiyai Date: Sun Jun 2 23:28:27 2024 +0900 slightly change query_prompt for the reproduction commit 05843072d608b970bcada1cd0db65a3c80864060 Author: CaraJ7 <1350074492@qq.com> Date: Sun Jun 2 17:05:28 2024 +0800 Add MathVerse in README.md commit 0581ab3cfb362e2024988b46fbbb00324f1233c9 Author: AtsuMiyai Date: Fri May 31 16:09:45 2024 +0900 merge model_specific_prompt_kwargs and dataset_name into each task yaml commit 4854a34d4d37efb5e201f2691ecdb054590cf20b Author: Pu Fanyi Date: Sat May 4 19:23:39 2024 +0800 Group MMMU images into one image (#83) * update * update font * Add matplotlib.font_manager import in utils.py * Refactor font handling in add_order_label function in utils.py * group mmmu --------- Co-authored-by: Li Bo commit d224794c49520f4d28a31862cf977198cd6cbc5e Author: AtsuMiyai Date: Wed May 29 15:15:59 2024 +0900 add upd commit 453e7936424220f02b99517059ca71babfbe5f5a Author: AtsuMiyai Date: Wed May 29 15:03:30 2024 +0900 add upd commit 909edd6769ddcf8a546be4fdd129416687516878 Author: AtsuMiyai Date: Wed May 29 12:52:21 2024 +0900 add upd commit 7c1ac9706cafc4801fa4da181d2f610b7838c7b8 Author: AtsuMiyai Date: Wed May 29 12:50:32 2024 +0900 add upd commit 811301c5280ddd74986645086f026ab730c8848c Author: AtsuMiyai Date: Wed May 29 12:46:58 2024 +0900 add upd commit 71401bafd1d515f704f86ab4817a758542bc4672 Author: AtsuMiyai Date: Wed May 29 12:41:21 2024 +0900 add upd commit 24dc435908d921e9f1a5706e3141b12e5d838d18 Author: Bo Li Date: Mon May 27 10:17:32 2024 +0000 fix compatibility issue of older version llava commit 616edf43731415b35f0f5e97748ed2e017a2891d Author: Bo Li Date: Mon May 27 09:32:26 2024 +0000 [Fix] import issues of multilingual llava and olympiadbench commit 4c5a99e21a63fb0ee1c7d15546d18066e1d9894b Merge: 45c05b2b b05c3e22 Author: Li Bo Date: Mon May 27 14:19:53 2024 +0800 Merge pull request #87 from vfragoso/vifragos/phi3v Adding microsoft/Phi-3-vision-128k-instruct model. commit b05c3e222fabd308dd7af4e04c1c6a0812962fe6 Author: Victor Fragoso Date: Fri May 24 16:36:37 2024 +0000 Adding documentation of Phi3v class. commit c2008971308ce8168d57c24d00b725832f099244 Author: Victor Fragoso Date: Fri May 24 16:25:02 2024 +0000 Adding prompt arguments for Phi3v on MathVista-TestMini commit 7f9fb6bcc6cd24a7b8011b8753d0ea98cc2451fd Author: Victor Fragoso Date: Fri May 24 13:24:16 2024 +0000 Adding Phi3v model. 
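For illustration, the "Group MMMU images into one image (#83)" change listed above stitches a question's multiple images into a single canvas and stamps an order label on each one so the prompt can refer to them by index. Below is a minimal sketch of that idea using PIL only; the helper name, layout, and label style are assumptions for illustration, not the repository's actual utils.py code.

```python
# Illustrative sketch only: combine several PIL images into one canvas and
# label each with its 1-based order, similar in spirit to the MMMU grouping
# change (#83). Layout, padding, and label style are assumptions.
from typing import List

from PIL import Image, ImageDraw


def group_images(images: List[Image.Image], padding: int = 10) -> Image.Image:
    widths = [img.width for img in images]
    heights = [img.height for img in images]
    canvas = Image.new(
        "RGB",
        (sum(widths) + padding * (len(images) + 1), max(heights) + 2 * padding),
        "white",
    )
    x = padding
    for idx, img in enumerate(images):
        canvas.paste(img, (x, padding))
        # Stamp the order label so the text prompt can reference "image <n>".
        ImageDraw.Draw(canvas).text((x + 4, padding + 4), f"<{idx + 1}>", fill="red")
        x += img.width + padding
    return canvas
```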
commit 45c05b2b2bece76e06849a52a0d034f9c0ac2367 Author: kcz358 Date: Thu May 23 03:47:36 2024 +0000 Set printing info for llava_hf to debug level commit 53f013ed8278776551ca992562253387cc9968d2 Author: kcz358 Date: Thu May 23 03:41:39 2024 +0000 Fix pope random name in pope full commit 22520a95f13334b75eee0cf0387151067a6bf516 Author: kcz358 Date: Thu May 23 03:41:14 2024 +0000 Add separated pope tasks by category commit d1eefb1565014b47287ffa6b350229062f8f602f Author: kcz358 Date: Thu May 9 08:36:02 2024 +0000 Update gitignore commit b2b4dbd2dc13432c79208db35abf7f55c97f1790 Author: kcz358 Date: Mon May 20 07:45:11 2024 +0000 Comment out Spice in caption task so that don't need to download stanford nlp model commit 662f05ce4c62a46a83f819d3a5925a9bd20059b5 Author: kcz358 Date: Mon May 20 03:13:13 2024 +0000 Comment out parse result in xcomposer commit 09329322916bfbb604d72ddaf50441a0947f8805 Author: kcz358 Date: Thu May 16 03:55:39 2024 +0000 Fix instructblip qformer size mismatch and multi-images problem commit 557a6a3b15e07e506bc05e2cc76ff6a2f8c93964 Author: kcz358 Date: Thu May 16 03:11:41 2024 +0000 Remove redundant code in fuyu commit 6aeb5504e74ed1980b53700d8e4d4dcf7d1b38fc Author: kcz358 Date: Thu May 16 01:45:24 2024 +0000 Fix idefics2 llava in the wild bugs commit aea80e6a71f716951353e1e5d68380243396b4d6 Author: kcz358 Date: Wed May 15 11:07:35 2024 +0000 Better task list_with_num commit 3c12a080d66b9c38f615b961befca7c30f82fa39 Author: Li Bo Date: Sat May 18 02:35:52 2024 +0800 Update LICENSE commit 82317a635a4978b32e095a06cc295d0ae23661c2 Author: Li Bo Date: Sat May 18 02:29:09 2024 +0800 Update LICENSE commit a8bba1cdb51061a0d27bf9a98cca1505b5c58ea5 Author: Li Bo Date: Sat May 18 02:28:03 2024 +0800 Create LICENSE commit caa5893b5fd2c1d32c72b97f371ccd9a8d9ec3a0 Merge: c0944486 423b0060 Author: Li Bo Date: Mon May 13 11:45:26 2024 +0800 Merge pull request #73 from EvolvingLMMs-Lab/kc/qwen_vl_api [Feat] Add qwen vl api commit c09444860362a136f17641f8b2a1f91c2bbc3715 Author: kcz358 Date: Sat May 11 06:11:19 2024 +0000 Fix llava_hf image tokens number issue commit 64f07e497f53e5bcbe9e8fb5830cc7a1daaf7ff1 Author: kcz358 Date: Thu May 9 02:04:10 2024 +0000 Fix endless warning for llava_hf generation commit 8aaa828108da8514dd9cd23a9d6d83a8b67f2d65 Author: Bo Li Date: Thu May 2 06:13:56 2024 +0000 Add model_name parameter to Llava constructor commit 7847dc4d8efe60605102414bb071b1da9851228e Author: kcz358 Date: Tue May 7 03:15:59 2024 +0000 Parse result for llava_hf 1.6 commit 3e56b4f92db39a2ce92903b0c43a34f1d14d59ec Author: kcz358 Date: Tue May 7 03:09:56 2024 +0000 Fix llava_hf generation for 1.6 commit fa3ff92b07ea5aaa633a2039818c310744f84d07 Author: kcz358 Date: Mon May 6 08:32:57 2024 +0000 Fix llava conv template for llama3 commit 423b00606aa77fd6b324c19e3d480b73ab852db6 Author: kcz358 Date: Sun May 5 07:54:52 2024 +0000 Add qwen vl api commit b7fd7a9f7aa3c0e1e50374047dfffc46a7462b90 Merge: 986139a9 c5a130b6 Author: Li Bo Date: Sun May 5 13:19:48 2024 +0800 Merge pull request #59 from EvolvingLMMs-Lab/add_idefics2 add idefics2 commit 986139a9a31154679bdea029b09639f84712db27 Merge: b46239ca 8d3526c0 Author: Li Bo Date: Fri May 3 01:18:18 2024 +0800 Merge pull request #36 from cocoshe/main [Fix] repr llava doc commit b46239cabab7b545ec99d9eae6c851e531b18374 Merge: bc69a744 373265f2 Author: Li Bo Date: Fri May 3 01:17:34 2024 +0800 Merge pull request #56 from gagan3012/main Multilingual LLava bench commit bc69a744d2cffeb06eba62e843bcc7869e27613a Merge: eef3aeb6 626e8a91 Author: Li Bo 
Date: Fri May 3 01:12:14 2024 +0800 Merge pull request #70 from hunterheiden/hsh/new_task/WebSRC Bugfix: WebSRC should be token-level F1 NOT character-level commit 626e8a91a4af2dd5dd774fc130cc2f4d74b2bc37 Author: Hunter Heidenreich Date: Thu May 2 09:31:03 2024 -0400 Bugfix: WebSRC should be token-level F1 NOT character-level commit eef3aeb6ab589bb1d5045af5b5c1984a69402d19 Merge: c4e9dd9f 9bca4413 Author: Li Bo Date: Thu May 2 14:38:17 2024 +0800 Merge pull request #69 from hunterheiden/hsh/new_task/WebSRC [New Task] WebSRC (multimodal Q&A on web screenshots) commit 9bca441376325173128e5c50087f068e519c48da Author: Hunter Heidenreich Date: Wed May 1 11:07:29 2024 -0400 Add code to enable compilation of submission for WebSRC test split commit 7687495b1ed552eeba088cb9ad5aaf1170e7fff9 Author: Hunter Heidenreich Date: Wed May 1 10:47:32 2024 -0400 Draft and validate websrc eval on dev split commit 4eebd3e5d7ab3b8c3116eea57318db72d2ce32bb Author: Hunter Heidenreich Date: Wed May 1 10:46:54 2024 -0400 Update main README with new task names commit 35fe80b67656114a8824eb59574089663bdc4c9a Author: Hunter Heidenreich Date: Wed May 1 10:46:20 2024 -0400 Draft README for WebSRC commit 955bd0635cc6c14a96ad869f1002e6dbefdc5071 Author: Hunter Heidenreich Date: Tue Apr 30 10:16:21 2024 -0400 Init webSRC commit c4e9dd9f6e40e8586587c4a75987aa109a37f14b Merge: d8a3a99f 319afccb Author: Li Bo Date: Fri Apr 26 14:37:22 2024 +0800 Merge pull request #63 from hunterheiden/hsh/new_task/screenspot New Task: ScreenSpot - Grounding (REC) and instruction generation (REG) on screens commit 319afccbe713ddf40a8a6fa28501e64c0ad34725 Author: Hunter Heidenreich Date: Thu Apr 25 11:44:34 2024 -0400 slight update commit 2f3811ca1bbad6a441016b05fde09a571900fca8 Author: Hunter Heidenreich Date: Thu Apr 25 11:41:04 2024 -0400 Add README file specific to ScreenSpot commit 28962cbe83631ec5d6481aaea4907a7c96fec848 Author: Hunter Heidenreich Date: Wed Apr 24 11:52:33 2024 -0400 Update README to reflect new tasks commit e457cfb4f2d6869e8367d6d5b03ad25ee4acc363 Author: Hunter Heidenreich Date: Tue Apr 23 18:33:16 2024 -0400 Create ScreenSpot on clean branch commit d8a3a99ff6142fe101fa3c188cc7f29593c44345 Merge: 3dcd0158 ed171293 Author: Li Bo Date: Tue Apr 23 10:34:03 2024 +0800 Merge pull request #61 from tupini07/patch-1 Fix typo in Qwen-VL that was causing "reference before assignment" commit ed171293d1e82075c5c6a847fc91ecbfd45cf89f Author: Andrea Tupini Date: Mon Apr 22 14:56:41 2024 -0600 refactor query construction for clarity commit cd874201c46f32a2903ddffae85f9db73e14adfd Author: Andrea Tupini Date: Mon Apr 22 14:54:29 2024 -0600 convert contexts to list if necessary and remove unnecessary construction of `questions` commit 85573674e90c8d505312ba18c5102e0051255078 Author: Andrea Tupini Date: Mon Apr 22 14:47:33 2024 -0600 Fix typo in qwen_vl that was causing "reference before assignment" commit 3dcd01582b719555bcf8eb25d91cc5e42abd2c5f Merge: 95df9fee 743673a1 Author: Li Bo Date: Sat Apr 20 22:03:16 2024 +0800 Merge pull request #60 from CaraJ7/main Add MathVerse commit 743673a1419b6e729e18c96f148745cc739d4c71 Merge: c1a54721 95df9fee Author: CaraJ7 <1350074492@qq.com> Date: Sat Apr 20 21:49:02 2024 +0800 Merge branch 'main' of https://github.com/EvolvingLMMs-Lab/lmms-eval commit c1a5472135c3b84061b64d997ab50dda0412ba4f Author: CaraJ7 <1350074492@qq.com> Date: Sat Apr 20 21:45:34 2024 +0800 Add MathVerse commit 373265f24e7a89cbd49ab724a2e388cc0930be78 Author: Gagan Bhatia <49101362+gagan3012@users.noreply.github.com> Date: Fri 
Apr 12 17:21:39 2024 -0700 Add files via upload commit d8530514a5ef9378d2adeaceb228b60ec25a6718 Author: Gagan Bhatia <49101362+gagan3012@users.noreply.github.com> Date: Fri Apr 12 17:19:49 2024 -0700 Create README.md commit 22a4958e993463edff352ac033014f9a485706cc Author: Bo Li Date: Thu Apr 4 17:12:43 2024 +0000 [WIP] adding mmbench dev evaluation (#75) * WIP * Update GPT evaluation model name and sys prompt * 🛠️ Scale accuracy to percentage The accuracy value is now multiplied by 100 in the aggregation function to represent it as a percentage. Regarding the evaluation process, `math` module importation and refactoring reduce progress log verbosity by logging every 100 evaluations instead of 10. It prevents potential logging overflow. Handling of NaN values is added to ensure 'default_value' is set in case of missing data, avoiding errors in split, category, and l2-category assignments. Finally, reporting of categorical and l2-categorical accuracies is streamlined through a new `calculate_hit_rates` function, improving code readability and maintenance. Issue refs: #1427, #1533 * Update GPT evaluation model name and API configuration * Refactor MMBench_Evaluator class to handle missing columns * Add print statements for detailed results in MMBench-CN(CC), MMBench-CN(Dev), and MMBench-EN(Dev) evaluations * Refactor MMBench-CN and MMBench-EN evaluation functions * 🔄 Refactor result processing and logging logic - Simplified the result processing functions across different utility modules (`cc_utils.py`, `cn_utils.py`, `en_utils.py`) to unify the handling of multiple-choice options. Now, all options ("A" to "E") are dynamically added to the result data, and default to "nan" if not provided in the document. - Removed redundant keys directly from the process results dict creation to avoid clutter and align with the new dynamic addition of options. - In `mmbench_evals.py`, removed the unnecessary check for all splits being 'dev' and streamlined the evaluation loop by eliminating the progress bar (tqdm) for a cleaner log output. - Commented-out code and verbose logging during evaluation, which may have interfered with performance, has been removed for a more efficient and less intrusive logging experience. This cleanup reduces redundancy in the codebase and improves evaluation performance. Refs #2045 --------- Co-authored-by: Bo Li (cherry picked from commit a19278c2ea6ddcbca64d3cc7f4efec7fe5775121) commit 8d3526c0869f0ad7747ff6bb02441140792b461c Author: cocoshe <1228759711@qq.com> Date: Thu Mar 28 13:38:36 2024 +0800 fix doc * feat: Add LlavaOneVision model to available models chore: Update sqlitedict dependency to version 2.1.0 * Revert "Squashed commit of the following:" This reverts commit 11b00999df3c43cb225482e030b791b2d454124c. * Refactor available models in lmms_eval Remove duplicate entries for "llava_hf", "llava_onevision", and "longva" in the AVAILABLE_MODELS dictionary in lmms_eval/models/__init__.py. * fix: Handle import errors in lmms_eval models/__init__.py The code changes in this commit fix the handling of import errors in the lmms_eval/models/__init__.py file. Previously, when an import error occurred, the code simply ignored it. This commit updates the code to log an error message using the logger module when an import error occurs. This commit also removes duplicate entries for "llava_hf", "llava_onevision", and "longva" in the AVAILABLE_MODELS dictionary. 
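For illustration, the import-error fix described above replaces silently ignored failures with a logged error whenever a model backend cannot be imported, and keeps AVAILABLE_MODELS free of duplicate entries. A rough sketch of that pattern follows (it assumes loguru is installed); the mapping is an abbreviated example and the real lmms_eval/models/__init__.py differs in detail.

```python
# Sketch of the pattern described above; not the actual module contents.
import importlib

from loguru import logger

AVAILABLE_MODELS = {  # abbreviated example mapping: module name -> class name
    "llava": "Llava",
    "llava_hf": "LlavaHf",
    "llava_onevision": "LlavaOneVision",
    "longva": "LongVA",
}

for module_name, class_name in AVAILABLE_MODELS.items():
    try:
        importlib.import_module(f"lmms_eval.models.{module_name}")
    except ImportError as e:
        # Previously the error was swallowed; now it is surfaced in the logs.
        logger.error(f"Failed to import {class_name} from {module_name}: {e}")
```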
Recent user commits: - Refactor available models in lmms_eval - Revert "Squashed commit of the following:" - feat: Add LlavaOneVision model to available models - chore: Update sqlitedict dependency to version 2.1.0 * fix: Handle import errors in lmms_eval models/__init__.py * chore: Remove unused imports in lmms_eval/models/__init__.py and lmms_eval/tasks/vcr_wiki/utils.py * Remove unused imports in lmms_eval/tasks/vcr_wiki/utils.py * chore: Update lmms_eval/tasks/vcr_wiki/utils.py This commit updates the `lmms_eval/tasks/vcr_wiki/utils.py` file. It removes unused imports and fixes the condition for loading Spacy models based on the `load_package` value in the config file. Additionally, it adds a debug log message when the Spacy models are not loaded due to `load_package` being set to False. Remove unused imports in `lmms_eval/tasks/vcr_wiki/utils.py` * feat: Add new subtasks to overall score calculation The code changes in this commit add new subtasks to the overall score calculation in the `overall_score` function. The subtasks "ScanQA", "BLINK", "MathVerse", "SciVerse", and "Mantis" are included in the `categories` dictionary. This ensures that the scores for these subtasks are calculated and included in the evaluation results. Remove unused imports and update subtask categories in `utils.py` * feat: Add new subtasks to overall score calculation * chore: Update lmms_eval/tasks/llava_interleave_bench/_default_template_interleave_yaml Update the image aspect ratio in the default template for the llava_interleave_bench task. Change the value of "image_aspect_ratio" from "original" to "pad". This ensures that the generated images have a padded aspect ratio. * if no response directly return 0 * Squashed commit of the following: commit b2a009b6bbf8353172f5a1dd9c29ea1f67610c02 Author: Pu Fanyi Date: Mon Jul 15 19:12:25 2024 -0700 if no response directly return 0 (#142) commit 5fc5f2f5acf454fc99448b0d62eb52b4bffba0d5 Author: Kaichen Zhang - NTU Date: Tue Jul 16 10:12:11 2024 +0800 Add Muirbench (#143) * handle gen kwargs in internvl2 * Add muirbench * Add files via upload (cherry picked from commit 557083a156c3dd67ac79e22b4202e9b69b6b00f4) * update --------- Co-authored-by: Fanyi Pu Co-authored-by: Yan Shu <570533048@qq.com> commit b2a009b6bbf8353172f5a1dd9c29ea1f67610c02 Author: Pu Fanyi Date: Mon Jul 15 19:12:25 2024 -0700 if no response directly return 0 (#142) commit 5fc5f2f5acf454fc99448b0d62eb52b4bffba0d5 Author: Kaichen Zhang - NTU Date: Tue Jul 16 10:12:11 2024 +0800 Add Muirbench (#143) * handle gen kwargs in internvl2 * Add muirbench commit 4f8db1d37b1f824432927e74d6d82e06bb5aaed1 Author: Pu Fanyi Date: Fri Jul 12 17:26:50 2024 -0700 Upload live_bench results (#140) * upload results * add a readme * chore: Update upload_results.py script to use shell syntax * Update upload_results.py * Update upload_results.py commit 18f3812c4f9af2e49af6b50e8afe7f607b8a75d6 Author: Pu Fanyi Date: Wed Jul 10 18:13:43 2024 -0700 Load tasks only one time (#139) * chore: Initialize tasks only once to avoid re-initialization * chore: Initialize tasks only once to avoid re-initialization * chore: Refactor task initialization to avoid re-initialization * chore: Update task initialization to fix include_path issue * chore: Update task initialization to fix include_path issue * Merge pull request #158 from skyil7/main Add MMStar * LiveBench July (#146) * claude auto detect json mode * extract information * use claude to generate * fix bugs * fix * generate data * chore: Update dataset name and version for 
live_bench task * gpt-4-turbo => gpt-4o * chore: Update dataset capture settings in create_dataset.py * everything use gpt-4o * websites * livebench_july * Refactor code to simplify data assignment in example.ipynb * chore: Update dataset name for live_bench task (cherry picked from commit 2e7fd3f7a01cfd24afd5a70c2ee21ce196823aec) * chore: Update dataset name and version for live_bench task --------- Co-authored-by: cocoshe <1228759711@qq.com> Co-authored-by: Bo Li Co-authored-by: Gagan Bhatia <49101362+gagan3012@users.noreply.github.com> Co-authored-by: CaraJ7 <1350074492@qq.com> Co-authored-by: Andrea Tupini Co-authored-by: Hunter Heidenreich Co-authored-by: kcz358 Co-authored-by: Victor Fragoso Co-authored-by: AtsuMiyai Co-authored-by: Pu Fanyi Co-authored-by: Yuan Zhang Co-authored-by: Yuan Zhang <56063339+Gumpest@users.noreply.github.com> Co-authored-by: tianyu-z Co-authored-by: Suyuchen Co-authored-by: XinrunDu Co-authored-by: teowu Co-authored-by: Jingyang Co-authored-by: Teo (Timothy) Wu Haoning <38696372+teowu@users.noreply.github.com> Co-authored-by: choiszt Co-authored-by: Lorenzo Mammana Co-authored-by: Dannoopsy Co-authored-by: Dannoopsy <63581325+Dannoopsy@users.noreply.github.com> Co-authored-by: lscpku Co-authored-by: ByteDance Co-authored-by: Yan Shu <570533048@qq.com> Co-authored-by: Hongyuan Dong <45926533+Dousia@users.noreply.github.com> --- .gitignore | 2 + README.md | 4 +- docs/current_tasks.md | 1 + lmms_eval/__main__.py | 1 - lmms_eval/api/samplers.py | 4 +- lmms_eval/evaluator.py | 18 +- lmms_eval/models/__init__.py | 2 +- lmms_eval/models/llava.py | 3 + lmms_eval/models/llava_onevision.py | 38 +- lmms_eval/models/tinyllava.py | 2 + .../video_chatgpt/model/video_chatgpt.py | 1 - lmms_eval/models/vila.py | 2 - lmms_eval/models/xcomposer2_4KHD.py | 1 + lmms_eval/tasks/__init__.py | 2 +- .../_default_template_detailcaps_yaml | 3 + lmms_eval/tasks/detailcaps/detailcaps.yaml | 46 ++ lmms_eval/tasks/detailcaps/utils.py | 198 +++++++ lmms_eval/tasks/gqa_ru/gqa_ru.yaml | 29 + lmms_eval/tasks/gqa_ru/utils.py | 23 + .../llava-in-the-wild/llava-in-the-wild.yaml | 2 +- .../_default_template_wilder_yaml | 2 +- .../tasks/mix_evals/_default_template_yaml | 16 - .../tasks/mix_evals/mix_evals_video2text.yaml | 5 - .../mix_evals_video2text_freeform.yaml | 22 - .../mix_evals/mix_evals_video2text_mc.yaml | 31 -- .../mix_evals_video2text_openended.yaml | 22 - .../mix_evals_video2text_openended_2nd.yaml | 23 - lmms_eval/tasks/mix_evals/utils.py | 284 ---------- lmms_eval/tasks/mlvu/utils.py | 1 - .../mmbench/_default_template_mmbench_ru_yaml | 24 + lmms_eval/tasks/mmbench/mmbench.yaml | 1 + lmms_eval/tasks/mmbench/mmbench_ru_dev.yaml | 10 + lmms_eval/tasks/mmbench/ru_utils.py | 128 +++++ lmms_eval/tasks/mmstar/mmstar.yaml | 37 ++ lmms_eval/tasks/mmstar/utils.py | 120 ++++ lmms_eval/tasks/mmvet/mmvet.yaml | 8 +- lmms_eval/tasks/muirbench/muirbench.yaml | 1 - lmms_eval/tasks/videomme/utils.py | 103 +++- .../tasks/vitatecs/_default_template_yaml | 9 + lmms_eval/tasks/vitatecs/_vitatecs.yaml | 8 + lmms_eval/tasks/vitatecs/utils.py | 225 ++++++++ .../vitatecs/vitatecs_compositionality.yaml | 13 + .../tasks/vitatecs/vitatecs_direction.yaml | 13 + .../tasks/vitatecs/vitatecs_intensity.yaml | 13 + .../tasks/vitatecs/vitatecs_localization.yaml | 13 + .../tasks/vitatecs/vitatecs_sequence.yaml | 13 + lmms_eval/tasks/vitatecs/vitatecs_type.yaml | 13 + pyproject.toml | 4 +- tools/live_bench/example.ipynb | 481 ---------------- tools/live_bench/live_bench/api/live_bench.py | 20 - 
.../data_generator/live_bench_data.py | 139 ----- .../live_bench/data_generator/qa_generator.py | 522 ------------------ .../live_bench/data_generator/response.py | 12 - .../live_bench/data_generator/score_getter.py | 157 ------ .../live_bench/data_generator/utils/claude.py | 68 --- .../live_bench/data_generator/utils/gemini.py | 37 -- tools/live_bench/live_bench/driver/.gitignore | 1 - .../live_bench/driver/load_driver.py | 71 --- .../live_bench/screen_shoter/screen.py | 30 - .../live_bench/screen_shoter/screen_shoter.py | 141 ----- .../live_bench/websites/load_website.py | 34 -- .../live_bench/live_bench/websites/website.py | 62 --- .../live_bench/websites/website_list.yaml | 78 --- 63 files changed, 1115 insertions(+), 2282 deletions(-) create mode 100644 lmms_eval/tasks/detailcaps/_default_template_detailcaps_yaml create mode 100644 lmms_eval/tasks/detailcaps/detailcaps.yaml create mode 100644 lmms_eval/tasks/detailcaps/utils.py create mode 100644 lmms_eval/tasks/gqa_ru/gqa_ru.yaml create mode 100644 lmms_eval/tasks/gqa_ru/utils.py delete mode 100644 lmms_eval/tasks/mix_evals/_default_template_yaml delete mode 100644 lmms_eval/tasks/mix_evals/mix_evals_video2text.yaml delete mode 100644 lmms_eval/tasks/mix_evals/mix_evals_video2text_freeform.yaml delete mode 100644 lmms_eval/tasks/mix_evals/mix_evals_video2text_mc.yaml delete mode 100644 lmms_eval/tasks/mix_evals/mix_evals_video2text_openended.yaml delete mode 100644 lmms_eval/tasks/mix_evals/mix_evals_video2text_openended_2nd.yaml delete mode 100644 lmms_eval/tasks/mix_evals/utils.py create mode 100644 lmms_eval/tasks/mmbench/_default_template_mmbench_ru_yaml create mode 100644 lmms_eval/tasks/mmbench/mmbench_ru_dev.yaml create mode 100644 lmms_eval/tasks/mmbench/ru_utils.py create mode 100644 lmms_eval/tasks/mmstar/mmstar.yaml create mode 100644 lmms_eval/tasks/mmstar/utils.py create mode 100644 lmms_eval/tasks/vitatecs/_default_template_yaml create mode 100755 lmms_eval/tasks/vitatecs/_vitatecs.yaml create mode 100644 lmms_eval/tasks/vitatecs/utils.py create mode 100644 lmms_eval/tasks/vitatecs/vitatecs_compositionality.yaml create mode 100644 lmms_eval/tasks/vitatecs/vitatecs_direction.yaml create mode 100644 lmms_eval/tasks/vitatecs/vitatecs_intensity.yaml create mode 100644 lmms_eval/tasks/vitatecs/vitatecs_localization.yaml create mode 100644 lmms_eval/tasks/vitatecs/vitatecs_sequence.yaml create mode 100644 lmms_eval/tasks/vitatecs/vitatecs_type.yaml delete mode 100644 tools/live_bench/example.ipynb delete mode 100644 tools/live_bench/live_bench/api/live_bench.py delete mode 100644 tools/live_bench/live_bench/data_generator/live_bench_data.py delete mode 100644 tools/live_bench/live_bench/data_generator/qa_generator.py delete mode 100644 tools/live_bench/live_bench/data_generator/response.py delete mode 100644 tools/live_bench/live_bench/data_generator/score_getter.py delete mode 100644 tools/live_bench/live_bench/data_generator/utils/claude.py delete mode 100644 tools/live_bench/live_bench/data_generator/utils/gemini.py delete mode 100644 tools/live_bench/live_bench/driver/.gitignore delete mode 100644 tools/live_bench/live_bench/driver/load_driver.py delete mode 100644 tools/live_bench/live_bench/screen_shoter/screen.py delete mode 100644 tools/live_bench/live_bench/screen_shoter/screen_shoter.py delete mode 100644 tools/live_bench/live_bench/websites/load_website.py delete mode 100644 tools/live_bench/live_bench/websites/website.py delete mode 100644 tools/live_bench/live_bench/websites/website_list.yaml diff --git 
a/.gitignore b/.gitignore index 2557ab1bd..edf2efef2 100755 --- a/.gitignore +++ b/.gitignore @@ -13,6 +13,7 @@ temp __pycache__ .ipynb_checkpoints temp +.DS_STORE # IPython profile_default/ ipython_config.py @@ -37,3 +38,4 @@ llava-video/ Video-MME/ VATEX/ lmms_eval/tasks/vatex/__pycache__/utils.cpython-310.pyc +lmms_eval/tasks/mlvu/__pycache__/utils.cpython-310.pyc \ No newline at end of file diff --git a/README.md b/README.md index 2bf7a26e3..7be1c6c73 100755 --- a/README.md +++ b/README.md @@ -12,7 +12,9 @@ ## Annoucement -- [2024-06] 🎬🎬 The `lmms-eval/v0.2` has been upgraded to support video evaluations for video models like LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/) for more details +- [2024-07] 👨‍💻👨‍💻 The `lmms-eval/v0.2.1` has been upgraded to support more models, including [LongVA](https://github.com/EvolvingLMMs-Lab/LongVA), [InterVL-2](https://github.com/OpenGVLab/InternVL), [VILA](https://github.com/NVlabs/VILA), and many more evaluation tasks, e.g. [Details Captions](https://github.com/EvolvingLMMs-Lab/lmms-eval/pull/136), [MLVU](https://arxiv.org/abs/2406.04264), [WildVision-Bench](https://huggingface.co/datasets/WildVision/wildvision-arena-data), [VITATECS](https://github.com/lscpku/VITATECS) and [LLaVA-Interleave-Bench](https://llava-vl.github.io/blog/2024-06-16-llava-next-interleave/). + +- [2024-06] 🎬🎬 The `lmms-eval/v0.2.0` has been upgraded to support video evaluations for video models like LLaVA-NeXT Video and Gemini 1.5 Pro across tasks such as EgoSchema, PerceptionTest, VideoMME, and more. Please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/) for more details - [2024-03] 📝📝 We have released the first version of `lmms-eval`, please refer to the [blog](https://lmms-lab.github.io/posts/lmms-eval-0.1/) for more details diff --git a/docs/current_tasks.md b/docs/current_tasks.md index 1622e9602..6e0e52381 100644 --- a/docs/current_tasks.md +++ b/docs/current_tasks.md @@ -50,6 +50,7 @@ - MMMU (mmmu) - MMMU Validation (mmmu_val) - MMMU Test (mmmu_test) +- MMStar (mmstar) - MMUPD (mmupd) - MMUPD Base (mmupd_base) - MMAAD Base (mmaad_base) diff --git a/lmms_eval/__main__.py b/lmms_eval/__main__.py index 996e1e539..ef0e2f1c7 100755 --- a/lmms_eval/__main__.py +++ b/lmms_eval/__main__.py @@ -165,7 +165,6 @@ def cli_evaluate(args: Union[argparse.Namespace, None] = None) -> None: # reset logger eval_logger.remove() eval_logger.add(sys.stdout, colorize=True, level=args.verbosity) - # eval_logger.add(sys.stderr, level=args.verbosity) eval_logger.info(f"Verbosity set to {args.verbosity}") os.environ["TOKENIZERS_PARALLELISM"] = "false" diff --git a/lmms_eval/api/samplers.py b/lmms_eval/api/samplers.py index f77065e8e..2cecfe224 100755 --- a/lmms_eval/api/samplers.py +++ b/lmms_eval/api/samplers.py @@ -37,7 +37,9 @@ def get_context(self, doc, num_fewshot): + ( str(self.doc_to_target(doc)[0]) if type(self.doc_to_target(doc)) is list - else self.doc_to_target(doc) if (self.config.doc_to_choice is None or type(self.doc_to_target(doc)) is str) else str(self.doc_to_choice(doc)[self.doc_to_target(doc)]) + else self.doc_to_target(doc) + if (self.config.doc_to_choice is None or type(self.doc_to_target(doc)) is str) + else str(self.doc_to_choice(doc)[self.doc_to_target(doc)]) ) for doc in selected_docs ] diff --git a/lmms_eval/evaluator.py b/lmms_eval/evaluator.py index 0e7d05c92..48397a0a2 100755 --- a/lmms_eval/evaluator.py +++ 
b/lmms_eval/evaluator.py @@ -1,3 +1,5 @@ +import os +import time import random import itertools import json @@ -423,6 +425,12 @@ def evaluate( # Ensure all ranks wait for rank 0 to finish aggregation torch.distributed.barrier() + # Synchronize processes with a temp file in case the evluation metric requires gpus + # TODO: fix barriers' taking up gpu computation + os.makedirs(cli_args.output_path, exist_ok=True) + if os.path.exists(f"{cli_args.output_path}/rank{int(os.environ.get('RANK', 0))}_metric_eval_done.txt"): + os.remove(f"{cli_args.output_path}/rank{int(os.environ.get('RANK', 0))}_metric_eval_done.txt") + if lm.rank == 0: ### Get task ordering for correct sample-wide aggregation group_to_task = {} @@ -623,8 +631,12 @@ def print_tasks(task_hierarchy, task_order, task_version, task_group_alias): } if log_samples: results_dict["samples"] = dict(samples) + else: + results_dict = None - return results_dict + with open(f"{cli_args.output_path}/rank{int(os.environ.get('RANK', 0))}_metric_eval_done.txt", "w") as f: + f.write(f"rank {int(os.environ.get('RANK', 0))} eval done") + while len([file for file in os.listdir(cli_args.output_path) if file.endswith("metric_eval_done.txt")]) < lm._world_size: + time.sleep(1) - else: - return None + return results_dict diff --git a/lmms_eval/models/__init__.py b/lmms_eval/models/__init__.py index 7ba7264c5..d11149a93 100755 --- a/lmms_eval/models/__init__.py +++ b/lmms_eval/models/__init__.py @@ -35,8 +35,8 @@ "mplug_owl_video": "mplug_Owl", "phi3v": "Phi3v", "tinyllava": "TinyLlava", - "llava_hf": "LlavaHf", "llava_onevision": "LlavaOneVision", + "llava_hf": "LlavaHf", "longva": "LongVA", "vila": "VILA", "xcomposer2d5": "XComposer2D5", diff --git a/lmms_eval/models/llava.py b/lmms_eval/models/llava.py index 7528e3569..7d6420ba6 100755 --- a/lmms_eval/models/llava.py +++ b/lmms_eval/models/llava.py @@ -58,6 +58,7 @@ def __init__( device_map="cuda:0", conv_template="vicuna_v1", use_cache=True, + tie_weights: bool = True, truncate_context=False, # whether to truncate the context in generation, set it False for LLaVA-1.6 customized_config=None, # ends in json **kwargs, @@ -97,6 +98,8 @@ def __init__( self._tokenizer, self._model, self._image_processor, self._max_length = load_pretrained_model(pretrained, None, model_name, device_map=self.device_map, **llava_model_args) self._config = self._model.config self.model.eval() + if tie_weights: + self.model.tie_weights() self.truncation = truncation self.batch_size_per_gpu = int(batch_size) diff --git a/lmms_eval/models/llava_onevision.py b/lmms_eval/models/llava_onevision.py index 856aa36cc..4347bc519 100644 --- a/lmms_eval/models/llava_onevision.py +++ b/lmms_eval/models/llava_onevision.py @@ -83,7 +83,7 @@ def __init__( customized_config: Optional[str] = None, # ends in json max_frames_num: Optional[int] = 32, mm_spatial_pool_stride: Optional[int] = 2, - mm_spatial_pool_mode: Optional[str] = "average", + mm_spatial_pool_mode: Optional[str] = "bilinear", token_strategy: Optional[str] = "single", # could be "single" or "multiple", "multiple" denotes adding multiple tokens for each frame video_decode_backend: str = "decord", **kwargs, @@ -183,7 +183,7 @@ def __init__( elif accelerator.num_processes == 1 and device_map == "auto": eval_logger.info(f"Using {accelerator.num_processes} devices with tensor parallelism") self._rank = 0 - self._word_size = 1 + self._world_size = 1 else: eval_logger.info(f"Using single device: {self._device}") @@ -262,6 +262,37 @@ def loglikelihood(self, requests: List[Instance]) -> 
List[Tuple[float, bool]]: pbar = tqdm(total=len(requests), disable=(self.rank != 0), desc="Model Responding") for contexts, doc_to_target, doc_to_visual, doc_id, task, split in [reg.args for reg in requests]: + if len(visual) > 1 or "image_aspect_ratio" not in self._config.__dict__: # for multi image case, we treat per image aspect ratio as "pad" by default. + self._config.image_aspect_ratio = getattr(gen_kwargs, "image_aspect_ratio", "pad") + eval_logger.info(f"Setting image aspect ratio: {self._config.image_aspect_ratio}") + # if (len(visual) > 1 or "image_aspect_ratio" not in self._config.__dict__) and ("image_aspect_ratio" in gen_kwargs.keys()): + # self._config.image_aspect_ratio = gen_kwargs["image_aspect_ratio"] + # eval_logger.info(f"Setting image aspect ratio: {self._config.image_aspect_ratio}") + + if type(visual[0]) == PIL.Image.Image: # For image task + image_tensor = process_images(visual, self._image_processor, self._config) + if type(image_tensor) is list: + image_tensor = [_image.to(dtype=torch.float16, device=self.device) for _image in image_tensor] + else: + image_tensor = image_tensor.to(dtype=torch.float16, device=self.device) + + task_type = "image" + + elif type(visual[0]) == str: # For video task + image_tensor = [] + try: + if self.video_decode_backend == "decord": + frames = self.load_video(visual, self.max_frames_num) + elif self.video_decode_backend == "pyav": + frames = read_video_pyav(visual[0], num_frm=self.max_frames_num) + frames = self._image_processor.preprocess(frames, return_tensors="pt")["pixel_values"].half().cuda() + image_tensor.append(frames) + except Exception as e: + eval_logger.error(f"Error {e} in loading video") + image_tensor = None + + task_type = "video" + # encode, pad, and truncate contexts for this batch if type(doc_to_target) == str: continuation = doc_to_target @@ -319,7 +350,7 @@ def loglikelihood(self, requests: List[Instance]) -> List[Tuple[float, bool]]: image_tokens = " ".join(image_tokens) prompts_input = image_tokens + "\n" + (contexts[0] if isinstance(contexts, list) else contexts) else: - question = (contexts[0] if isinstance(contexts, list) else contexts) + question = contexts[0] if isinstance(contexts, list) else contexts # This is much safer for llama3, as we now have some object type in it if "llama_3" in self.conv_template: @@ -349,7 +380,6 @@ def loglikelihood(self, requests: List[Instance]) -> List[Tuple[float, bool]]: self._config.mm_spatial_pool_stride = self.mm_spatial_pool_stride self._config.mm_spatial_pool_mode = self.mm_spatial_pool_mode - with torch.inference_mode(): outputs = self.model(input_ids=input_ids, labels=labels, images=image, use_cache=True, **kwargs) loss = outputs["loss"] diff --git a/lmms_eval/models/tinyllava.py b/lmms_eval/models/tinyllava.py index 1cb6d281c..a4335f054 100755 --- a/lmms_eval/models/tinyllava.py +++ b/lmms_eval/models/tinyllava.py @@ -23,7 +23,9 @@ from loguru import logger as eval_logger try: + from tinyllava.model import load_pretrained_model from tinyllava.data import ImagePreprocess, TextPreprocess + from tinyllava.utils.constants import DEFAULT_IMAGE_TOKEN from tinyllava.utils.message import Message except Exception as e: eval_logger.debug("TinyLLaVA_Factory is not installed. 
Please install TinyLLaVA_Factory to use this model.\nError: %s" % e) diff --git a/lmms_eval/models/video_chatgpt/model/video_chatgpt.py b/lmms_eval/models/video_chatgpt/model/video_chatgpt.py index df6fee4f3..bded27e74 100644 --- a/lmms_eval/models/video_chatgpt/model/video_chatgpt.py +++ b/lmms_eval/models/video_chatgpt/model/video_chatgpt.py @@ -76,7 +76,6 @@ def forward( inputs_embeds = self.embed_tokens(input_ids) if (input_ids.shape[1] != 1 or self.training) and video_spatio_temporal_features is not None: - video_features = self.mm_projector(video_spatio_temporal_features) dummy_video_features = torch.zeros(video_features.shape[1], 1024, device=inputs_embeds.device, dtype=inputs_embeds.dtype) dummy_video_features = self.mm_projector(dummy_video_features) diff --git a/lmms_eval/models/vila.py b/lmms_eval/models/vila.py index 5d48af83f..abec8f765 100755 --- a/lmms_eval/models/vila.py +++ b/lmms_eval/models/vila.py @@ -290,7 +290,6 @@ def generate_until(self, requests) -> List[str]: images = self.load_video(visual, num_video_frames) elif self.video_decode_backend == "pyav": images = read_video_pyav(visual, num_frm=num_video_frames) - video = process_images(images, self.model.image_processor, self.model.config).half().cuda() videos.append(video) @@ -338,7 +337,6 @@ def generate_until(self, requests) -> List[str]: if "num_beams" not in gen_kwargs: gen_kwargs["num_beams"] = 1 - # import pdb;pdb.set_trace() with torch.inference_mode(): output_ids = self.model.generate( input_ids=input_ids, diff --git a/lmms_eval/models/xcomposer2_4KHD.py b/lmms_eval/models/xcomposer2_4KHD.py index e741f637b..6c4f81a77 100644 --- a/lmms_eval/models/xcomposer2_4KHD.py +++ b/lmms_eval/models/xcomposer2_4KHD.py @@ -6,6 +6,7 @@ import torchvision.transforms as transforms from datetime import timedelta + from lmms_eval import utils from lmms_eval.api.instance import Instance from lmms_eval.api.model import lmms diff --git a/lmms_eval/tasks/__init__.py b/lmms_eval/tasks/__init__.py index ce11b8524..8955536c7 100755 --- a/lmms_eval/tasks/__init__.py +++ b/lmms_eval/tasks/__init__.py @@ -73,7 +73,7 @@ def include_task_folder(task_dir: str, register_task: bool = True) -> None: # if (subdirs == [] or subdirs == ["__pycache__"]) and (len(file_list) > 0): for f in file_list: # if "detail" in f: - # import pdb;pdb.set_trace() + # # if "vatex" in f: # print("a") if f.endswith(".yaml"): diff --git a/lmms_eval/tasks/detailcaps/_default_template_detailcaps_yaml b/lmms_eval/tasks/detailcaps/_default_template_detailcaps_yaml new file mode 100644 index 000000000..f673aeb7b --- /dev/null +++ b/lmms_eval/tasks/detailcaps/_default_template_detailcaps_yaml @@ -0,0 +1,3 @@ +model_specific_prompt_kwargs: + default: + prompt: "Describe this image in detail." 
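For context, the DetailCaps template above uses the per-task model_specific_prompt_kwargs convention: the task's doc_to_text hook receives the block matching the current model and falls back to the "default" entry. The snippet below is a hypothetical helper showing how such a lookup could behave; the actual resolution is performed by lmms-eval's task loader.

```python
# Hypothetical illustration of the model_specific_prompt_kwargs convention used
# by task YAMLs such as the DetailCaps template above; not lmms-eval's own code.
def resolve_prompt_kwargs(task_config: dict, model_name: str) -> dict:
    kwargs = task_config.get("model_specific_prompt_kwargs", {})
    # Fall back to the "default" block when there is no model-specific entry.
    return kwargs.get(model_name, kwargs.get("default", {}))


task_config = {"model_specific_prompt_kwargs": {"default": {"prompt": "Describe this image in detail."}}}
print(resolve_prompt_kwargs(task_config, "llava")["prompt"])  # -> Describe this image in detail.
```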
\ No newline at end of file diff --git a/lmms_eval/tasks/detailcaps/detailcaps.yaml b/lmms_eval/tasks/detailcaps/detailcaps.yaml new file mode 100644 index 000000000..c509f965d --- /dev/null +++ b/lmms_eval/tasks/detailcaps/detailcaps.yaml @@ -0,0 +1,46 @@ +dataset_path: foundation-multimodal-models/DetailCaps-4870 +dataset_kwargs: + token: True +task: "detailcaps" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.detailcaps_doc_to_visual +doc_to_text: !function utils.detailcaps_doc_to_text +doc_to_target: !function utils.detailcaps_doc_to_target +generation_kwargs: + max_new_tokens: 512 + temperature: 0 + top_p: 0 + num_beams: 1 + do_sample: false +process_results: !function utils.detailcaps_process_result +# Note that the metric name can be either a registed metric function (such as the case for GQA) or a key name returned by process_results +metric_list: + - metric: detailcaps_CAPTURE + aggregation : !function utils.detailcaps_capture + higher_is_better : true + - metric: detailcaps_Bleu_4 + aggregation : !function utils.detailcaps_bleu4 + higher_is_better : true + - metric: detailcaps_Bleu_3 + aggregation : !function utils.detailcaps_bleu3 + higher_is_better : true + - metric: detailcaps_Bleu_2 + aggregation : !function utils.detailcaps_bleu2 + higher_is_better : true + - metric: detailcaps_Bleu_1 + aggregation : !function utils.detailcaps_bleu1 + higher_is_better : true + - metric: detailcaps_METEOR + aggregation : !function utils.detailcaps_meteor + higher_is_better : true + - metric: detailcaps_ROUGE_L + aggregation : !function utils.detailcaps_rougel + higher_is_better : true + - metric: detailcaps_CIDEr + aggregation : !function utils.detailcaps_cider + higher_is_better : true + +metadata: + - version: 0.0 +include: _default_template_detailcaps_yaml \ No newline at end of file diff --git a/lmms_eval/tasks/detailcaps/utils.py b/lmms_eval/tasks/detailcaps/utils.py new file mode 100644 index 000000000..50c0aba25 --- /dev/null +++ b/lmms_eval/tasks/detailcaps/utils.py @@ -0,0 +1,198 @@ +import collections +import os +import json +from capture_metric.capture import CAPTURE +from pycocoevalcap.eval import COCOEvalCap, Bleu, Meteor, Rouge, Cider, Spice +from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer +from pycocotools.coco import COCO +import io +from PIL import Image + +from lmms_eval.tasks._task_utils.file_utils import generate_submission_file + +import logging + +eval_logger = logging.getLogger("lmms-eval") + +dir_name = os.path.dirname(os.path.abspath(__file__)) + +detailcaps_METRICS = ["CAPTURE", "Bleu_4", "Bleu_3", "Bleu_2", "Bleu_1", "METEOR", "ROUGE_L", "CIDEr"] # , "SPICE"] + + +def detailcaps_doc_to_visual(doc): + return [Image.open(io.BytesIO(doc["binary"])).convert("RGB")] + + +def detailcaps_doc_to_text(doc, model_specific_prompt_kwargs=None): + # question = "Please carefully observe the image and come up with a caption for the image" + return model_specific_prompt_kwargs["prompt"] + + +def detailcaps_doc_to_target(doc): + references = [ + doc["GT_Caption_GPT4O"], + doc["GT_Caption_GPT4V"], + doc["GT_Caption_Gemini15Pro"], + ] + return references + + +def detailcaps_process_result(doc, result): + """ + Args: + doc: a instance of the eval dataset + results: [pred] + Returns: + a dictionary with key: metric name, value: metric value + """ + + pred = result[0] + # The question id in our dataset is the image file itself + image_id = doc["image"] + + data_dict = {"answer": detailcaps_doc_to_target(doc), "pred": pred, "image_id": 
image_id} + + return {f"detailcaps_{metric}": data_dict for metric in detailcaps_METRICS} + + +def check_if_context_is_set(expected_context="spawn"): + # 获取默认上下文的名称 + default_context_name = mp.get_context().get_start_method() + + # 检查当前上下文是否与预期的上下文相匹配 + is_set_to_expected = default_context_name == expected_context + + return is_set_to_expected + + +def detailcaps_aggregation_result(results, metric, args=None): + scorers = [(Bleu(4), "Bleu_1"), (Bleu(4), "Bleu_2"), (Bleu(4), "Bleu_3"), (Bleu(4), "Bleu_4"), (Meteor(), "METEOR"), (Rouge(), "ROUGE_L"), (Cider(), "CIDEr"), (CAPTURE(), "CAPTURE")] + scorers_dict = {s[1]: s for s in scorers} + + stored_results = [] + # In order to make the coco eval tools to successfully create index + # We need at least two dict in the dataset + # 'annotation' and 'images' + # 'annotation' exactly reproduce the original annotation + # 'images' however only need the image id which is contained in the file name + dataset = {"annotations": [], "images": []} + idx = 0 + + for result in results: + stored_results.append({"image_id": result["image_id"], "caption": result["pred"]}) + for a in result["answer"]: + dataset["annotations"].append({"image_id": result["image_id"], "caption": a, "id": idx}) + idx += 1 + dataset["images"].append({"id": result["image_id"]}) + + coco = COCO() + # Manually create index here + coco.dataset = dataset + coco.createIndex() + + detailcaps_result = coco.loadRes(stored_results) + detailcaps_eval = COCOEvalCap(coco, detailcaps_result) + + imgIds = detailcaps_eval.params["image_id"] + gts = {} + res = {} + for imgId in imgIds: + gts[imgId] = detailcaps_eval.coco.imgToAnns[imgId] + res[imgId] = detailcaps_eval.cocoRes.imgToAnns[imgId] + + eval_logger.info("tokenization...") + tokenizer = PTBTokenizer() + + if metric == "CAPTURE": + reorg_gts, reorg_res = collections.defaultdict(list), collections.defaultdict(list) + for _, samples in gts.items(): + for sample in samples: + reorg_gts[sample["image_id"]].append(sample["caption"]) + for _, samples in res.items(): + for sample in samples: + reorg_res[sample["image_id"]].append(sample["caption"]) + gts, res = reorg_gts, reorg_res + else: + gts = tokenizer.tokenize(gts) + res = tokenizer.tokenize(res) + + eval_logger.info(f"Computing {metric} scores...") + + # if int(os.environ.get("RANK", 0)) == 0: + # from IPython import embed; embed() + # else: + # import time; time.sleep(1200) + + score, scores = scorers_dict[metric][0].compute_score(gts, res) + # When metric is one of the Bleu, score will be a list + if type(score) == list: + n = int(metric.split("_")[-1]) + score = score[n - 1] + + path = generate_submission_file(f"detailcaps_val_{metric}_scores.json", args) + eval_logger.info("Storing prediction that can be submitted to the server ...") + with open(path, "w") as f: + json.dump(stored_results, f, indent=4) + eval_logger.info(f"Your result has been saved to {path}.") + + return score + + +def detailcaps_bleu4(results, args=None): + return detailcaps_aggregation_result(results, "Bleu_4", args) + + +def detailcaps_bleu3(results, args=None): + return detailcaps_aggregation_result(results, "Bleu_3", args) + + +def detailcaps_bleu2(results, args=None): + return detailcaps_aggregation_result(results, "Bleu_2", args) + + +def detailcaps_bleu1(results, args=None): + return detailcaps_aggregation_result(results, "Bleu_1", args) + + +def detailcaps_meteor(results, args=None): + return detailcaps_aggregation_result(results, "METEOR", args) + + +def detailcaps_rougel(results, args=None): + return 
detailcaps_aggregation_result(results, "ROUGE_L", args) + + +def detailcaps_cider(results, args=None): + return detailcaps_aggregation_result(results, "CIDEr", args) + + +def detailcaps_spice(results, args=None): + return detailcaps_aggregation_result(results, "SPICE", args) + + +def detailcaps_capture(results, args=None): + return detailcaps_aggregation_result(results, "CAPTURE", args) + + +def detailcaps_test_process_result(doc, result): + """ + Args: + doc: a instance of the eval dataset + results: [pred] + Returns: + a dictionary with key: metric name (in this case detailcaps_passthrough), value: metric value + """ + return {"detailcaps_passthrough": {"pred": result[0], "image_id": doc["image_id"]}} + + +def detailcaps_test_aggregation_result(results, args=None): + stored_results = [] + for result in results: + stored_results.append({"image_id": int(result["image_id"]), "caption": result["pred"]}) + + path = generate_submission_file("detailcaps_captions_detailcaps_test_alg_results.json", args) + eval_logger.info("Storing prediction that can be submitted to the server ...") + with open(path, "w") as f: + json.dump(stored_results, f, indent=4) + + eval_logger.info(f"Your test result has been stored in {path}. Make sure you also have the val result stored to submit to the server on https://codalab.lisn.upsaclay.fr/competitions/7404#participate.") diff --git a/lmms_eval/tasks/gqa_ru/gqa_ru.yaml b/lmms_eval/tasks/gqa_ru/gqa_ru.yaml new file mode 100644 index 000000000..2a3d10972 --- /dev/null +++ b/lmms_eval/tasks/gqa_ru/gqa_ru.yaml @@ -0,0 +1,29 @@ +dataset_path: deepvk/GQA-ru +dataset_name: testdev_balanced_instructions +dataset_kwargs: + token: True +task: "gqa-ru" +test_split: testdev +output_type: generate_until +doc_to_visual: !function utils.gqa_doc_to_visual +doc_to_text: !function utils.gqa_doc_to_text +doc_to_target: "answer" +generation_kwargs: + max_new_tokens: 16 + temperature: 0 + top_p: 1.0 + num_beams: 1 + do_sample: false +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + - version: 0.0 + +model_specific_prompt_kwargs: + default: + pre_prompt: "" + post_prompt: "\nОтветь одним словом." 
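For context, the new gqa-ru task above scores predictions with exact_match under ignore_case and ignore_punctuation. The snippet below is a minimal sketch of what that comparison amounts to; the registered exact_match metric in the harness remains the authoritative implementation.

```python
# Minimal sketch of an exact-match comparison with ignore_case and
# ignore_punctuation enabled, as configured for gqa-ru above.
import string


def exact_match(pred: str, target: str) -> float:
    table = str.maketrans("", "", string.punctuation)
    pred = pred.lower().translate(table).strip()
    target = target.lower().translate(table).strip()
    return 1.0 if pred == target else 0.0


print(exact_match("Кот.", "кот"))  # -> 1.0
```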
\ No newline at end of file diff --git a/lmms_eval/tasks/gqa_ru/utils.py b/lmms_eval/tasks/gqa_ru/utils.py new file mode 100644 index 000000000..9c1acb9ce --- /dev/null +++ b/lmms_eval/tasks/gqa_ru/utils.py @@ -0,0 +1,23 @@ +from datasets import load_dataset + +GQA_RAW_IMAGE_DATASET = None +GQA_ID2IMAGE = None + + +def gqa_doc_to_visual(doc): + global GQA_RAW_IMAGE_DATASET + global GQA_ID2IMAGE + if GQA_RAW_IMAGE_DATASET is None: + GQA_RAW_IMAGE_DATASET = load_dataset("deepvk/GQA-ru", "testdev_balanced_images", split="testdev", token=True) + GQA_ID2IMAGE = {} + for row in GQA_RAW_IMAGE_DATASET: + GQA_ID2IMAGE[row["id"]] = row["image"].convert("RGB") + image = GQA_ID2IMAGE[doc["imageId"]] + return [image] + + +def gqa_doc_to_text(doc, model_specific_prompt_kwargs): + question = doc["question"] + pre_prompt = model_specific_prompt_kwargs["pre_prompt"] + post_prompt = model_specific_prompt_kwargs["post_prompt"] + return f"{pre_prompt}{question}{post_prompt}" diff --git a/lmms_eval/tasks/llava-in-the-wild/llava-in-the-wild.yaml b/lmms_eval/tasks/llava-in-the-wild/llava-in-the-wild.yaml index 02e846c37..c0db28e18 100755 --- a/lmms_eval/tasks/llava-in-the-wild/llava-in-the-wild.yaml +++ b/lmms_eval/tasks/llava-in-the-wild/llava-in-the-wild.yaml @@ -11,7 +11,7 @@ generation_kwargs: until: - "ASSISTANT:" image_aspect_ratio: original - max_new_tokens: 32768 + max_new_tokens: 4096 temperature: 0 top_p: 1.0 num_beams: 1 diff --git a/lmms_eval/tasks/llava_wilder/_default_template_wilder_yaml b/lmms_eval/tasks/llava_wilder/_default_template_wilder_yaml index 356df525f..2484f795c 100644 --- a/lmms_eval/tasks/llava_wilder/_default_template_wilder_yaml +++ b/lmms_eval/tasks/llava_wilder/_default_template_wilder_yaml @@ -4,7 +4,7 @@ doc_to_text: !function utils.llava_doc_to_text doc_to_target: "gpt4v_answer" generation_kwargs: max_new_tokens: 4096 - temperature: 0.7 + temperature: 0 top_p: 1.0 num_beams: 1 do_sample: false diff --git a/lmms_eval/tasks/mix_evals/_default_template_yaml b/lmms_eval/tasks/mix_evals/_default_template_yaml deleted file mode 100644 index ce26b0ea1..000000000 --- a/lmms_eval/tasks/mix_evals/_default_template_yaml +++ /dev/null @@ -1,16 +0,0 @@ -dataset_path: lmms-lab/MixEvals_Video2Text -dataset_kwargs: - token: True - video: True - cache_dir: mix_evals_video2text -model_specific_prompt_kwargs: - default: - pre_prompt: "" - post_prompt: "" - gpt4v: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." 
- post_prompt: "" -metadata: - modality: video - version: 0.0 - gpt_eval_model_name: "gpt-3.5-turbo" \ No newline at end of file diff --git a/lmms_eval/tasks/mix_evals/mix_evals_video2text.yaml b/lmms_eval/tasks/mix_evals/mix_evals_video2text.yaml deleted file mode 100644 index ed0a517c0..000000000 --- a/lmms_eval/tasks/mix_evals/mix_evals_video2text.yaml +++ /dev/null @@ -1,5 +0,0 @@ -group: mix_evals_video2text -task: -- mix_evals_video2text_openconv -- mix_evals_video2text_mc -- mix_evals_video2text_freeform \ No newline at end of file diff --git a/lmms_eval/tasks/mix_evals/mix_evals_video2text_freeform.yaml b/lmms_eval/tasks/mix_evals/mix_evals_video2text_freeform.yaml deleted file mode 100644 index e8ec9b4ad..000000000 --- a/lmms_eval/tasks/mix_evals/mix_evals_video2text_freeform.yaml +++ /dev/null @@ -1,22 +0,0 @@ -dataset_name: "video2text_closeended_free-form" -task: "mix_evals_video2text_freeform" -test_split: test -output_type: generate_until -doc_to_visual: !function utils.mix_evals_video2text_doc_to_visual -doc_to_text: !function utils.mix_evals_video2text_doc_to_text -doc_to_target: "{{target}}" -process_results: !function utils.mix_evals_video2text_process_results_freeform -metric_list: - - metric: gpt_eval - aggregation: !function utils.mix_evals_video2text_gpt_eval - higher_is_better: true - -include: _default_template_yaml - -model_specific_prompt_kwargs: - default: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." - post_prompt: "Answer the question using a single word or phrase." - gpt4v: - pre_prompt: "These are frames from a video. Please answer the following questions about the video with a short phrase." - post_prompt: "" \ No newline at end of file diff --git a/lmms_eval/tasks/mix_evals/mix_evals_video2text_mc.yaml b/lmms_eval/tasks/mix_evals/mix_evals_video2text_mc.yaml deleted file mode 100644 index d04dabf42..000000000 --- a/lmms_eval/tasks/mix_evals/mix_evals_video2text_mc.yaml +++ /dev/null @@ -1,31 +0,0 @@ -include: _default_template_yaml -dataset_name: "video2text_closeended_multiple-choice" -task: "mix_evals_video2text_mc" -test_split: test -output_type: generate_until -doc_to_visual: !function utils.mix_evals_video2text_doc_to_visual -doc_to_text: !function utils.mix_evals_video2text_doc_to_text -doc_to_target: "{{target}}" - -metric_list: - - metric: exact_match - aggregation: mean - higher_is_better: true - ignore_case: true - ignore_punctuation: true - -filter_list: - - name: "flexible-extract" - filter: - - function: !function utils.MultiChoiceRegexFilter - group_select: 0 - ignore_case: true - ignore_punctuation: true - -model_specific_prompt_kwargs: - default: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." - post_prompt: "Answer with the option's letter from the given choices directly." - gpt4v: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." - post_prompt: "Answer with the option's letter from the given choices directly." 
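For context, the removed multiple-choice config above relied on a "flexible-extract" MultiChoiceRegexFilter to pull the chosen option letter out of a free-form response before exact_match scoring. Below is a generic sketch of that kind of extraction; the real filter built on lmms_eval.filters.extraction is considerably more robust than this.

```python
# Generic sketch of extracting an option letter from a free-form answer, in the
# spirit of the "flexible-extract" filter referenced above; illustrative only.
import re
from typing import Optional


def extract_choice_letter(response: str, letters: str = "ABCDE") -> Optional[str]:
    match = re.search(rf"\b([{letters}])\b", response.strip().upper())
    return match.group(1) if match else None


print(extract_choice_letter("The answer is (B)."))  # -> B
```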
\ No newline at end of file diff --git a/lmms_eval/tasks/mix_evals/mix_evals_video2text_openended.yaml b/lmms_eval/tasks/mix_evals/mix_evals_video2text_openended.yaml deleted file mode 100644 index eb3fca3db..000000000 --- a/lmms_eval/tasks/mix_evals/mix_evals_video2text_openended.yaml +++ /dev/null @@ -1,22 +0,0 @@ -include: _default_template_yaml -dataset_name: "video2text_openended" -task: "mix_evals_video2text_openconv" -test_split: test -output_type: generate_until -doc_to_visual: !function utils.mix_evals_video2text_doc_to_visual -doc_to_text: !function utils.mix_evals_video2text_doc_to_text_open_convs -doc_to_target: "" -process_results: !function utils.mix_evals_video2text_process_results_open_convs - -metric_list: - - metric: submission - aggregation: !function utils.mix_evals_video2text_aggregate_gen - higher_is_better: true - -model_specific_prompt_kwargs: - default: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." - post_prompt: "" - gpt4v: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." - post_prompt: "" diff --git a/lmms_eval/tasks/mix_evals/mix_evals_video2text_openended_2nd.yaml b/lmms_eval/tasks/mix_evals/mix_evals_video2text_openended_2nd.yaml deleted file mode 100644 index e8c0f1fe7..000000000 --- a/lmms_eval/tasks/mix_evals/mix_evals_video2text_openended_2nd.yaml +++ /dev/null @@ -1,23 +0,0 @@ -include: _default_template_yaml -dataset_path: lmms-lab/MixEvals_Video2Text_OpenEnded_2nd -dataset_name: "video2text_openended" -task: "mix_evals_video2text_openconv_2nd" -test_split: test -output_type: generate_until -doc_to_visual: !function utils.mix_evals_video2text_doc_to_visual -doc_to_text: !function utils.mix_evals_video2text_doc_to_text_open_convs -doc_to_target: "" -process_results: !function utils.mix_evals_video2text_process_results_open_convs - -metric_list: - - metric: submission - aggregation: !function utils.mix_evals_video2text_aggregate_gen - higher_is_better: true - -model_specific_prompt_kwargs: - default: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." - post_prompt: "" - gpt4v: - pre_prompt: "These are frames from a video. Please answer the following questions about the video." 
- post_prompt: "" diff --git a/lmms_eval/tasks/mix_evals/utils.py b/lmms_eval/tasks/mix_evals/utils.py deleted file mode 100644 index 6c3e6e6c3..000000000 --- a/lmms_eval/tasks/mix_evals/utils.py +++ /dev/null @@ -1,284 +0,0 @@ -import os -import re -import sys -import datetime -import lmms_eval.tasks._task_utils.file_utils as file_utils -from lmms_eval.filters.extraction import ExtendedRegexFilter -import json -import yaml -from pathlib import Path -import requests -import time -from loguru import logger as eval_logger - -with open(Path(__file__).parent / "_default_template_yaml", "r") as f: - raw_data = f.readlines() - safe_data = [] - for i, line in enumerate(raw_data): - # remove function definition since yaml load cannot handle it - if "!function" not in line: - safe_data.append(line) - - config = yaml.safe_load("".join(safe_data)) - -NUM_SECONDS_TO_SLEEP = 5 -GPT_EVAL_MODEL_NAME = config["metadata"]["gpt_eval_model_name"] -API_TYPE = os.getenv("API_TYPE", "openai") - -if API_TYPE == "openai": - API_URL = os.getenv("OPENAI_API_URL", "https://api.openai.com/v1/chat/completions") - API_KEY = os.getenv("OPENAI_API_KEY", "YOUR_API_KEY") - headers = { - "Authorization": f"Bearer {API_KEY}", - "Content-Type": "application/json", - } -elif API_TYPE == "azure": - API_URL = os.getenv("AZURE_ENDPOINT", "https://api.cognitive.microsoft.com/sts/v1.0/issueToken") - API_KEY = os.getenv("AZURE_API_KEY", "YOUR_API_KEY") - headers = { - "api-key": API_KEY, - "Content-Type": "application/json", - } - -eval_prompt = """You are an AI assistant who will help me to evaluate the quality of a model response to a few candidate ground truth answers. - -Some criterion -- Response that perfectly reflect the meaning of the ground truth: 1 point -- Response that reflect none of the key points in the ground truth: 0 point -- Some part in the response are correct but some parts in the ground truth are not mentioned in the response: 0.5 point -- Some part in the response are correct but other parts in the response are not mentioned in the ground truth: 0.5 point - -Here're some examples about the scoring criterion and format: -model response: Steam Cleaning Services -ground truth: ["steam clean", "steam clean", "cleaning", "car", "steam clean"], -Point: 1 - -model response: A cowboy action shooter. -ground truth: ["man"] -Point: 1 - -model response: I'm sorry, but I can't assist with that request. 
-ground truth: ["quality"] -Point: 0 - -Let's begin this task: -model response: {model_response} -ground truth: {ground_truth} -Point:""" - - -def get_eval(model_response: str, ground_truth: str, max_tokens: int, retries: int = 5): - global headers - content = eval_prompt.format(model_response=model_response, ground_truth=ground_truth) - - messages = [ - {"role": "user", "content": content}, - ] - - payload = { - "model": GPT_EVAL_MODEL_NAME, - "messages": messages, - "temperature": 0.2, - "max_tokens": max_tokens, - } - - for attempt in range(retries): - try: - response = requests.post(API_URL, headers=headers, json=payload, timeout=60) - response.raise_for_status() - response_data = response.json() - - content = response_data["choices"][0]["message"]["content"].strip() - if content != "": - return content, response_data["model"] - break # If successful, break out of the loop - - except Exception as e: - eval_logger.info(f"Attempt {attempt + 1} failed with error: {e}") - if attempt < retries: # If we have retries left, sleep and then continue to next attempt - time.sleep(NUM_SECONDS_TO_SLEEP) - else: # If this was the last attempt, log and return empty - eval_logger.error(f"All {retries} attempts failed. Last error message: {e}") - return "", "" - return "", "" - - -# A bit ugly here -# But the idea is that we will unzip all the zip files -# To HF HOME cache dir -# And load it here -HF_HOME = os.environ["HF_HOME"] -cache_dir = config["dataset_kwargs"]["cache_dir"] -cache_dir = os.path.join(HF_HOME, cache_dir) -cache_dir = os.path.join(cache_dir) - - -# Pass in video path here -# Can only work correctly with video llm -def mix_evals_video2text_doc_to_visual(doc): - video_path = doc["video_path"] - video_path = os.path.join(cache_dir, video_path) - if os.path.exists(video_path): - video_path = video_path - elif os.path.exists(video_path.replace("mp4", "MP4")): - video_path = video_path.replace("mp4", "MP4") - else: - sys.exit(f"video path:{video_path} does not exist, please check") - return [video_path] - - -# This is the place where you format your question -def mix_evals_video2text_doc_to_text(doc, model_specific_prompt_kwargs=None): - if model_specific_prompt_kwargs is None: - model_specific_prompt_kwargs = {} - pre_prompt = "" - post_prompt = "" - if "pre_prompt" in model_specific_prompt_kwargs: - pre_prompt = model_specific_prompt_kwargs["pre_prompt"] - if "post_prompt" in model_specific_prompt_kwargs: - post_prompt = model_specific_prompt_kwargs["post_prompt"] - - user_prompt = doc["prompt"] - - if "options" in doc: - option_prompt = "Here are the options:\n" - for idx, option in enumerate(doc["options"]): - char_idx = chr(ord("A") + idx) - option = option.strip() - option_prompt += f"{char_idx}. 
{option}\n" - - option_prompt = option_prompt.rstrip("\n") - user_prompt = f"{user_prompt}\n{option_prompt}" - - if pre_prompt: - user_prompt = f"{pre_prompt}\n{user_prompt}" - - if post_prompt: - user_prompt = f"{user_prompt}\n{post_prompt}" - return user_prompt - - -OPEN_CONVS_PROMPT = """{PRE} -{FIRST} -{POST} -""" - - -def mix_evals_video2text_doc_to_text_open_convs(doc, model_specific_prompt_kwargs=None): - if model_specific_prompt_kwargs is None: - model_specific_prompt_kwargs = {} - pre_prompt = "" - post_prompt = "" - if "pre_prompt" in model_specific_prompt_kwargs: - pre_prompt = model_specific_prompt_kwargs["pre_prompt"] - if "post_prompt" in model_specific_prompt_kwargs: - post_prompt = model_specific_prompt_kwargs["post_prompt"] - - filtered_first_turn = re.sub(r"", "", doc["first_turn_user_prompt"]) - return OPEN_CONVS_PROMPT.format( - PRE=pre_prompt, - POST=post_prompt, - FIRST=filtered_first_turn, - ) - - -MODEL_CONVS_PROMPT = """{FIRST} -{MODEL_RESPONSE} -{PRE} -{SECOND} -{POST} -""" - - -def mix_evals_video2text_doc_to_text_open_2nd_convs(doc, model_specific_prompt_kwargs=None): - if model_specific_prompt_kwargs is None: - model_specific_prompt_kwargs = {} - pre_prompt = "" - post_prompt = "" - if "pre_prompt" in model_specific_prompt_kwargs: - pre_prompt = model_specific_prompt_kwargs["pre_prompt"] - if "post_prompt" in model_specific_prompt_kwargs: - post_prompt = model_specific_prompt_kwargs["post_prompt"] - - return MODEL_CONVS_PROMPT.format( - PRE=pre_prompt, - POST=post_prompt, - FIRST=doc["first_turn_user_prompt"], - SECOND=doc["second_turn_user_prompt"], - MODEL_RESPONSE=doc["model_response"], - ) - - -def mix_evals_video2text_process_results_open_convs(doc, result): - pred = result[0] - return {"submission": {"pred": pred, "question_idx": doc["question_index"], "first_turn_video_caption": doc["first_turn_video_caption"], "target": ""}} - - -def mix_evals_video2text_process_results_freeform(doc, result): - pred = result[0] - ground_truth_str = ", ".join([f'"{gt}"' for gt in doc["target"]]) - ground_truth_str = f"[{ground_truth_str}]" - content = eval_prompt.format(model_response=pred, ground_truth=ground_truth_str) - eval_answer, model_name = get_eval(model_response=pred, ground_truth=ground_truth_str, max_tokens=1024) - return { - "submission": {"pred": pred, "question_idx": doc["question_index"], "target": doc["target"], "eval_answer": eval_answer, "gpt_prompt": content}, - "gpt_eval": {"pred": pred, "question_idx": doc["question_index"], "target": doc["target"], "eval_answer": eval_answer, "gpt_prompt": content}, - } - - -def mix_evals_video2text_aggregate_submissions(results, args, task): - now_date_time = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") - submission_file_name = f"mix_evals_video2text_{task}-{now_date_time}.json" - path = file_utils.generate_submission_file(submission_file_name, args) - with open(path, "w") as f: - json.dump(results, f) - eval_logger.info(f"Submission file saved to {path}") - - -def mix_evals_video2text_gpt_eval(results, args): - score = 0 - for result in results: - eval_answer = result["eval_answer"] - eval_score = re.search(r"([0-9.]+)", eval_answer).group(1) - try: - eval_score = float(eval_score) - except Exception as e: - eval_logger.error(f"Error parsing eval_score: {e}") - eval_score = 0.0 - score += eval_score - - return score / len(results) - - -# Factory into different aggregate -def mix_evals_video2text_aggregate_gen(results, args): - mix_evals_video2text_aggregate_submissions(results, args, "OpenConvs") - - 
-class MultiChoiceRegexFilter(ExtendedRegexFilter): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def apply(self, resps, docs): - filtered_resps = [] - - for r, doc in zip(resps, docs): - # Regex to directly extract the option letter from the model response - option_letter_regex = re.compile(r"\b([A-Z])\.\s+([^\n]*)") - - # Process each response - filtered = [] - for resp in r: - # Try to match the option letter at the start of the response - match = option_letter_regex.match(resp) - if match: - # If a match is found, append the matched letter - filtered.append(match.group(1)) - else: - # If no match, return the original response - filtered.append(resp) - - # Assuming we need the first response that matches or the original response - filtered_resps.append(filtered[0]) - - return filtered_resps diff --git a/lmms_eval/tasks/mlvu/utils.py b/lmms_eval/tasks/mlvu/utils.py index 046d5423a..9829ee8af 100644 --- a/lmms_eval/tasks/mlvu/utils.py +++ b/lmms_eval/tasks/mlvu/utils.py @@ -29,7 +29,6 @@ def mlvu_doc_to_visual(doc): - cache_dir = os.path.join(base_cache_dir, cache_name) video_path = doc["video_name"] video_path = os.path.join(cache_dir, video_path) diff --git a/lmms_eval/tasks/mmbench/_default_template_mmbench_ru_yaml b/lmms_eval/tasks/mmbench/_default_template_mmbench_ru_yaml new file mode 100644 index 000000000..993cd52ad --- /dev/null +++ b/lmms_eval/tasks/mmbench/_default_template_mmbench_ru_yaml @@ -0,0 +1,24 @@ +dataset_path: deepvk/MMBench-ru +dataset_kwargs: + token: True +doc_to_target: "answer" +model_specific_prompt_kwargs: + default: + pre_prompt: "" + post_prompt: "\nВыбери правильный вариант ответа буквой." +doc_to_visual: !function ru_utils.mmbench_doc_to_visual +doc_to_text: !function ru_utils.mmbench_doc_to_text +doc_to_target: "answer" +process_results: !function ru_utils.mmbench_process_results +model_specific_generation_kwargs: + llava: + image_aspect_ratio: original +output_type: generate_until +generation_kwargs: + until: + - "ASSISTANT:" + max_new_tokens: 1024 + temperature: 0 + top_p: 1.0 + num_beams: 1 + do_sample: false diff --git a/lmms_eval/tasks/mmbench/mmbench.yaml b/lmms_eval/tasks/mmbench/mmbench.yaml index 821065eea..f2546aa5f 100755 --- a/lmms_eval/tasks/mmbench/mmbench.yaml +++ b/lmms_eval/tasks/mmbench/mmbench.yaml @@ -5,6 +5,7 @@ task: - mmbench_cn_dev - mmbench_cn_test - mmbench_cn_cc + - mmbench_ru_dev metadata: version: 0.0 sys_prompt: "There are several options:" diff --git a/lmms_eval/tasks/mmbench/mmbench_ru_dev.yaml b/lmms_eval/tasks/mmbench/mmbench_ru_dev.yaml new file mode 100644 index 000000000..46407ae46 --- /dev/null +++ b/lmms_eval/tasks/mmbench/mmbench_ru_dev.yaml @@ -0,0 +1,10 @@ +task: "mmbench_ru_dev" +test_split: dev +include: _default_template_mmbench_ru_yaml +metric_list: + - metric: gpt_eval_score + aggregation: !function ru_utils.mmbench_aggregate_dev_results_eval + higher_is_better: true + - metric: submission + aggregation: !function ru_utils.mmbench_aggregate_dev_results_submission + higher_is_better: true \ No newline at end of file diff --git a/lmms_eval/tasks/mmbench/ru_utils.py b/lmms_eval/tasks/mmbench/ru_utils.py new file mode 100644 index 000000000..3ff515c7c --- /dev/null +++ b/lmms_eval/tasks/mmbench/ru_utils.py @@ -0,0 +1,128 @@ +import yaml +import os +from pathlib import Path +import pandas as pd +import json + +from loguru import logger as eval_logger +from lmms_eval.tasks.mmbench.mmbench_evals import MMBench_Evaluator +from lmms_eval.tasks._task_utils.file_utils import 
generate_submission_file + +with open(Path(__file__).parent / "mmbench.yaml", "r") as f: + raw_data = f.readlines() + safe_data = [] + for i, line in enumerate(raw_data): + # remove function definition since yaml load cannot handle it + if "!function" not in line: + safe_data.append(line) + + config = yaml.safe_load("".join(safe_data)) + +GPT_EVAL_MODEL_NAME = config["metadata"]["gpt_eval_model_name"] +API_TYPE = os.getenv("API_TYPE", "openai") + +if API_TYPE == "openai": + API_URL = os.getenv("OPENAI_API_URL", "https://api.openai.com/v1/chat/completions") + API_KEY = os.getenv("OPENAI_API_KEY", "YOUR_API_KEY") +elif API_TYPE == "azure": + API_URL = os.getenv("AZURE_ENDPOINT", "https://api.cognitive.microsoft.com/sts/v1.0/issueToken") + API_KEY = os.getenv("AZURE_API_KEY", "YOUR_API_KEY") +else: + API_URL = "YOUR_API_URL" + API_KEY = "YOUR_API_KEY" + + +mmbench_evaluator = MMBench_Evaluator(sys_prompt=config["metadata"]["sys_prompt"], API_KEY=API_KEY, API_URL=API_URL, model_version=GPT_EVAL_MODEL_NAME) + + +def mmbench_doc_to_visual(doc): + return [doc["image"].convert("RGB")] + + +def mmbench_doc_to_text(doc, model_specific_prompt_kwargs=None): + option_candidate = ["A", "B", "C", "D", "E"] + options_prompt, options_dict = mmbench_evaluator.create_options_prompt(doc, option_candidate) + + data = { + # "img": doc["image"], + "question": doc["question"], + "answer": doc.get("answer", None), + "options": options_prompt, + "category": doc["category"], + "L2-category": doc["l2-category"], + "options_dict": options_dict, + "index": doc["index"], + "hint": doc["hint"], + "source": doc["source"], + "split": doc["split"], + } + + query_prompt = f"{data['hint']} {data['question']} {data['options']}" if pd.notna(data["hint"]) and data["hint"] != "nan" else f"{data['question']} {data['options']}" + + if model_specific_prompt_kwargs: + query_prompt = f"{query_prompt}\n{model_specific_prompt_kwargs['post_prompt']}" + + return query_prompt + + +def mmbench_process_results(doc, results): + model_response = results[0].strip() + data = { + "gpt_eval_score": { + "index": doc["index"], + "question": doc["question"], + "answer": doc["answer"], + "prediction": model_response, + "hint": doc["hint"], + "source": doc["source"], + "split": doc["split"], + "category": doc["category"], + "L2-category": doc["l2-category"], + }, + "submission": { + "index": doc["index"], + "question": doc["question"], + "answer": doc["answer"], + "prediction": model_response, + "hint": doc["hint"], + "source": doc["source"], + "split": doc["split"], + "category": doc["category"], + "L2-category": doc["l2-category"], + }, + } + option_candidate = ["A", "B", "C", "D", "E"] + for c in option_candidate: + data["submission"][c] = doc.get(c, "nan") + data["gpt_eval_score"][c] = doc.get(c, "nan") + return data + + +def mmbench_aggregate_dev_results_eval(results, args): + print(f"============= MMBench-RU(Dev) Detailed Results =============") + overall_acc, category_acc, l2_category_acc = mmbench_evaluator.eval_result(results, eval_method="openai") + file = generate_submission_file("mmbench_ru_dev_results.json", args) + details_info = { + "overall_acc": overall_acc, + "category_acc": category_acc, + "l2_category_acc": l2_category_acc, + } + with open(file, "w") as f: + json.dump(details_info, f) + return overall_acc * 100 + + +def mmbench_aggregate_dev_results_submission(results, args): + df = pd.DataFrame(results) + excel_write_path = generate_submission_file("mmbench_ru_dev_results.xlsx", args) + with pd.ExcelWriter(excel_write_path) as 
writer: + df.to_excel(writer, index=False) + eval_logger.info(f"Saved results to {excel_write_path}") + + +def mmbench_aggregate_test_results(results, args): + df = pd.DataFrame(results) + excel_write_path = generate_submission_file("mmbench_ru_test_results.xlsx", args) + with pd.ExcelWriter(excel_write_path) as writer: + df.to_excel(writer, index=False) + eval_logger.info(f"Saved results to {excel_write_path}") diff --git a/lmms_eval/tasks/mmstar/mmstar.yaml b/lmms_eval/tasks/mmstar/mmstar.yaml new file mode 100644 index 000000000..845978460 --- /dev/null +++ b/lmms_eval/tasks/mmstar/mmstar.yaml @@ -0,0 +1,37 @@ +dataset_path: Lin-Chen/MMStar +dataset_kwargs: + token: True +task: "mmstar" +test_split: val +output_type: generate_until +doc_to_visual: !function utils.mmstar_doc_to_visual +doc_to_text: !function utils.mmstar_doc_to_text +doc_to_target: "answer" +# The return value of process_results will be used by metrics +process_results: !function utils.mmstar_process_results +# Note that the metric name can be either a registed metric function (such as the case for GQA) or a key name returned by process_results +metric_list: + - metric: coarse perception + aggregation: !function utils.mmstar_aggregate_results + higher_is_better: true + - metric: fine-grained perception + aggregation: !function utils.mmstar_aggregate_results + higher_is_better: true + - metric: instance reasoning + aggregation: !function utils.mmstar_aggregate_results + higher_is_better: true + - metric: logical reasoning + aggregation: !function utils.mmstar_aggregate_results + higher_is_better: true + - metric: science & technology + aggregation: !function utils.mmstar_aggregate_results + higher_is_better: true + - metric: math + aggregation: !function utils.mmstar_aggregate_results + higher_is_better: true +model_specific_prompt_kwargs: + default: + pre_prompt: "" + post_prompt: "\nAnswer with the option's letter from the given choices directly" +metadata: + - version: 0.0 diff --git a/lmms_eval/tasks/mmstar/utils.py b/lmms_eval/tasks/mmstar/utils.py new file mode 100644 index 000000000..66933f299 --- /dev/null +++ b/lmms_eval/tasks/mmstar/utils.py @@ -0,0 +1,120 @@ +from collections import defaultdict +import os +import datetime +import json +from lmms_eval.tasks._task_utils.file_utils import generate_submission_file + + +from loguru import logger as eval_logger + +dir_name = os.path.dirname(os.path.abspath(__file__)) + +eval_type_dict = { + "coarse perception" : [ + "image scene and topic", + "image style & quality", + "image emotion" + ], + "fine-grained perception" : [ + "object counting", + "recognition", + "localization" + ], + "instance reasoning" : [ + "single-instance reasoning", + "cross-instance attribute reasoning", + "cross-instance relation reasoning" + ], + "logical reasoning" : [ + "code & sequence reasoning", + "diagram reasoning", + "common reasoning" + ], + "science & technology" : [ + "biology & chemistry & physics", + "electronics & energy & mechanical eng.", + "geography & earth science & agriculture" + ], + "math" : [ + "geometry", + "numeric commonsense and calculation", + "statistical reasoning" + ] +} + + +replace_prompt = " Please answer yes or no." 
+ + +def mmstar_doc_to_visual(doc): + return [doc["image"].convert("RGB")] + + +def mmstar_doc_to_text(doc, model_specific_prompt_kwargs=None): + question = doc["question"].strip() + if "pre_prompt" in model_specific_prompt_kwargs and model_specific_prompt_kwargs["pre_prompt"] != "": + question = question.replace(replace_prompt, "") + question = f"{model_specific_prompt_kwargs['pre_prompt']}{question}" + if "post_prompt" in model_specific_prompt_kwargs and model_specific_prompt_kwargs["post_prompt"] != "": + question = question.replace(replace_prompt, "") + question = f"{question}{model_specific_prompt_kwargs['post_prompt']}" + return question + + +def exact_match(pred, gt): + """Brought from MMStar""" + answer = gt.lower().strip().replace('\n', ' ') + predict = pred.lower().strip().replace('\n', ' ') + try: + if answer == predict[0]: + return 1.0 + elif predict[0] == '(' and answer == predict[1]: + return 1.0 + elif predict[0:7] == 'option ' and answer == predict[7]: + return 1.0 + elif predict[0:14] == 'the answer is ' and answer == predict[14]: + return 1.0 + except Exception as e: + return 0.0 + return 0.0 + + +def mmstar_process_results(doc, results): + """ + Args: + doc: a instance of the eval dataset + results: [pred] + Returns: + a dictionary with key: metric name, value: metric value + """ + pred = results[0] + gt = doc["answer"] + + score = exact_match(pred, gt) + category = doc["category"] + l2_category = doc["l2_category"] + return {category: {"question_id": doc["index"], "l2_category": l2_category, "score": score}} + + +def mmstar_aggregate_results(results): + """ + Args: + results: a list of values returned by process_results + Returns: + A score + """ + l2_category_scores = defaultdict(list) + for result in results: + score = result["score"] + l2_category = result["l2_category"] + l2_category_scores[l2_category].append(score) + + l2_category_avg_score = {} + for l2_category, scores in l2_category_scores.items(): + avg_score = sum(scores) / len(scores) + l2_category_avg_score[l2_category] = avg_score + eval_logger.info(f"{l2_category}: {avg_score:.2f}") + + avg_score = sum(l2_category_avg_score.values()) / len(l2_category_avg_score) + return avg_score + \ No newline at end of file diff --git a/lmms_eval/tasks/mmvet/mmvet.yaml b/lmms_eval/tasks/mmvet/mmvet.yaml index 30c1907aa..827856db6 100755 --- a/lmms_eval/tasks/mmvet/mmvet.yaml +++ b/lmms_eval/tasks/mmvet/mmvet.yaml @@ -8,10 +8,8 @@ doc_to_visual: !function utils.mmvet_doc_to_visual doc_to_text: !function utils.doc_to_text # Such that {{question}} will be replaced by doc["question"] doc_to_target: "{{answer}}" generation_kwargs: - until: - - "ASSISTANT:" - max_new_tokens: 32768 - temperature: 0 + max_new_tokens: 1024 + temperature: 0.2 top_p: 1.0 num_beams: 1 do_sample: false @@ -25,5 +23,5 @@ metadata: gpt_eval_model_name: "gpt-4-0613" model_specific_prompt_kwargs: default: - pre_prompt: "Please think step by step and try to provide best answer to the following question: \n\n" + pre_prompt: "First please perform reasoning, and think step by step to provide best answer to the following question: \n\n" post_prompt: "" diff --git a/lmms_eval/tasks/muirbench/muirbench.yaml b/lmms_eval/tasks/muirbench/muirbench.yaml index 43b8ab7cc..896314772 100644 --- a/lmms_eval/tasks/muirbench/muirbench.yaml +++ b/lmms_eval/tasks/muirbench/muirbench.yaml @@ -1,4 +1,3 @@ - dataset_path: MUIRBENCH/MUIRBENCH task: "muirbench" dataset_kwargs: diff --git a/lmms_eval/tasks/videomme/utils.py b/lmms_eval/tasks/videomme/utils.py index 
401b02a04..c9ec7aa6e 100644 --- a/lmms_eval/tasks/videomme/utils.py +++ b/lmms_eval/tasks/videomme/utils.py @@ -133,6 +133,43 @@ def extract_subtitles(video_path, subtitle_path): return subtitle_frames, total_frame +def parse_subtitle_time(time_str): + h, m, s_ms = time_str.split(':') + s, ms = s_ms.split(',') + return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000 + +def load_subtitles(subtitle_path): + subtitles = {} + with open(subtitle_path, 'r', encoding='utf-8') as file: + content = file.read().split('\n\n') + for section in content: + if section.strip(): + lines = section.split('\n') + if len(lines) >= 3: + time_range = lines[1].split(' --> ') + start_time = parse_subtitle_time(time_range[0]) + end_time = parse_subtitle_time(time_range[1]) + text = ' '.join(line for line in lines[2:]) + subtitles[(start_time, end_time)] = text + return subtitles + +def convert_time_to_frame(time_in_seconds, fps): + return int(time_in_seconds * fps) + +def extract_subtitles(video_path, subtitle_path): + video = cv2.VideoCapture(video_path) + fps = video.get(cv2.CAP_PROP_FPS) + total_frame=int(video.get(cv2.CAP_PROP_FRAME_COUNT)) + subtitles = load_subtitles(subtitle_path) + + subtitle_frames = [] + for (start_time, end_time), text in subtitles.items(): + start_frame = convert_time_to_frame(start_time, fps) + end_frame = convert_time_to_frame(end_time, fps) + subtitle_frames.append((start_frame, end_frame, text)) + + return subtitle_frames,total_frame + def videomme_doc_to_visual(doc): cache_dir = os.path.join(base_cache_dir, cache_name) video_path = doc["videoID"] + ".mp4" @@ -149,7 +186,71 @@ def videomme_doc_to_visual(doc): def videomme_doc_to_text(doc, model_specific_prompt_kwargs=None): - option_prompt = "Select the best answer to the following multiple-choice question based on the video and the subtitles. Respond with only the letter (A, B, C, or D) of the correct option." + option_prompt="Select the best answer to the following multiple-choice question based on the video and the subtitles. Respond with only the letter (A, B, C, or D) of the correct option." + question = doc["question"] + option = str(doc["options"]) + question = question + "\n" + option + full_prompt=option_prompt+"\n"+question+"\n"+"The best answer is:" + return full_prompt +# Frames + Subs +# This video's subtitles are listed below: +# 【subtitles】 + +# Select the best answer to the following multiple-choice question based on the video and the subtitles. Respond with only the letter (A, B, C, or D) of the correct option. +# 【question】 +# The best answer is: +# Frames / Frames + Audio +# Select the best answer to the following multiple-choice question based on the video. Respond with only the letter (A, B, C, or D) of the correct option. 
+# 【question】
+# The best answer is:
+
+def videomme_doc_to_text_subtitle(doc, model_specific_prompt_kwargs=None):
+    cache_dir = os.path.join(base_cache_dir, cache_name)
+    video_path = doc["videoID"] + ".mp4"
+    subtitle_path=os.path.join(cache_dir,"subtitle",doc["videoID"]+".srt")
+    video_path = os.path.join(cache_dir, video_path)
+    if os.path.exists(subtitle_path): #Denote have subtitle
+        subtitle=open(subtitle_path).readlines()
+    else:
+        subtitle=""
+    subtitles_prompt="This video's subtitles are listed below: \n"
+    if subtitle=="":
+        subtitle="No subtitles available"
+    else:
+        if "gemini_api_flag" in model_specific_prompt_kwargs: #specific for gemini_api
+            if model_specific_prompt_kwargs['gemini_api_flag']=="full subtitle":
+                textlist=[]
+                for ele in subtitle:
+                    pattern = r'(.*?)'
+                    matches = re.findall(pattern, ele)
+                    if matches:
+                        textlist.append(matches[0])
+                subtitle_text="\n".join(textlist)
+        else:
+            if "frame_num" in model_specific_prompt_kwargs:
+                frame_num=model_specific_prompt_kwargs['frame_num']
+                subtitle_by_frame,total_frame=extract_subtitles(video_path,subtitle_path)
+                uniform_sampled_frames = np.linspace(0, total_frame - 1, frame_num, dtype=int).tolist()
+
+                subtitle_by_frame_idx=[]
+                for frame_idx in uniform_sampled_frames:
+                    for idx,title in enumerate(subtitle_by_frame):
+                        if frame_idx < title[1] and frame_idx >= title[0]:
+                            subtitle_by_frame_idx.append(idx)
+                subtitle_by_frame_idx=list(set(subtitle_by_frame_idx))
+
+                textlist=[]
+                for idx in subtitle_by_frame_idx:
+                    pattern = r'(.*?)'
+                    raw_text=re.findall(pattern, subtitle_by_frame[idx][2])
+                    try:
+                        textlist.append(raw_text[0])
+                    except:
+                        continue
+                subtitle_text="\n".join(textlist)
+                subtitle=subtitle_text
+
+    option_prompt="Select the best answer to the following multiple-choice question based on the video and the subtitles. Respond with only the letter (A, B, C, or D) of the correct option."
question = doc["question"] option = str(doc["options"]) # option = "\n".join([f"{opt}" for i, opt in enumerate(doc["options"])]) diff --git a/lmms_eval/tasks/vitatecs/_default_template_yaml b/lmms_eval/tasks/vitatecs/_default_template_yaml new file mode 100644 index 000000000..e93ce7d5e --- /dev/null +++ b/lmms_eval/tasks/vitatecs/_default_template_yaml @@ -0,0 +1,9 @@ +dataset_path: lscpku/VITATECS +dataset_kwargs: + token: True + video: True + cache_dir: vitatecs +model_specific_prompt_kwargs: + default: + pre_prompt: "" + post_prompt: "\nPlease response with a single letter (A or B):" \ No newline at end of file diff --git a/lmms_eval/tasks/vitatecs/_vitatecs.yaml b/lmms_eval/tasks/vitatecs/_vitatecs.yaml new file mode 100755 index 000000000..07677c284 --- /dev/null +++ b/lmms_eval/tasks/vitatecs/_vitatecs.yaml @@ -0,0 +1,8 @@ +group: vitatecs +task: +- vitatecs_direction +- vitatecs_intensity +- vitatecs_sequence +- vitatecs_compositionality +- vitatecs_localization +- vitatecs_type diff --git a/lmms_eval/tasks/vitatecs/utils.py b/lmms_eval/tasks/vitatecs/utils.py new file mode 100644 index 000000000..b2adcaa96 --- /dev/null +++ b/lmms_eval/tasks/vitatecs/utils.py @@ -0,0 +1,225 @@ +from decord import VideoReader, cpu +import numpy as np +import os +import sys +import datetime +import lmms_eval.tasks._task_utils.file_utils as file_utils +import json +import logging +import yaml +from pathlib import Path + +import requests +import openai +from openai import OpenAI +import time +import ast +from tqdm import tqdm +import random + +import re + +with open(Path(__file__).parent / "_default_template_yaml", "r") as f: + raw_data = f.readlines() + safe_data = [] + for i, line in enumerate(raw_data): + # remove function definition since yaml load cannot handle it + if "!function" not in line: + safe_data.append(line) + + config = yaml.safe_load("".join(safe_data)) + + +API_TYPE = os.getenv("API_TYPE", "openai") + +if API_TYPE == "openai": + API_URL = os.getenv("OPENAI_API_URL", "https://api.openai.com/v1/chat/completions") + API_KEY = os.getenv("OPENAI_API_KEY", "YOUR_API_KEY") + headers = { + "Authorization": f"Bearer {API_KEY}", + "Content-Type": "application/json", + } + +# We will unzip all the zip files +# To HF HOME cache dir +# And load it here +HF_HOME = os.environ["HF_HOME"] +cache_dir = config["dataset_kwargs"]["cache_dir"] +cache_dir = os.path.join(HF_HOME, cache_dir) + +eval_logger = logging.getLogger("lmms-eval") + + +# Pass in video path here +# Can only work correctly with video llm +def vitatecs_doc_to_visual(doc): + video_path = os.path.join(cache_dir, doc["src_dataset"], doc["video_name"]) + if os.path.exists(video_path): + video_path = video_path + else: + sys.exit(f"video path:{video_path} does not exist, please check") + return [video_path] + + +# This is the place where you format your question +def vitatecs_doc_to_text(doc, model_specific_prompt_kwargs=None): + if model_specific_prompt_kwargs is None: + model_specific_prompt_kwargs = {} + pre_prompt = "" + post_prompt = "" + if "pre_prompt" in model_specific_prompt_kwargs: + pre_prompt = model_specific_prompt_kwargs["pre_prompt"] + if "post_prompt" in model_specific_prompt_kwargs: + post_prompt = model_specific_prompt_kwargs["post_prompt"] + + question, _, _ = format_question_and_answer(doc) + return f"{pre_prompt}{question}{post_prompt}" + + +def process_option_for_question(sent): + if not sent.endswith("."): + sent += "." 
+ return sent.capitalize() + + +def process_option_for_matching(sent): + if sent.endswith("."): + sent = sent[:-1] + return sent.lower() + + +def format_question_and_answer(doc): + seed = sum(ord(c) for c in doc["caption"] + doc["counterfactual"]) % 100 + random.seed(seed) + if random.random() > 0.5: + option_a = process_option_for_question(doc["caption"]) + option_b = process_option_for_question(doc["counterfactual"]) + answer = "(A) " + option_a + else: + option_a = process_option_for_question(doc["counterfactual"]) + option_b = process_option_for_question(doc["caption"]) + answer = "(B) " + option_b + options = [process_option_for_matching(doc["caption"]), process_option_for_matching(doc["counterfactual"])] + + question = f"Which of the following best describes the content of the video: \n(A) {option_a} \n(B) {option_b}" + return question, answer, options + + +def vitatecs_doc_to_answer(doc): + _, answer, _ = format_question_and_answer(doc) + return answer + + +# Process result +def vitatecs_process_results(doc, result): + pred = result[0] + rating = 0 + match_success = True + chatgpt_response = None + question, answer, options = format_question_and_answer(doc) + + # Some hand-crafted matching rules + if options[0] in pred.lower() and options[1] not in pred.lower(): + rating = 1 + elif options[1] in pred.lower() and options[0] not in pred.lower(): + rating = 0 + elif pred in ["A", "B"]: + rating = 1 if pred == answer[1] else 0 + elif any(pred.startswith(prefix) for prefix in ["A.", "B."]): + rating = 1 if pred.split(".")[0] == answer[1] else 0 + elif any(pred.startswith(prefix) for prefix in ["A)", "B)"]): + rating = 1 if pred.split(")")[0] == answer[1] else 0 + elif any(pred.startswith(prefix) for prefix in ["(A)", "(B)"]): + rating = 1 if pred.split(")")[1] == answer[1] else 0 + else: + # Fail to match answer in the video-llm response. Use ChatGPT to evaluate. + match_success = False + + base_prompt = """You will receive a caption matching question, the ground-truth answer and the prediction from a question answering (QA) model. Your task is to determine whether QA model prediction is correct, based on the question and ground-truth answer. If the prediction is correct, respond "Correct". If the prediction is incorrect, respond "Incorrect". """ + prompt = f"""{base_prompt}\n\nCaption Matching Question: {question}\n\nGround-Truth Answer: {answer}\n\nModel Prediction: {pred}""" + chatgpt_response, rating = get_eval_result(prompt) + + if not match_success: + return { + "accuracy": { + "src_dataset": doc["src_dataset"], + "video_id": doc["video_name"], + "question": question, + "gt-answer": answer, + "video-llm-prediction": pred, + "match_success": match_success, + "rating": rating, + # "chatgpt_prompt": prompt, + "chatgpt_response": chatgpt_response, + "aspect": doc["aspect"], + }, + } + else: + return { + "accuracy": { + "src_dataset": doc["src_dataset"], + "video_id": doc["video_name"], + "question": question, + "gt-answer": answer, + "video-llm-prediction": pred, + "match_success": match_success, + "rating": rating, + "aspect": doc["aspect"], + }, + } + + +# utils function for gpt_evaluation when rule-based matching is unsuccessful +def get_eval_result(prompt, maxtry=10, sys_prompt=None): + llm_output = None + while True: + try: + llm_output = get_llm_output(prompt, sys_prompt) + rating = llm_output_to_rating(llm_output) + return llm_output, rating + except: + if maxtry <= 0: + return llm_output, 0 + maxtry -= 1 + print(f"Not success! 
{maxtry} retries remaining...") + time.sleep(random.uniform(1, 2)) + + +# utils function for gpt evaluation +def get_llm_output(prompt, sys_prompt, max_tokens=128): + if sys_prompt is None: + sys_prompt = "You are an AI assistant for question answering." + data = {"max_tokens": max_tokens, "model": "gpt-3.5-turbo-1106", "temperature": 1.0, "top_p": 1, "presence_penalty": 1, "messages": [{"role": "system", "content": sys_prompt}, {"role": "user", "content": prompt}]} + response = requests.post(API_URL, headers=headers, data=json.dumps(data).encode("utf-8")) + result = response.content.decode("utf-8") + dict_result = json.loads(result) + llm_output = dict_result["choices"][0]["message"]["content"].strip() + return llm_output + + +# utils function that converts gpt evaluation into rating +def llm_output_to_rating(llm_output): + assert "Correct" in llm_output or "Incorrect" in llm_output + if llm_output.startswith("Correct"): + rating = 1 + elif llm_output.startswith("Incorrect"): + rating = 0 + elif ("Correct" in llm_output) and ("Incorrect" not in llm_output): + rating = 1 + elif "Incorrect" in llm_output: + rating = 0 + return rating + + +# Factory into different aggregate +def vitatecs_aggregate_rating(results, args): + yes_count = 0 + + # results is a list of dict + for answer_dict in results: + if answer_dict["rating"] == 1: + yes_count += 1 + + accuracy = yes_count / len(results) + + return accuracy * 100 diff --git a/lmms_eval/tasks/vitatecs/vitatecs_compositionality.yaml b/lmms_eval/tasks/vitatecs/vitatecs_compositionality.yaml new file mode 100644 index 000000000..eb73ef00d --- /dev/null +++ b/lmms_eval/tasks/vitatecs/vitatecs_compositionality.yaml @@ -0,0 +1,13 @@ +dataset_name: "Compositionality" +task: "vitatecs_compositionality" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.vitatecs_doc_to_visual +doc_to_text: !function utils.vitatecs_doc_to_text +doc_to_target: !function utils.vitatecs_doc_to_answer +process_results: !function utils.vitatecs_process_results +metric_list: + - metric: accuracy + aggregation: !function utils.vitatecs_aggregate_rating + higher_is_better: true +include: _default_template_yaml diff --git a/lmms_eval/tasks/vitatecs/vitatecs_direction.yaml b/lmms_eval/tasks/vitatecs/vitatecs_direction.yaml new file mode 100644 index 000000000..8c4b5b60b --- /dev/null +++ b/lmms_eval/tasks/vitatecs/vitatecs_direction.yaml @@ -0,0 +1,13 @@ +dataset_name: "Direction" +task: "vitatecs_direction" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.vitatecs_doc_to_visual +doc_to_text: !function utils.vitatecs_doc_to_text +doc_to_target: !function utils.vitatecs_doc_to_answer +process_results: !function utils.vitatecs_process_results +metric_list: + - metric: accuracy + aggregation: !function utils.vitatecs_aggregate_rating + higher_is_better: true +include: _default_template_yaml diff --git a/lmms_eval/tasks/vitatecs/vitatecs_intensity.yaml b/lmms_eval/tasks/vitatecs/vitatecs_intensity.yaml new file mode 100644 index 000000000..a12a4dea2 --- /dev/null +++ b/lmms_eval/tasks/vitatecs/vitatecs_intensity.yaml @@ -0,0 +1,13 @@ +dataset_name: "Intensity" +task: "vitatecs_intensity" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.vitatecs_doc_to_visual +doc_to_text: !function utils.vitatecs_doc_to_text +doc_to_target: !function utils.vitatecs_doc_to_answer +process_results: !function utils.vitatecs_process_results +metric_list: + - metric: accuracy + aggregation: !function 
utils.vitatecs_aggregate_rating + higher_is_better: true +include: _default_template_yaml diff --git a/lmms_eval/tasks/vitatecs/vitatecs_localization.yaml b/lmms_eval/tasks/vitatecs/vitatecs_localization.yaml new file mode 100644 index 000000000..633fae76e --- /dev/null +++ b/lmms_eval/tasks/vitatecs/vitatecs_localization.yaml @@ -0,0 +1,13 @@ +dataset_name: "Localization" +task: "vitatecs_localization" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.vitatecs_doc_to_visual +doc_to_text: !function utils.vitatecs_doc_to_text +doc_to_target: !function utils.vitatecs_doc_to_answer +process_results: !function utils.vitatecs_process_results +metric_list: + - metric: accuracy + aggregation: !function utils.vitatecs_aggregate_rating + higher_is_better: true +include: _default_template_yaml diff --git a/lmms_eval/tasks/vitatecs/vitatecs_sequence.yaml b/lmms_eval/tasks/vitatecs/vitatecs_sequence.yaml new file mode 100644 index 000000000..6ad434917 --- /dev/null +++ b/lmms_eval/tasks/vitatecs/vitatecs_sequence.yaml @@ -0,0 +1,13 @@ +dataset_name: "Sequence" +task: "vitatecs_sequence" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.vitatecs_doc_to_visual +doc_to_text: !function utils.vitatecs_doc_to_text +doc_to_target: !function utils.vitatecs_doc_to_answer +process_results: !function utils.vitatecs_process_results +metric_list: + - metric: accuracy + aggregation: !function utils.vitatecs_aggregate_rating + higher_is_better: true +include: _default_template_yaml diff --git a/lmms_eval/tasks/vitatecs/vitatecs_type.yaml b/lmms_eval/tasks/vitatecs/vitatecs_type.yaml new file mode 100644 index 000000000..7034a45c5 --- /dev/null +++ b/lmms_eval/tasks/vitatecs/vitatecs_type.yaml @@ -0,0 +1,13 @@ +dataset_name: "Type" +task: "vitatecs_type" +test_split: test +output_type: generate_until +doc_to_visual: !function utils.vitatecs_doc_to_visual +doc_to_text: !function utils.vitatecs_doc_to_text +doc_to_target: !function utils.vitatecs_doc_to_answer +process_results: !function utils.vitatecs_process_results +metric_list: + - metric: accuracy + aggregation: !function utils.vitatecs_aggregate_rating + higher_is_better: true +include: _default_template_yaml diff --git a/pyproject.toml b/pyproject.toml index e01140ea8..cb9f04e6a 100755 --- a/pyproject.toml +++ b/pyproject.toml @@ -7,7 +7,7 @@ build-backend = "setuptools.build_meta" [project] name = "lmms_eval" -version = "0.2.0.post1" +version = "0.2.1" authors = [ { name = "LMMMs-Lab Evaluation Team", email = "lmms_eval@outlook.com" }, ] @@ -74,6 +74,8 @@ dependencies = [ "spacy", "anls", "rouge", + "capture_metric", + "protobuf==3.20", ] [project.optional-dependencies] diff --git a/tools/live_bench/example.ipynb b/tools/live_bench/example.ipynb deleted file mode 100644 index 92b9df73e..000000000 --- a/tools/live_bench/example.ipynb +++ /dev/null @@ -1,481 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "/data/pufanyi/anaconda3/anacondabin/envs/live_bench/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. 
See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", - " from .autonotebook import tqdm as notebook_tqdm\n" - ] - } - ], - "source": [ - "from random import sample\n", - "\n", - "from live_bench.websites.website import DefaultWebsite\n", - "from live_bench.websites import load_websites\n", - "\n", - "# website = load_websites()\n", - "# website = sample(website, 1)\n", - "# website[0].url\n", - "website = [DefaultWebsite(url=\"https://www.asahi.com/\")] # , DefaultWebsite(url=\"https://www.bbc.com/sport\"), DefaultWebsite(url=\"https://www.bbc.com/business\")]" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "could not detect version_main.therefore, we are assuming it is chrome 108 or higher\n" - ] - } - ], - "source": [ - "from live_bench.data_generator.utils.extract_infomation import InfomationExtractor\n", - "from live_bench.screen_shoter import get_shoter\n", - "from live_bench.driver import load_driver\n", - "\n", - "shoter = get_shoter(\"single_screen\")\n", - "driver = load_driver()\n", - "w = shoter(driver, website[0])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - }, - { - "cell_type": "code", - "execution_count": 16, - "metadata": {}, - "outputs": [], - "source": [ - "extractor = InfomationExtractor()" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "metadata": {}, - "outputs": [], - "source": [ - "response = extractor.extract_infomation(w)" - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "**Here is something you can take as reference.**\n", - "\n", - "## Text Extracted in the HTML\n", - "\n", - "Below is the text extracted from the website for you to take reference:\n", - "BBC Home - Breaking News, World News, US News, Sports, Business, Innovation, Climate, Culture, Travel, Video & Audio\n", - "\n", - "What son's conviction means for President Biden\n", - "The guilty verdict is unlikely to change voters' minds, but it will be a personal blow for the US president.\n", - "1 hr ago | US & Canada\n", - "\n", - "Hunter Biden found guilty on all counts in gun trial\n", - "The US president's son is found guilty of lying about his drug use when buying a handgun in 2018.\n", - "- The struggles and scandals of Hunter Biden\n", - "\n", - "Blinken says fate of ceasefire plan down to Hamas\n", - "The US diplomat says Israel's prime minister \"reaffirmed his commitment\" to a Gaza ceasefire plan.\n", - "2 hrs ago | Middle East\n", - "\n", - "Ukraine 'hits missile launch sites in Russia'\n", - "The mayor of the city of Kharkiv says the situation there is \"calmer\" as Russia has been shelling less.\n", - "1 hr ago | Europe\n", - "\n", - "Four US college instructors stabbed in public park in China\n", - "The instructors were on a daytime visit to a public park when they were attacked, Cornell College says.\n", - "4 hrs ago | Asia\n", - "\n", - "Animal-rights protesters attack portrait of King\n", - "Animal-rights protesters attack a portrait of King Charles III, in a London gallery.\n", - "2 hrs ago | UK\n", - "\n", - "Warning shots from South as NK soldiers cross border\n", - "The incident at the DMZ comes at a time of heightened tensions between the two Koreas.\n", - "11 hrs ago | Asia\n", - "\n", - "## Image Features\n", - "\n", - "From the screenshot of the news website you 
provided, here is the analysis based on the images displayed alongside the corresponding news headings and text:\n", - "\n", - "1. **Image Associated with Hunter Biden's Conviction**:\n", - " - **Description**: The image depicts Hunter Biden escorted by, possibly, security personnel or aides. He seems to be dressed in a formal dark suit and appears to be descending stairs or possibly exiting a vehicle. He carries an air of seriousness, likely reflective of the gravity of his legal situation.\n", - " - **Relevance**: This image effectively captures the serious, personal, and public nature of the judicial proceedings against the President's son, making the situation more relatable to the audience. It directly ties to the news confirming Hunter Biden’s guilty verdict in a gun trial related to lying about drug use.\n", - "\n", - "2. **Image Accompanying the Article on Biden's Supporters**:\n", - " - **Description**: The accompanying image shows a group of enthusiastic supporters holding signs, with one prominently reading \"Say Yes to Biden,\" suggesting a political rally or campaign event. The participants display expressions of support and enthusiasm.\n", - " - **Relevance**: This image provides a visual contrast to the first, highlighting the ongoing support for the Biden family or campaign despite the legal issues faced by Hunter Biden. It serves to illustrate the political backdrop and public opinion dynamic mentioned in the news headlines.\n", - "\n", - "These images serve different purposes:\n", - "- The first image personalizes the news story, putting a face to the name in a high-stakes legal controversy. It underlines the personal and public challenges faced by the Biden family due to the conviction.\n", - "- The second image contextualizes the broader political support for the Biden family, suggesting that despite personal legal woes, there is a segment of the populace fervently supporting them.\n", - "\n", - "The clear connection between the images and the corresponding textual content on the news site helps readers visualize and better understand the unfolding events, enhancing the impact of the news storytelling.\n", - "\n", - "## Interesting Points\n", - "\n", - "The BBC news website, as demonstrated through the detailed examination of its content, offers a dynamic and visually engaging approach to news presentation. Here’s a deeper analysis of how it distinguishes itself:\n", - "\n", - "1. **Comprehensive and Geographically Diverse News Coverage**:\n", - " - The content spans a wide range of geographical locations including the US, Middle East, Europe, Asia, and the UK. Each news piece targets a major recent event, reflecting the website’s commitment to global news coverage. This expansive geographic focus ensures that readers have access to a broad spectrum of significant, impactful news.\n", - "\n", - "2. **Varied Content Themes**: \n", - " - The news themes are diverse, covering political, social, and cultural issues. From the legal troubles of a high-profile political figure’s son in the US to a ceasefire plan in the Middle East and violent incidents in Asia, the website covers a wide array of topics. This variety meets different readers' interests and keeps the content engaging.\n", - "\n", - "3. 
**Immediate Relevance**:\n", - " - The website's content is timely, as indicated by timestamps such as “1 hr ago” and “2 hrs ago.” This reflects the website’s commitment to providing the latest news, which is crucial for maintaining reader engagement and trust in a digital age where current information is highly valued.\n", - "\n", - "4. **Stylistic and Engaging Visual Design**:\n", - " - The use of compelling images alongside the news articles plays a critical role in storytelling. For instance, the image of Hunter Biden descending steps with a serious demeanor visually reinforces the gravity of the news about his conviction. \n", - " - Meanwhile, the image of supporters holding \"Say Yes to Biden\" signs juxtaposed with Hunter Biden's legal news offers a visual narrative of continued support amidst political strife, underscoring the complexity and depth of public and personal life in politics.\n", - "\n", - "5. **Interactive and Multimedia Features**:\n", - " - The use of tags such as \"OLIVE\" beside the breaking story of Hunter Biden indicates an interactive or breaking news feature that likely offers real-time updates and extensive coverage. This kind of multimedia integration enhances user interaction and engagement.\n", - "\n", - "In summary, the BBC news website sets itself apart through a combination of up-to-date, visually engaging, and comprehensively covered news items that cater to a global audience with varied interests. The effective use of images not only contextualizes the stories but also adds a layer of emotional and visual impact, making the news relatable and striking.\n" - ] - } - ], - "source": [ - "print(str(response))" - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "This screenshot from a news website contains several images corresponding to different news stories. Let's examine each image and extract relevant details:\n", - "\n", - "1. **Image associated with the Michael Mosley news story:**\n", - " - The image depicts a middle-aged couple, smiling warmly at each other in a sunny, natural outdoor setting. This photo likely portrays Dr. Michael Mosley with his wife, representing a human interest angle to the story about Dr. Mosley's disappearance on the Greek island of Symi. The caption, \"'We will not lose hope,' says Michael Mosley's wife,\" implies a context of hope and determination amidst difficult circumstances.\n", - "\n", - "2. **Image linked to the news about the hostages freed in Gaza:**\n", - " - This image features a group of soldiers with one individual in civilian clothing at the center, being lifted or celebrated, possibly right after a rescue scenario. The setting appears to be a rugged outdoor area, suggestive of a conflict or military zone, which aligns with the news story about hostages being freed in Gaza. The inclusion of armed personnel and a jubilant expression on the civilian's face highlights the relief and successful outcome of a dangerous operation.\n", - "\n", - "3. **Image for the Nova festival hostages news:**\n", - " - This image depicts a motorboat on clear water under bright skies, possibly implying the geographic setting related to Michael Mosley’s disappearance near the Greek island of Symi. 
The serene environment contrasts starkly with the concerning news of his disappearance during what might have been a routine outing or travel.\n", - "\n", - "These images serve as visual supplements to the written content, providing readers with a clearer, more immediate understanding of the stories. They help bridge the emotional and contextual gaps that pure text might leave, allowing readers to engage more deeply with the news events. Each image is carefully selected to evoke specific sentiments and to provide visual context to the news headlines and summaries.\n" - ] - } - ], - "source": [ - "print(response[\"features\"])" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "from live_bench import LiveBench" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": {}, - "outputs": [], - "source": [ - "dataset = LiveBench(force_clear=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "'2024-06'" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "dataset.name" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "could not detect version_main.therefore, we are assuming it is chrome 108 or higher\n", - "Capturing websites: 0%| | 0/1 [00:00 str: - subtask = subtask.strip().lower() - for valid_subtask in SUBTASKS: - if valid_subtask.lower() in subtask.lower(): - return valid_subtask - return "Unknown Subtask" - - def set_subtask(self, subtask: str): - """ - Set the subtask for the QAData instance after parsing it. - - Args: - subtask (str): The subtask string to be set. 
- """ - self.subtask = self.parse_subtask(subtask) - - def to_dict(self): - return {"question": self.question, "answer": self.answer} - - -class QAGenerator(ABC): - def __init__(self, prompt_file: str = os.path.join(os.path.dirname(__file__), "prompt.md")): - self.prompt_file = prompt_file - self.prompt = self._load_prompt() - - def _load_prompt(self): - with open(self.prompt_file, "r") as f: - return f.read() - - def __call__(self, images: ScreenImage, *args, **kwargs): - return self.generate(images, *args, **kwargs) - - def generate(self, images: ScreenImage, *, test=False, infomation=None, **kwargs) -> Response: - if test: - return Response(success=True, content="This is a test response.", full_log={}) - return self._generate(images, infomation=infomation, test=test, **kwargs) - - def check(self, images: ScreenImage, question, answer, criteria, subtask, *, infomation=None, test=False, **kwargs) -> Response: - if test: - return Response(success=True, content="This is a test response.", full_log={}) - return self._check(images, question, answer, criteria, subtask, infomation=infomation, **kwargs) - - @abstractmethod - def _generate(self, images: ScreenImage, **kwargs) -> Response: - raise NotImplementedError("_generate not implemented") - - @abstractmethod - def _check(self, images: ScreenImage, question, answer, criteria, subtask, **kwargs) -> Response: - raise NotImplementedError("_check not implemented") - - def format_response(self, response: Response) -> QAData: - if response.success: - qa_data = self._format_response(response) - if qa_data is None: - return [] - else: - return qa_data - else: - return [] - - @abstractmethod - def _format_response(self, response: Response) -> str: - raise NotImplementedError("format_response not implemented") - - @abstractmethod - def format_checked_response(self, response: Response) -> QAData: - raise NotImplementedError("format_checked_response not implemented") - - def get_name(self) -> str: - raise NotImplementedError("get_name not implemented") - - -class GeneratorRegistry: - def __init__(self): - self.generators = {} - - def register_generator(self, name): - def decorator(cls): - self.generators[name] = cls - cls.get_name = lambda self: name - return cls - - return decorator - - def get_generator(self, name) -> QAGenerator: - return self.generators[name] - - def get_random_generator(self) -> QAGenerator: - return random.choice(list(self.generators.values())) - - -generator_registry = GeneratorRegistry() - - -def register_generator(name): - return generator_registry.register_generator(name) - - -def get_generator(name, *args, **kwargs) -> QAGenerator: - return generator_registry.get_generator(name)(*args, **kwargs) - - -def get_random_generator(*args, **kwargs) -> QAGenerator: - return generator_registry.get_random_generator()(*args, **kwargs) - - -@register_generator("gpt4v") -class GPT4Generator(QAGenerator): - def __init__( - self, - prompt_file: str = os.path.join(os.path.dirname(__file__), "prompt.md"), - model="gpt-4o", - example_path=os.path.join(os.path.dirname(__file__), "example"), - check_prompt=os.path.join(os.path.dirname(__file__), "check_prompt.md"), - ): - super().__init__(prompt_file) - API_KEY = os.getenv("OPENAI_API_KEY") - if not API_KEY: - raise ValueError("OPENAI_API_KEY environment variable not set.") - self.api_key = API_KEY - self.client = openai.OpenAI(api_key=self.api_key) - self.model = model - if os.path.exists(example_path): - self.example_path = example_path - else: - self.example_path = None - if 
os.path.exists(check_prompt): - with open(check_prompt, "r") as f: - self.check_prompt = f.read() - else: - self.check_prompt = check_prompt - - def format_messages(self, images: List[Image.Image], example_image: Image.Image, example_output: str, infomation: ImageInfomation): - example = [ - { - "type": "text", - "text": "Here are few examples about the task and the expected output format. You can take these as examples to generate your own questions.", - }, - format_gpt4v_images(example_image), - { - "type": "text", - "text": example_output, - }, - ] - content = example + [format_gpt4v_images(image) for image in images] - if infomation: - content.append({"type": "text", "text": str(infomation)}) - content.append( - { - "type": "text", - "text": "Please generate high-quality questions focusing on the information displayed within this webpage. Your response should be in the format of the examples provided above and in JSON format.", - }, - ) - messages = [ - { - "role": "system", - "content": self.prompt, - }, - { - "role": "user", - "content": content, - }, - ] - return messages - - def _generate(self, images: ScreenImage, *, max_tokens=4096, max_try_times=5, infomation=None, **kwargs): - if self.example_path: - example_image_path = os.path.join(self.example_path, "example_website.png") - example_output_path = os.path.join(self.example_path, "example_output.json") - example_image = Image.open(example_image_path) - with open(example_output_path, "r") as f: - example_output = json.load(f) - example_output = json.dumps(example_output, indent=4) - - messages = self.format_messages(images.images, example_image, example_output, infomation) - - return gpt4v_generate_response(client=self.client, model=self.model, messages=messages, max_tokens=max_tokens, max_try_times=max_try_times, json_format=True, **kwargs) - - def get_check_prompt(self, question: str, answer: str, criteria, subtask, images: List[Image.Image], infomation: ImageInfomation = None): - messages = [ - { - "role": "system", - "content": self.check_prompt, - } - ] - content = [] - for img in images: - content.append(format_gpt4v_images(img)) - content.append( - { - "type": "text", - "text": f"Question: {question}\nQuestioner's Answer: {answer}\nCriteria: {criteria}\nSubtask: {subtask}", - }, - ) - if infomation: - content.append( - { - "type": "text", - "text": str(infomation), - }, - ) - content.append( - { - "type": "text", - "text": "Please rephrase or rewrite the high-quality question focusing on the information displayed within this webpage. 
Your response should be in the format of the examples provided above and in JSON format.", - }, - ) - messages.append( - { - "role": "user", - "content": content, - } - ) - return messages - - def _check(self, images: ScreenImage, question, answer, criteria, subtask, *, max_tokens=4096, max_try_times=5, **kwargs): - messages = self.get_check_prompt(question, answer, criteria, subtask, images.images) - return gpt4v_generate_response(client=self.client, model=self.model, messages=messages, max_tokens=max_tokens, max_try_times=max_try_times, json_format=True, **kwargs) - - def format_checked_response(self, response: Response): - data = json.loads(response.content) - question = data.get("question", None) - answer = data.get("answer", None) - criteria = data.get("criteria", None) - subtask = data.get("subtask", None) - return QAData(question=question, answer=answer, criteria=criteria, subtask=subtask) - - def _format_response(self, response: Response) -> List[QAData]: - try: - qa_data = [] - content = json.loads(response.content) - for subtask, messages in content.items(): - subtask = subtask.lower() - for message in messages: - message_lower = {k.lower(): v for k, v in message.items()} - try: - question = message_lower["question"] - answer = message_lower["answer"] - criteria = message_lower["criteria"] - qa_data.append(QAData(question=question, answer=answer, criteria=criteria, subtask=subtask)) - except KeyError as e: - logger.error(f"Failed to parse response: {message}") - logger.error(f"Error: {e}") - return qa_data - except Exception as e: - logger.error(f"Failed to format response: {e}") - return [] - - -@register_generator("gemini") -class GeminiGenerator(QAGenerator): - def __init__( - self, - prompt_file: str = os.path.join(os.path.dirname(__file__), "prompt.md"), - model="gemini-1.5-pro-latest", - example_path=os.path.join(os.path.dirname(__file__), "example"), - check_prompt=os.path.join(os.path.dirname(__file__), "check_prompt.md"), - ): - super().__init__(prompt_file) - GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY") - if not GOOGLE_API_KEY: - raise ValueError("GOOGLE_API_KEY environment variable not set.") - genai.configure(api_key=GOOGLE_API_KEY) - - self.api_key = GOOGLE_API_KEY - self.model = model - self.client = genai.GenerativeModel(model) - if os.path.exists(example_path): - self.example_path = example_path - else: - self.example_path = None - if os.path.exists(check_prompt): - with open(check_prompt, "r") as f: - self.check_prompt = f.read() - else: - self.check_prompt = check_prompt - - def format_messages(self, images: List[Image.Image], example_image: Image.Image, example_output: str, infomation: ImageInfomation = None): - content = [self.prompt, "\n", "Example Image:", example_image, "\n", "Example Output:", example_output] - content.extend(images) - content.append(str(infomation)) - content.append("Please generate high-quality questions focusing on the information displayed within this webpage. 
Your response should be in the format of the examples provided above and in JSON format.") - return content - - def _generate(self, images: ScreenImage, *, max_tokens=4096, max_try_times=5, infomation: ImageInfomation = None, **kwargs): - if self.example_path: - example_image_path = os.path.join(self.example_path, "example_website.png") - example_output_path = os.path.join(self.example_path, "example_output.json") - example_image = Image.open(example_image_path) - with open(example_output_path, "r") as f: - # example_output = f.read() - example_output = json.load(f) - example_output = json.dumps(example_output, indent=4) - - messages = self.format_messages(images.images, example_image, example_output, infomation) - - return gemini_generate_response(self.client, messages, max_tokens, max_try_times, **kwargs) - - def get_check_prompt(self, question: str, answer: str, criteria, subtask, images: List[Image.Image], infomation: ImageInfomation = None): - content = [self.check_prompt] + images - content.append(f"Question: {question}\nQuestioner's Answer: {answer}\nCriteria: {criteria}, Subtask: {subtask}") - content.append("Your response should be strictly in the below format:\n\nQuestion: \nAnswer: \nCriteria: \nSubtask: ") - if infomation: - content.append(str(infomation)) - return content - - def _check(self, images: ScreenImage, question, answer, criteria, subtask, *, max_tokens=4096, max_try_times=5, infomation: ImageInfomation = None, **kwargs): - messages = self.get_check_prompt(question, answer, criteria, subtask, images.images, infomation) - return gemini_generate_response(self.client, messages, max_tokens, max_try_times, **kwargs) - - def format_checked_response(self, response: Response): - # Extract the question, answer, criteria, and subtask from the response content - question_match = re.search(r"question:\s*(.*?)\nAnswer:", response.content, re.IGNORECASE | re.DOTALL) - answer_match = re.search(r"answer:\s*(.*?)\nCriteria", response.content, re.IGNORECASE | re.DOTALL) - criteria_match = re.search(r"criteria:\s*(.*?)\n(Subtask:|$)", response.content, re.IGNORECASE | re.DOTALL) - subtask_match = re.search(r"subtask:\s*(.*)", response.content, re.IGNORECASE) - - question = answer = criteria = subtask = None - - if question_match: - # Extract the matched groups - question = question_match.group(1).strip() - if answer_match: - answer = answer_match.group(1).strip() - if criteria_match: - criteria = criteria_match.group(1).strip() - if subtask_match: - subtask = subtask_match.group(1).strip() - - return QAData(question=question, answer=answer, criteria=criteria, subtask=subtask) - - def _format_response(self, response: Response) -> List[QAData]: - try: - qa_data = [] - content = json.loads(response.content) - for subtask, message in content.items(): - subtask = subtask.lower() - message_lower = {k.lower(): v for k, v in message.items()} - try: - question = message_lower["question"] - answer = message_lower["answer"] - qa_data.append(QAData(question=question, answer=answer, subtask=subtask)) - except KeyError as e: - logger.error(f"Failed to parse response: {message}") - logger.error(f"Error: {e}") - return qa_data - except Exception as e: - logger.error(f"Failed to format response: {e}") - return [] - - -@register_generator("claude") -class ClaudeGenerator(QAGenerator): - def __init__( - self, - prompt_file: str = os.path.join(os.path.dirname(__file__), "prompt.md"), - model="claude-3-5-sonnet-20240620", - example_path=os.path.join(os.path.dirname(__file__), "example"), - 
check_prompt=os.path.join(os.path.dirname(__file__), "check_prompt.md"), - ): - super().__init__(prompt_file) - API_KEY = os.getenv("ANTHROPIC_API_KEY") - if not API_KEY: - raise ValueError("ANTHROPIC_API_KEY environment variable not set.") - self.api_key = API_KEY - self.client = anthropic.Anthropic(api_key=self.api_key) - self.model = model - if os.path.exists(example_path): - self.example_path = example_path - else: - self.example_path = None - if os.path.exists(check_prompt): - with open(check_prompt, "r") as f: - self.check_prompt = f.read() - else: - self.check_prompt = check_prompt - - def format_messages(self, images: List[Image.Image], example_image: Image.Image, example_output: str, infomation: ImageInfomation): - example = [ - { - "type": "text", - "text": "Here are few examples about the task and the expected output format. You can take these as examples to generate your own questions.", - }, - format_claude_images(example_image), - { - "type": "text", - "text": example_output, - }, - ] - content = example + [format_claude_images(image) for image in images] - if infomation: - content.append({"type": "text", "text": str(infomation)}) - content.append( - { - "type": "text", - "text": "Please generate high-quality questions focusing on the information displayed within this webpage. Ensure your response adheres to the examples provided above and is structured in JSON format, incorporating regular expressions to validate the format.", - }, - ) - messages = [ - { - "role": "user", - "content": content, - }, - ] - return messages - - def _generate(self, images: ScreenImage, *, max_tokens=4096, max_try_times=5, infomation=None, **kwargs): - if self.example_path: - example_image_path = os.path.join(self.example_path, "example_website.png") - example_output_path = os.path.join(self.example_path, "example_output.json") - example_image = Image.open(example_image_path) - with open(example_output_path, "r") as f: - # example_output = f.read() - example_output = json.load(f) - example_output = json.dumps(example_output, indent=4) - - messages = self.format_messages(images.images, example_image, example_output, infomation) - - return claude_generate_response(client=self.client, model=self.model, messages=messages, max_tokens=max_tokens, max_try_times=max_try_times, json_format=True, system=self.prompt, **kwargs) - - def get_check_prompt(self, question: str, answer: str, criteria, subtask, images: List[Image.Image], infomation: ImageInfomation = None): - messages = [ - { - "role": "system", - "content": self.check_prompt, - } - ] - content = [] - for img in images: - content.append(format_claude_images(img)) - content.append( - { - "type": "text", - "text": f"Question: {question}\nQuestioner's Answer: {answer}\nCriteria: {criteria}\nSubtask: {subtask}", - }, - ) - if infomation: - content.append( - { - "type": "text", - "text": str(infomation), - }, - ) - content.append( - { - "type": "text", - "text": "Please rephrase or rewrite the high-quality question focusing on the information displayed within this webpage. 
Your response should be in the format of the examples provided above and in JSON format.", - }, - ) - messages.append( - { - "role": "user", - "content": content, - } - ) - return messages - - def _check(self, images: ScreenImage, question, answer, criteria, subtask, *, max_tokens=4096, max_try_times=5, **kwargs): - messages = self.get_check_prompt(question, answer, criteria, subtask, images.images) - return claude_generate_response(client=self.client, model=self.model, messages=messages, max_tokens=max_tokens, max_try_times=max_try_times, json_format=True, **kwargs) - - def format_checked_response(self, response: Response): - data = json.loads(response.content) - question = data.get("question", None) - answer = data.get("answer", None) - criteria = data.get("criteria", None) - subtask = data.get("subtask", None) - return QAData(question=question, answer=answer, criteria=criteria, subtask=subtask) - - def _format_response(self, response: Response) -> List[QAData]: - try: - qa_data = [] - content = json.loads(response.content) - for subtask, messages in content.items(): - subtask = subtask.lower() - for message in messages: - message_lower = {k.lower(): v for k, v in message.items()} - try: - question = message_lower["question"] - answer = message_lower["answer"] - criteria = message_lower["criteria"] - qa_data.append(QAData(question=question, answer=answer, criteria=criteria, subtask=subtask)) - except KeyError as e: - logger.error(f"Failed to parse response: {message}") - logger.error(f"Error: {e}") - return qa_data - except Exception as e: - logger.error(f"Failed to format response: {e}") - return [] diff --git a/tools/live_bench/live_bench/data_generator/response.py b/tools/live_bench/live_bench/data_generator/response.py deleted file mode 100644 index 9eed882be..000000000 --- a/tools/live_bench/live_bench/data_generator/response.py +++ /dev/null @@ -1,12 +0,0 @@ -class Response(object): - def __init__(self, success: bool, content: str, full_log: dict): - self.success = success - self.content = content - self.full_log = full_log - - def to_dict(self): - return { - "success": self.success, - "content": self.content, - "full_log": self.full_log, - } diff --git a/tools/live_bench/live_bench/data_generator/score_getter.py b/tools/live_bench/live_bench/data_generator/score_getter.py deleted file mode 100644 index 412989285..000000000 --- a/tools/live_bench/live_bench/data_generator/score_getter.py +++ /dev/null @@ -1,157 +0,0 @@ -import os -import json -import random -import openai -import anthropic -from abc import ABC, abstractmethod -from typing import List -from PIL import Image -from live_bench.screen_shoter import ScreenImage -from live_bench.data_generator.qa_generator import Response -from live_bench.data_generator.utils.gpt4v import format_gpt4v_images, gpt4v_generate_response -from live_bench.data_generator.utils.claude import format_claude_images, claude_generate_response - - -class Score(object): - def __init__(self, score: int, reason: str): - self.score = score - self.reason = reason - - -class ScoreGetter(ABC): - def get_name(self): - return self.name - - @abstractmethod - def get_score(self, question: str, answer: str, images: ScreenImage): - raise NotImplementedError("get_score not implemented") - - def __call__(self, question: str, answer: str, images: ScreenImage, **kwargs): - return self.get_score(question, answer, images, **kwargs) - - -class ScoreGetterRegistry: - def __init__(self): - self.score_getters = {} - - def register_score_getter(self, name): - def 
decorator(cls): - self.score_getters[name] = cls - cls.name = name - return cls - - return decorator - - def get_score_getter(self, name) -> ScoreGetter: - return self.score_getters[name] - - def get_random_score_getter(self) -> ScoreGetter: - return random.choice(list(self.score_getters.values())) - - -generator_registry = ScoreGetterRegistry() - - -def register_score_getter(name): - return generator_registry.register_score_getter(name) - - -def get_score_getter(name, *args, **kwargs) -> ScoreGetter: - return generator_registry.get_score_getter(name)(*args, **kwargs) - - -def get_random_score_getter(*args, **kwargs) -> ScoreGetter: - return generator_registry.get_random_score_getter()(*args, **kwargs) - - -@register_score_getter("gpt4v") -class GPT4VScoreGetter(ScoreGetter): - def __init__(self, prompt: str = os.path.join(os.path.dirname(__file__), "score_prompt.md"), model="gpt-4o", example_path=os.path.join(os.path.dirname(__file__), "example")): - super().__init__() - if os.path.exists(prompt): - with open(prompt, "r") as f: - self.prompt = f.read() - else: - self.prompt = prompt - API_KEY = os.getenv("OPENAI_API_KEY") - if not API_KEY: - raise ValueError("OPENAI_API_KEY environment variable not set.") - self.api_key = API_KEY - self.client = openai.OpenAI(api_key=self.api_key) - self.model = model - if os.path.exists(example_path) and os.path.isfile(os.path.join(example_path, "example_score_input.md")): - with open(os.path.join(example_path, "example_score_input.md"), "r") as f: - self.example = f.read() - else: - self.example = None - - def _format_prompt(self, question: str, answer: str, images: List[Image.Image]): - prompt = [{"role": "system", "content": self.prompt}] - messages = [] - for image in images: - messages.append(format_gpt4v_images(image)) - messages.append({"type": "text", "text": f"Question: {question}\nQuestioner's Answer: {answer}"}) - messages.append({"type": "text", "text": 'You should format you answer into json format like this: {"reason": "some reason", "score": 10}'}) - prompt.append({"role": "user", "content": messages}) - return prompt - - def get_score(self, question: str, answer: str, images: ScreenImage, *, max_tokens=4096, max_try_times=5, **kwargs) -> Score: - prompt = self._format_prompt(question, answer, images) - try: - response = gpt4v_generate_response(client=self.client, model=self.model, messages=prompt, max_tokens=max_tokens, max_try_times=max_try_times, json_format=True, **kwargs) - if response.success: - content = json.loads(response.content) - score = content.get("score", None) - reason = content.get("reason", None) - return Score(score=score, reason=reason) - else: - return Score(score=None, reason=response.content) - except Exception as e: - return Score(score=None, reason=str(e)) - - -@register_score_getter("claude") -class ClaudeScoreGetter(ScoreGetter): - def __init__(self, prompt: str = os.path.join(os.path.dirname(__file__), "score_prompt.md"), model="claude-3-5-sonnet-20240620", example_path=os.path.join(os.path.dirname(__file__), "example")): - super().__init__() - if os.path.exists(prompt): - with open(prompt, "r") as f: - self.prompt = f.read() - else: - self.prompt = prompt - API_KEY = os.getenv("ANTHROPIC_API_KEY") - if not API_KEY: - raise ValueError("ANTHROPIC_API_KEY environment variable not set.") - self.api_key = API_KEY - self.client = anthropic.Anthropic(api_key=self.api_key) - self.model = model - if os.path.exists(example_path) and os.path.isfile(os.path.join(example_path, "example_score_input.md")): - with open(os.path.join(example_path, "example_score_input.md"), "r") as f: - self.example = 
f.read() - else: - self.example = None - - def _format_prompt(self, question: str, answer: str, images: List[Image.Image]): - # prompt = [{"role": "system", "content": self.prompt}] - prompt = [] - messages = [] - for image in images: - messages.append(format_claude_images(image)) - messages.append({"type": "text", "text": f"Question: {question}\nQuestioner's Answer: {answer}"}) - messages.append({"type": "text", "text": 'You should format you answer into JSON format like this: { "reason": "some reason", "score": 10 }'}) - prompt.append({"role": "user", "content": messages}) - return prompt - - def get_score(self, question: str, answer: str, images: ScreenImage, *, max_tokens=4096, max_try_times=5, **kwargs) -> Score: - prompt = self._format_prompt(question, answer, images) - try: - response = claude_generate_response(client=self.client, model=self.model, messages=prompt, system=self.prompt, max_tokens=max_tokens, max_try_times=max_try_times, **kwargs) - if response.success: - content = json.loads(response.content) - score = content.get("score", None) - reason = content.get("reason", None) - return Score(score=score, reason=reason) - else: - return Score(score=None, reason=response.content) - except Exception as e: - return Score(score=None, reason=str(e)) diff --git a/tools/live_bench/live_bench/data_generator/utils/claude.py b/tools/live_bench/live_bench/data_generator/utils/claude.py deleted file mode 100644 index ccd0b381f..000000000 --- a/tools/live_bench/live_bench/data_generator/utils/claude.py +++ /dev/null @@ -1,68 +0,0 @@ -from PIL import Image -import io -import base64 -from live_bench.data_generator.response import Response -import anthropic -import logging -from time import sleep -from typing import Union, List - -logger = logging.getLogger("lmms-eval") - - -def format_claude_images(image: Union[Image.Image, List[Image.Image]]): - if isinstance(image, list): - return [format_claude_images(img) for img in image] - buffered = io.BytesIO() - image.save(buffered, format="PNG") - img_str = base64.b64encode(buffered.getvalue()).decode("utf-8") - return { - "type": "image", - "source": { - "type": "base64", - "media_type": "image/png", - "data": img_str, - }, - } - - -def claude_generate_response(client: anthropic.Anthropic, model, messages, max_tokens: int = 4096, max_try_times=5, system=None, json_format="auto", test=False, **kwargs): - if json_format == "auto": - json_format = False - for message in messages: - if message.get("role") == "user": - contents = message.get("content", []) - if isinstance(contents, str): - if "json" in contents: - json_format = True - break - else: - for content in contents: - if content.get("type", None) == "text" and "json" in content.get("text", ""): - json_format = True - break - - if json_format: - messages.append({"role": "assistant", "content": "{"}) - - def _generate(): - if system: - return client.messages.create(model=model, messages=messages, max_tokens=max_tokens, system=system, **kwargs) - else: - return client.messages.create(model=model, messages=messages, max_tokens=max_tokens, **kwargs) - - for times in range(max_try_times): - try: - response = _generate() - response_str = response.content[0].text - if json_format: - response_str = "{" + response_str - return Response(success=True, content=response_str, full_log={"input": messages, "output": response.to_dict()}) - except Exception as e: - logger.error(f"Failed to generate response: {e}") - if times < max_try_times - 1: - logger.info(f"Retrying... 
({times+1}/{max_try_times})") - sleep(3) - else: - logger.error("Failed to generate response after retrying.") - return Response(success=False, content=str(e), full_log={"input": messages, "output": None}) diff --git a/tools/live_bench/live_bench/data_generator/utils/gemini.py b/tools/live_bench/live_bench/data_generator/utils/gemini.py deleted file mode 100644 index 57b77536c..000000000 --- a/tools/live_bench/live_bench/data_generator/utils/gemini.py +++ /dev/null @@ -1,37 +0,0 @@ -import google.generativeai as genai -from time import sleep -from live_bench.data_generator.response import Response -import logging -from google.generativeai.types import HarmCategory, HarmBlockThreshold - -logger = logging.getLogger("lmms-eval") - - -def gemini_generate_response(client: genai.GenerativeModel, messages, max_tokens: int, max_try_times: int = 5, **kwargs): - generation_config = genai.GenerationConfig(max_output_tokens=max_tokens) - - def _generate(): - return client.generate_content( - messages, - generation_config=generation_config, - safety_settings={ - HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE, - HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE, - HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE, - HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE, - }, - **kwargs, - ) - - for times in range(max_try_times): - try: - response = _generate() - return Response(success=True, content=response.text, full_log={"input": messages, "output": response}) - except Exception as e: - logger.error(f"Failed to generate response: {e}") - if times < max_try_times - 1: - logger.info(f"Retrying... ({times+1}/{max_try_times})") - sleep(3) - else: - logger.error("Failed to generate response after retrying.") - return Response(success=False, content=str(e), full_log={"input": messages, "output": None}) diff --git a/tools/live_bench/live_bench/driver/.gitignore b/tools/live_bench/live_bench/driver/.gitignore deleted file mode 100644 index 0ef18421a..000000000 --- a/tools/live_bench/live_bench/driver/.gitignore +++ /dev/null @@ -1 +0,0 @@ -extensions/ diff --git a/tools/live_bench/live_bench/driver/load_driver.py b/tools/live_bench/live_bench/driver/load_driver.py deleted file mode 100644 index c0ca7528d..000000000 --- a/tools/live_bench/live_bench/driver/load_driver.py +++ /dev/null @@ -1,71 +0,0 @@ -import os -import zipfile -import requests -from selenium import webdriver -from webdriver_manager.chrome import ChromeDriverManager -from webdriver_manager.core.os_manager import ChromeType -from selenium.webdriver.chrome.options import Options - -import undetected_chromedriver as uc - - -def load_driver( - window_size="auto", - headless=True, - driver="undetected_chromedriver", - driver_version=None, - chrome_type="CHROME", - adblock=True, - adblock_version="6.0.2-mv3", - extension_cache_dir=os.path.join(os.path.dirname(__file__), "extensions"), - *, - service=None, - additional_options=None, -): - options = Options() - if service is None: - chrome_type = chrome_type.upper() - if chrome_type == "CHROMIUM": - chrome_type = ChromeType.CHROMIUM - elif chrome_type == "CHROME": - chrome_type = ChromeType.GOOGLE - elif chrome_type == "BRAVE": - chrome_type = ChromeType.BRAVE - service = ChromeDriverManager(driver_version=driver_version, chrome_type=chrome_type).install() - if headless: - options.add_argument("--headless") - if adblock: - try: - adblock_url = 
f"https://code.getadblock.com/releases/adblockchrome-{adblock_version}.zip" - adblock_path = os.path.join(extension_cache_dir, f"adblockchrome-{adblock_version}") - if not os.path.isdir(adblock_path): - os.makedirs(os.path.join(adblock_path, ".."), exist_ok=True) - # Download the adblock zip file - response = requests.get(adblock_url) - with open(f"{adblock_path}.zip", "wb") as file: - file.write(response.content) - # Unzip the downloaded file - with zipfile.ZipFile(f"{adblock_path}.zip", "r") as zip_ref: - zip_ref.extractall(adblock_path) - # Remove the zip file after extraction - os.remove(f"{adblock_path}.zip") - options.add_argument(f"--load-extension={os.path.abspath(adblock_path)}") - except Exception as e: - print(f"Error loading adblock extension: {e}") - if driver == "undetected_chromedriver": - driver = uc.Chrome(headless=headless, options=options, driver_executable_path=service) - if window_size != "auto": - driver.set_window_size(*window_size) - return driver - elif driver == "chrome": - options = Options() - if additional_options is not None: - for option in additional_options: - options.add_argument(option) - service = webdriver.chrome.service.Service(service) - driver = webdriver.Chrome(service=service, options=options) - if window_size != "auto": - driver.set_window_size(*window_size) - return driver - else: - raise ValueError(f"Unknown driver: {driver}") diff --git a/tools/live_bench/live_bench/screen_shoter/screen.py b/tools/live_bench/live_bench/screen_shoter/screen.py deleted file mode 100644 index 5817b8cc6..000000000 --- a/tools/live_bench/live_bench/screen_shoter/screen.py +++ /dev/null @@ -1,30 +0,0 @@ -import io -import base64 - -from PIL import Image -from typing import List, Tuple - -from live_bench.websites import Website - - -def image_to_base64(image: Image.Image) -> str: - buffered = io.BytesIO() - image.save(buffered, format="PNG") - return base64.b64encode(buffered.getvalue()).decode("utf-8") - - -class ScreenImage(object): - def __init__(self, images: List[Image.Image], website: Website, shoter: str, screen_size: Tuple[int, int], capture_datetime: str): - self.images = images - self.website = website - self.shoter = shoter - self.screen_size = screen_size - self.capture_datetime = capture_datetime - - def to_dict(self): - return {"images": self.images, "website": self.website.get_info(), "shoter": self.shoter, "screen_size": self.screen_size, "capture_datetime": self.capture_datetime} - - def to_output_dict(self): - output = self.to_dict() - output["images"] = [image_to_base64(image) for image in self.images] - return output diff --git a/tools/live_bench/live_bench/screen_shoter/screen_shoter.py b/tools/live_bench/live_bench/screen_shoter/screen_shoter.py deleted file mode 100644 index 6069cd26d..000000000 --- a/tools/live_bench/live_bench/screen_shoter/screen_shoter.py +++ /dev/null @@ -1,141 +0,0 @@ -from selenium import webdriver -from PIL import Image -from live_bench.websites import Website -from live_bench.screen_shoter.screen import ScreenImage -from typing import List -from abc import ABC, abstractmethod -from datetime import datetime -from PIL import Image -import os -import io -import logging - -logger = logging.getLogger("lmms-eval") - - -class ScreenShoter(ABC): - def __init__(self, screen_size=(1024, 1024)): - self.screen_size = screen_size - - def capture(self, driver: webdriver.Chrome, website: Website) -> ScreenImage: - if driver is not None: - website.visit(driver) - if self.screen_size != "auto": - 
driver.set_window_size(self.screen_size[0], self.screen_size[1]) - else: - driver.set_window_size(1024, 1024) - page_width = driver.execute_script("return document.body.scrollWidth") - driver.set_window_size(page_width, 1024) - # print("Screen size:", driver.get_window_size()) - images = self.get_screenshot(driver) - return ScreenImage(images, website, self.get_name(), self.screen_size, datetime.now().strftime("%Y-%m-%d %H:%M:%S")) - - def __call__(self, driver: webdriver.Chrome, website: Website) -> List[Image.Image]: - return self.capture(driver, website) - - def get_name(self) -> str: - raise NotImplementedError("get_name not implemented") - - @abstractmethod - def get_screenshot(self, driver: webdriver.Chrome) -> List[Image.Image]: - pass - - -class ScreenShoterRegistry: - def __init__(self): - self.shoters = {} - - def register_shoter(self, name): - def decorator(cls): - self.shoters[name] = cls - cls.get_name = lambda self: name - return cls - - return decorator - - def get_shoter(self, name) -> ScreenShoter: - return self.shoters[name] - - -shoter_registry = ScreenShoterRegistry() - - -def register_shoter(name): - return shoter_registry.register_shoter(name) - - -def get_shoter(name, *args, **kwargs) -> ScreenShoter: - return shoter_registry.get_shoter(name)(*args, **kwargs) - - -@register_shoter("human") -class HumanScreenShoter(ScreenShoter): - def __init__(self, screen_size=None): - super().__init__(screen_size) - - def capture(self, driver: webdriver.Chrome, website: Website) -> ScreenImage: - path = website.get_path() - images = [] - - def get_image(path): - try: - with open(path, "rb") as f: - image_data = f.read() - image = Image.open(io.BytesIO(image_data)) - images.append(image) - except Exception as e: - logger.error(f"Error loading image {path}: {e}") - - if os.path.isdir(path): - for root, dirs, files in os.walk(path): - for file_name in files: - get_image(os.path.join(root, file_name)) - else: - try: - get_image(path) - except Exception as e: - logger.error(f"Error loading image {path}: {e}") - if not images: - raise ValueError(f"No images found in {path}") - return ScreenImage(images, website, self.get_name(), self.screen_size, datetime.now().strftime("%Y-%m-%d %H:%M:%S")) - - def get_screenshot(self, driver: webdriver.Chrome) -> List[Image.Image]: - return [] - - -@register_shoter("single_screen") -class SingleScreenShoter(ScreenShoter): - def __init__(self, screen_size=(1024, 1024)): - super().__init__(screen_size) - - def get_screenshot(self, driver: webdriver.Chrome) -> List[Image.Image]: - screenshot = driver.get_screenshot_as_png() - return [Image.open(io.BytesIO(screenshot))] - - -@register_shoter("rolling_screen") -class RollingScreenShoter(ScreenShoter): - def __init__(self, screen_size=(1024, 1024)): - super().__init__(screen_size) - - def get_screenshot(self, driver: webdriver.Chrome) -> List[Image.Image]: - screenshots = [] - # Scroll to the top of the page before taking the first screenshot - driver.execute_script("window.scrollTo(0, 0)") - # Get the total height of the web page - total_height = driver.execute_script("return document.body.parentNode.scrollHeight") - # Get the viewport height - viewport_height = driver.execute_script("return window.innerHeight") - # Initialize the current scroll position - current_scroll_position = 0 - - # Scroll through the page and take screenshots - while current_scroll_position < total_height: - # Take screenshot and append to the list - screenshot = driver.get_screenshot_as_png() - 
screenshots.append(Image.open(io.BytesIO(screenshot))) - # Scroll down by the viewport height - current_scroll_position += viewport_height - driver.execute_script(f"window.scrollTo(0, {current_scroll_position})") - - return screenshots diff --git a/tools/live_bench/live_bench/websites/load_website.py b/tools/live_bench/live_bench/websites/load_website.py deleted file mode 100644 index 10976db75..000000000 --- a/tools/live_bench/live_bench/websites/load_website.py +++ /dev/null @@ -1,34 +0,0 @@ -import yaml -import os -from random import sample -from live_bench.websites.website import Website, DefaultWebsite, HumanScreenShotWebsite - - -def get_website(website_dict): - if "website_class" not in website_dict: - website_class = DefaultWebsite - else: - website_class = website_dict["website_class"] - url = website_dict["url"] - if "args" in website_dict: - return website_class(url, **website_dict["args"]) - else: - return website_class(url) - - -def load_websites(num_sample: int = -1): - website_list_path = os.path.join(os.path.dirname(__file__), "website_list.yaml") - with open(website_list_path, "r") as f: - website_list = yaml.full_load(f)["websites"] - if num_sample > 0: - website_list = sample(website_list, num_sample) - return [get_website(website_dict) for website_dict in website_list] - - -def load_websites_from_file(file_path): - names = os.listdir(file_path) - websites = [] - for name in names: - path = os.path.join(file_path, name) - websites.append(HumanScreenShotWebsite(path=path, name=name)) - return websites diff --git a/tools/live_bench/live_bench/websites/website.py b/tools/live_bench/live_bench/websites/website.py deleted file mode 100644 index 327bad372..000000000 --- a/tools/live_bench/live_bench/websites/website.py +++ /dev/null @@ -1,62 +0,0 @@ -import time -import os - -from webdriver_manager.core.driver import Driver -from abc import ABC, abstractmethod - - -class Website(ABC): - def __init__(self, url=None, name=None, path=None): - self.url = url - self.name = name - self.path = path - assert self.url is not None or self.path is not None, "Either url or path must be provided" - - def get_path(self): - if self.url: - return self.url - else: - return self.path - - def visit(self, driver: Driver): - self.pre_visit(driver) - driver.get(self.url) - self.post_visit(driver) - - def get_info(self): - info = {} - if self.url: - info["url"] = self.url - if self.name: - info["name"] = self.name - return info - - @abstractmethod - def pre_visit(self, driver: Driver): - raise NotImplementedError("pre_action not implemented") - - @abstractmethod - def post_visit(self, driver: Driver): - raise NotImplementedError("post_action not implemented") - - -class DefaultWebsite(Website): - def __init__(self, url, name=None): - super().__init__(url, name) - - def pre_visit(self, driver: Driver): - pass - - def post_visit(self, driver: Driver): - time.sleep(5) # Wait for 5 seconds to allow adblock to finish - - -class HumanScreenShotWebsite(Website): - def __init__(self, name=None, path=None): - super().__init__(name=name, path=path) - - def pre_visit(self, driver: Driver): - pass - - def post_visit(self, driver: Driver): - pass diff --git a/tools/live_bench/live_bench/websites/website_list.yaml b/tools/live_bench/live_bench/websites/website_list.yaml deleted file mode 100644 index c85605d7d..000000000 --- a/tools/live_bench/live_bench/websites/website_list.yaml +++ /dev/null @@ -1,78 +0,0 @@ -websites: -- url: https://www.bbc.com/ - # can add below line to specify the class to use for this 
website - # website_class: !constructor website.DefaultWebsite - # can add args tag to specify the arguments to pass to the class constructor - # args: - # arg1: value1 - # arg2: value2 -# - url: https://www.bbc.com/news -# - url: https://www.bbc.com/sport -# - url: https://www.bbc.com/business -# - url: https://www.bbc.com/innovation -# - url: https://www.bbc.com/culture -# - url: https://www.bbc.com/travel -# - url: https://www.bbc.com/future-planet -# - url: https://edition.cnn.com/ -# - url: https://edition.cnn.com/politics -# - url: https://edition.cnn.com/entertainment -# - url: https://edition.cnn.com/style -# - url: https://www.bloomberg.com/economics -# - url: https://www.bloomberg.com/industries -# - url: https://www.bloomberg.com/technology -# - url: https://www.bloomberg.com/politics -# - url: https://www.bloomberg.com/opinion -# - url: https://www.wsj.com/ -# - url: https://www.wsj.com/world/africa?mod=nav_top_subsection -# - url: https://www.wsj.com/world/americas?mod=nav_top_subsection -# - url: https://www.wsj.com/world/asia?mod=nav_top_subsection -# - url: https://www.wsj.com/world/china?mod=nav_top_subsection -# - url: https://www.wsj.com/world/europe?mod=nav_top_subsection -# - url: https://www.wsj.com/world/middle-east?mod=nav_top_subsection -# - url: https://www.wsj.com/world/india?mod=nav_top_subsection -# - url: https://www.wsj.com/world/oceania?mod=nav_top_subsection -# - url: https://www.wsj.com/world/russia?mod=nav_top_subsection -# - url: https://www.wsj.com/world/uk?mod=nav_top_subsection -# - url: https://www.wsj.com/science?mod=nav_top_subsection -# - url: https://www.wsj.com/science/archaeology?mod=nav_top_subsection -# - url: https://www.wsj.com/science/biology?mod=nav_top_subsection -# - url: https://www.wsj.com/science/environment?mod=nav_top_subsection -# - url: https://www.wsj.com/science/physics?mod=nav_top_subsection -# - url: https://www.wsj.com/science/space-astronomy?mod=nav_top_subsection -# - url: https://www.wsj.com/economy/central-banking?mod=nav_top_subsection -# - url: https://www.wsj.com/economy/consumers?mod=nav_top_subsection -# - url: https://www.wsj.com/economy/housing?mod=nav_top_subsection -# - url: https://www.wsj.com/economy/jobs?mod=nav_top_subsection -# - url: https://www.wsj.com/economy/trade?mod=nav_top_subsection -# - url: https://www.wsj.com/economy/global -# - url: https://www.wsj.com/tech/ai?mod=nav_top_subsection -# - url: https://www.wsj.com/tech/biotech -# - url: https://www.wsj.com/tech/cybersecurity?mod=nav_top_subsection -# - url: https://www.wsj.com/tech/personal-tech?mod=nav_top_subsection -# - url: https://www.reuters.com/ -# - url: https://www.reuters.com/business/aerospace-defense/ -# - url: https://www.reuters.com/business/autos-transportation/ -# - url: https://www.reuters.com/business/davos/ -# - url: https://www.reuters.com/business/energy/ -# - url: https://www.reuters.com/business/environment/ -# - url: https://www.reuters.com/business/finance/ -# - url: https://www.reuters.com/business/healthcare-pharmaceuticals/ -# - url: https://www.reuters.com/business/media-telecom/ -# - url: https://www.reuters.com/business/retail-consumer/ -# - url: https://www.reuters.com/business/future-of-health/ -# - url: https://www.reuters.com/business/future-of-money/ -# - url: https://www.reuters.com/business/take-five/ -# - url: https://www.reuters.com/business/world-at-work/ -# - url: https://www.reuters.com/breakingviews/ -# - url: https://www.reuters.com/technology/ -# - url: https://www.reuters.com/technology/cybersecurity/ 
-# - url: https://www.reuters.com/technology/space/ -# - url: https://www.reuters.com/technology/disrupted/ -# - url: https://www.reuters.com/technology/reuters-momentum/ -# - url: https://www.reuters.com/investigations/ -# - url: https://a16z.com/news-content/#latest -# - url: https://news.ycombinator.com/ -# - url: https://www.reddit.com/?rdt=48006 -# - url: https://news.crunchbase.com/ -# - url: https://www.cctv.com/ -# - url: https://sports.cctv.com/
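
Since this patch removes the live_bench data-generation pipeline wholesale, here is a minimal sketch of how the deleted pieces were wired together, reconstructed only from the interfaces visible in the hunks above (load_driver, load_websites, get_shoter, get_generator, get_score_getter, and the registry names "gpt4v", "claude", "single_screen"). It is a sketch under those assumptions, not a supported entry point, and it presumes OPENAI_API_KEY / ANTHROPIC_API_KEY are set and the prompt and example assets next to qa_generator.py are present.

```python
# Hypothetical end-to-end wiring of the modules deleted in this patch,
# reconstructed from the removed sources; treat as a sketch, not a CLI.
from live_bench.driver.load_driver import load_driver
from live_bench.websites.load_website import load_websites
from live_bench.screen_shoter.screen_shoter import get_shoter
from live_bench.data_generator.qa_generator import get_generator
from live_bench.data_generator.score_getter import get_score_getter

driver = load_driver(window_size=(1024, 1024), headless=True)  # undetected_chromedriver by default
shoter = get_shoter("single_screen")   # or "rolling_screen" / "human"
generator = get_generator("gpt4v")     # requires OPENAI_API_KEY
scorer = get_score_getter("claude")    # requires ANTHROPIC_API_KEY

try:
    for website in load_websites(num_sample=1):            # sampled from website_list.yaml
        screen = shoter(driver, website)                    # ScreenImage holding PIL screenshots
        response = generator(screen)                        # Response with JSON content on success
        for qa in generator.format_response(response):      # List[QAData]
            # _format_prompt iterates over PIL images, so pass the image list here
            score = scorer(qa.question, qa.answer, screen.images)
            print(qa.question, qa.answer, score.score, score.reason)
finally:
    driver.quit()
```

The registry decorators (register_generator, register_shoter, register_score_getter) are what let callers select a back end by name, so the same sketch applies to the Gemini and Claude generators by changing the name passed to get_generator.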