VL model fails to load #335
Replies: 5 comments 4 replies
-
pip install transformers==4.34.0 fixes this problem, but I ran into another one: after changing the llava files as described in llava's #966, timm reports a load error: unknown model clip-vit-H-14-laion2B-s32B-b79K-yi-vl-6B-44B
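To confirm the pinned version is actually the one being imported (a stale environment is a common reason the pin "doesn't work"), a small pure-Python check can help. This is a sketch; in practice you would pass `transformers.__version__` to it. It only handles plain release strings like `4.34.0`, not dev/pre-release suffixes.

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted release string like '4.34.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

REQUIRED = "4.34.0"  # the pin suggested in this thread

def matches_pin(installed: str, required: str = REQUIRED) -> bool:
    """Return True when the installed version equals the pinned one."""
    return parse_version(installed) == parse_version(required)

# In a real environment: matches_pin(transformers.__version__)
print(matches_pin("4.34.0"))  # True
print(matches_pin("4.35.2"))  # False
```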
-
Use the github.com/01-ai/Yi/tree/liuyudong/yi_vl branch; VL/single_inference.py there can be run directly. Remember to change mm_vision_tower.
-
I haven't tried it yet, but I see quantization code in there.
-
RuntimeError: FlashAttention only supports Ampere GPUs or newer.
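This error means the GPU's compute capability is below 8.0 (pre-Ampere, e.g. a Turing sm_75 card such as a 2080 Ti or T4). A quick way to check is the `(major, minor)` tuple from `torch.cuda.get_device_capability()`; the helper below encodes the rule without requiring a GPU:

```python
def supports_flash_attention(capability: tuple) -> bool:
    """FlashAttention requires Ampere (compute capability 8.0) or newer.
    `capability` is the (major, minor) tuple returned by
    torch.cuda.get_device_capability()."""
    major, _minor = capability
    return major >= 8

# Turing (7, 5) is too old; Ampere (8, 0) and Ada (8, 9) are supported.
print(supports_flash_attention((7, 5)))  # False
print(supports_flash_attention((8, 0)))  # True
```

On unsupported GPUs the workaround is to load the model without flash-attn; whether the Yi-VL loading code exposes a switch for that, or hardcodes FlashAttention and needs patching, I can't confirm from this thread.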
-
I want to download the model to a different path. How do I modify the code to do that?
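If the weights come through the Hugging Face hub, the download location is controlled by the `HF_HOME` environment variable (or the `cache_dir` argument of `from_pretrained`); if the script loads from a hardcoded path instead, that string is what needs editing. A sketch of the environment-variable route (the directory is a placeholder):

```python
import os

# HF_HOME must be set before transformers / huggingface_hub are imported,
# because they resolve their cache paths at import time.
os.environ["HF_HOME"] = "/data/hf_cache"  # placeholder path

# Alternatively, per-call (not run here):
# model = AutoModelForCausalLM.from_pretrained(model_id, cache_dir="/data/hf_cache")
print(os.environ["HF_HOME"])
```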
-
ValueError: Non-consecutive added token '' found. Should have index 64000 but has index 0 in saved vocabulary.
The tokenizer is obviously wrong, but why?
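The error says the added tokens recorded in the saved tokenizer don't follow consecutively from the base vocabulary (a token expected at index 64000 was saved with index 0), which typically means the tokenizer files come from a mismatched checkpoint. A small sanity check for the `added_tokens.json` mapping; `base_vocab_size=64000` is assumed from the error message, and the token names below are hypothetical:

```python
def check_added_tokens(added_tokens: dict, base_vocab_size: int) -> list:
    """added_tokens maps token string -> saved index (the contents of
    added_tokens.json). Indices must run consecutively from
    base_vocab_size; return every violation as (token, got, expected)."""
    problems = []
    items = sorted(added_tokens.items(), key=lambda kv: kv[1])
    for expected, (token, got) in enumerate(items, start=base_vocab_size):
        if got != expected:
            problems.append((token, got, expected))
    return problems

# The failing shape from this thread: a token saved at index 0
# where index 64000 was expected (token name here is hypothetical).
print(check_added_tokens({"<special>": 0}, 64000))
print(check_added_tokens({"<a>": 64000, "<b>": 64001}, 64000))  # []
```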