Commit

Fix condition for checking accelerator.num_processes in Llava class
pufanyi committed Apr 17, 2024
1 parent c688a72 commit d1ba561
Showing 1 changed file with 1 addition and 1 deletion.
lmms_eval/models/llava.py (1 addition, 1 deletion)
@@ -88,7 +88,7 @@ def __init__(
         self.use_cache = use_cache
         self.truncate_context = truncate_context
         # assert self.batch_size_per_gpu == 1, "Llava currently does not support batched generation. See https://github.com/haotian-liu/LLaVA/issues/754. HF Llava also has this issue."
-        if accelerator.num_processes > 1 and device_map == "":
+        if accelerator.num_processes > 1 and device_map == "auto":
             assert accelerator.distributed_type in [DistributedType.FSDP, DistributedType.MULTI_GPU, DistributedType.DEEPSPEED], "Unsupported distributed type provided. Only DDP and FSDP are supported."
             # If you want to use DistributedType.DEEPSPEED, you have to run accelerate config before using the model
             # Also, you have to select zero stage 0 (equivalent to DDP) in order to make the prepare model works
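For context, the guarded branch decides whether the Llava class should set up per-process (DDP-style) evaluation under Hugging Face Accelerate. The snippet below is a minimal, hypothetical sketch of how such a guard behaves, not the repository's actual class: the `describe_setup` function and its `device_map` parameter are illustrative stand-ins for the constructor argument shown in the diff.

```python
# Minimal sketch (assumption: illustrative only, not lmms_eval's real code) of the
# condition this commit changes: multi-process runs must use a DDP-like distributed
# type, while a single process relies on device_map for weight placement.
from accelerate import Accelerator, DistributedType


def describe_setup(device_map: str = "auto") -> str:
    """Report which setup branch the guarded condition would select."""
    accelerator = Accelerator()
    if accelerator.num_processes > 1 and device_map == "auto":
        # Multi-process launch (e.g. via `accelerate launch`): only DDP-like
        # distributed types are accepted before each process wraps its own replica.
        assert accelerator.distributed_type in [
            DistributedType.FSDP,
            DistributedType.MULTI_GPU,
            DistributedType.DEEPSPEED,
        ], "Unsupported distributed type provided. Only DDP and FSDP are supported."
        return "multi-process: prepare one model replica per process"
    # Single process: leave weight placement to the device_map argument.
    return "single process: place weights according to device_map"


if __name__ == "__main__":
    print(describe_setup("auto"))
```

Launched with `accelerate launch` across several processes, the assertion path is taken; run directly in a single process, the `device_map` branch applies.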
