parameterize enable_prefix_caching #2900

Open · wants to merge 2 commits into main

Changes from 1 commit
12 changes: 12 additions & 0 deletions trl/trainer/grpo_config.py
@@ -79,6 +79,10 @@ class GRPOConfig(TrainingArguments):
If set, the `max_model_len` to use for vLLM. This could be useful when running with reduced
`vllm_gpu_memory_utilization`, leading to a reduced KV cache size. If not set, vLLM will use the model
context size, which might be much larger than the KV cache, leading to inefficiencies.
vllm_enable_prefix_caching (`bool`, *optional*, defaults to `True`):
Whether to enable prefix caching in vLLM. If set to `True` (default), ensure that the GPU being used
supports this feature, since enabling prefix caching on GPUs older than the Ampere architecture (such as
the V100) may cause errors, see: https://github.com/huggingface/trl/issues/2798.
vllm_guided_decoding_regex (`str` or `None`, *optional*, defaults to `None`):
Regex for vLLM guided decoding. If `None` (default), guided decoding is disabled.

@@ -204,6 +208,14 @@ class GRPOConfig(TrainingArguments):
"context size, which might be much larger than the KV cache, leading to inefficiencies."
},
)
vllm_enable_prefix_caching: Optional[bool] = field(
default=True,
metadata={
"help": "Whether to enable prefix caching in vLLM. If set to `True` (default), ensure that the GPU used "
"support this feature, because enabling prefix cache on GPUs older than Ampere architecture (like the V100) "
"may cause errors, see: https://github.com/huggingface/trl/issues/2798."
},
)
vllm_guided_decoding_regex: Optional[str] = field(
default=None,
metadata={"help": "Regex for vLLM guided decoding. If `None` (default), guided decoding is disabled."},
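For context, a minimal sketch of how this new option could be used, e.g. to turn prefix caching off when generating on a pre-Ampere GPU such as a V100; the output directory and other values are hypothetical, and `use_vllm` is the existing flag that enables vLLM generation in `GRPOConfig`:

```python
from trl import GRPOConfig

# Hypothetical configuration: disable vLLM prefix caching on a pre-Ampere GPU
# (e.g. a V100), where automatic prefix caching may cause errors (see issue #2798).
training_args = GRPOConfig(
    output_dir="grpo-output",          # hypothetical output directory
    use_vllm=True,                     # offload generation to vLLM
    vllm_enable_prefix_caching=False,  # new flag added by this PR; defaults to True
)
```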
2 changes: 1 addition & 1 deletion trl/trainer/grpo_trainer.py
@@ -419,7 +419,7 @@ def data_collator(features):  # No data collation is needed in GRPO
# Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can
# directly reuse the KV cache if it shares the same prefix with one of the existing queries.
# This is particularly useful here because we generate completions from the same prompts.
- enable_prefix_caching=True,
+ enable_prefix_caching=self.args.vllm_enable_prefix_caching,
max_model_len=self.args.vllm_max_model_len,
)

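For reference, a standalone sketch of what the trainer's vLLM instantiation amounts to once the flag is forwarded; the model name and values below are hypothetical, while `enable_prefix_caching` and `max_model_len` are existing `vllm.LLM` constructor arguments:

```python
from vllm import LLM

# Hypothetical stand-in for the trainer's internal call: the prefix-caching
# setting now comes from the user config instead of being hard-coded to True.
llm = LLM(
    model="Qwen/Qwen2-0.5B-Instruct",  # hypothetical model name
    enable_prefix_caching=False,       # value of args.vllm_enable_prefix_caching
    max_model_len=2048,                # value of args.vllm_max_model_len
)
```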