Explicitly check for None when using prefill attn_mask (#983)
When you attempt to implicitly null-check the attention_mask, you hit a
torch error:

```bash
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
```

Adding an explicit check for `None` fixes the error and unblocks the
`--use-attention-mask` export path for prefill.
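For context, here is a minimal sketch (not part of this commit; the `apply_mask` helper is hypothetical) of why the implicit check fails: truthiness on a tensor calls `Tensor.__bool__`, which needs concrete sizes, whereas an identity check against `None` never inspects the tensor.

```python
import torch


def apply_mask(scores: torch.Tensor, attention_mask=None) -> torch.Tensor:
    # `if attention_mask:` would invoke Tensor.__bool__, which inspects the
    # tensor's size; under export with dynamic (symbolic) shapes that raises
    # "Cannot call numel() on tensor with symbolic sizes/strides".
    # Comparing against None only checks identity and never touches the shape.
    if attention_mask is not None:
        scores = scores + attention_mask
    return scores
```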
stbaione authored Feb 19, 2025
1 parent b299af3 commit 8b41015
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion sharktank/sharktank/examples/export_paged_llm_v1.py
@@ -229,7 +229,7 @@ def _(model, tokens, seq_lens, seq_block_ids, cs):
     shard_count = llama_config.tensor_parallelism_size
 
     tokens = ops.replicate(tokens, count=shard_count)
-    if attention_mask:
+    if attention_mask is not None:
         attention_mask = ops.replicate(attention_mask, count=shard_count)
     seq_block_ids = ops.replicate(seq_block_ids, count=shard_count)
     cache_tensors = repack_cache(cs, cache_shard_dim)
