Pull requests: pytorch/xla

Add block size table for ragged_paged_attention
#8942 opened Apr 5, 2025 by yaochengji

add rc4 trigger
#8941 opened Apr 4, 2025 by zpcore

Update test_export_fx_passes.py
#8933 opened Apr 3, 2025 by avikchaudhuri

test on debian12
#8928 opened Apr 2, 2025 by zpcore (Draft)

@assume_pure
#8923 opened Apr 2, 2025 by tengyifei (Draft)

Make GPU CUDA plugin require JAX
#8919 opened Apr 1, 2025 by tengyifei (Draft)

Adapt Splash Attention from TorchPrime
#8911 opened Mar 31, 2025 by zpcore (Draft)

[DRAFT/WIP] Add top-p masking
#8871 opened Mar 21, 2025 by hyeygit (Draft)

[1/N] Initial implementation of local SPMD support
#8810 opened Mar 9, 2025 by lsy323

Showcase jax.grad in torch_xla
#8800 opened Mar 5, 2025 by zpcore

Repro ragged paged attn kernel
#8752 opened Feb 26, 2025 by vanbasten23 (Draft)

Replace setup.py with pyproject.toml
#8744 opened Feb 26, 2025 by ManfeiBai

Follow up on ragged kernel wrapper
#8737 opened Feb 24, 2025 by vanbasten23 (Draft)

Document how to debug the dispatcher
#8712 opened Feb 15, 2025 by tengyifei

Add instruction for exporting inlined constant
#8707 opened Feb 13, 2025 by qihqi

Transition to Hermetic CUDA
#8665 opened Feb 3, 2025 by ysiraichi (Draft)

add aarch64 platform build support
#8663 opened Feb 2, 2025 by snadampal

Lower cummin op
#8565 opened Jan 14, 2025 by zyy-martin