Actions: AlibabaPAI/llumnix

Workflow: offline_inference

484 workflow runs


| Run | Title | Trigger | Actor | Started | Duration | Branch |
|-----|-------|---------|-------|---------|----------|--------|
| #309 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 14:24 | 20m 50s | vllm_upgrade |
| #308 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 14:16 | 8m 17s | vllm_upgrade |
| #307 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 11:16 | 5m 6s | vllm_upgrade |
| #306 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 11:07 | 10m 20s | vllm_upgrade |
| #305 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 10:24 | 26m 33s | vllm_upgrade |
| #304 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 10:06 | 15m 10s | vllm_upgrade |
| #303 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 19, 2024 09:29 | 4m 48s | vllm_upgrade |
| #302 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 18, 2024 11:54 | 10m 31s | vllm_upgrade |
| #301 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 18, 2024 11:11 | 16m 13s | engine_type |
| #300 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 18, 2024 11:09 | 2m 39s | engine_type |
| #299 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 18, 2024 11:07 | 1m 21s | engine_type |
| #298 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 18, 2024 11:04 | 3m 42s | engine_type |
| #297 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 18, 2024 10:58 | 5m 56s | engine_type |
| #296 | [BladeLLM] Support dispatch feature for BladeLLM (#86) | Commit 156ce24 pushed | KuilongCui | December 18, 2024 10:46 | 6m 32s | main |
| #295 | [BladeLLM] Support dispatch feature for BladeLLM | Pull request #86 synchronize | KuilongCui | December 18, 2024 09:58 | 12m 31s | dispatch_bladellm |
| #294 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 18, 2024 09:43 | 20m 16s | vllm_upgrade |
| #293 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 18, 2024 08:39 | 14m 27s | vllm_upgrade |
| #292 | [Deployment] Support global launch in addition to local launch | Pull request #88 synchronize | s5u13b | December 18, 2024 07:22 | 1m 24s | centralized-deployment |
| #291 | [Deployment] Support global launch in addition to local launch | Pull request #88 opened | s5u13b | December 18, 2024 03:44 | 1m 21s | centralized-deployment |
| #290 | [WIP] Adapt to New Engine Backend BladeLLM | Pull request #58 synchronize | KuilongCui | December 18, 2024 02:47 | 54s | blade_support |
| #289 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 17, 2024 11:46 | 15m 35s | engine_type |
| #288 | [BladeLLM] Support dispatch feature for BladeLLM | Pull request #86 synchronize | KuilongCui | December 17, 2024 11:43 | 4m 48s | dispatch_bladellm |
| #287 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 synchronize | KuilongCui | December 17, 2024 11:17 | 12m 1s | engine_type |
| #286 | [Core] Increase the instance type when scaling up llumlet | Pull request #87 opened | KuilongCui | December 17, 2024 10:57 | 2m 53s | engine_type |
| #285 | [Core] Upgrade vllm to v0.6.3.post1 | Pull request #69 synchronize | ZeldaHuang | December 17, 2024 09:28 | 6m 40s | vllm_upgrade |