
[BugFix]add int8 cache dtype when using attention quantization #144

Triggered via pull request February 21, 2025 02:20
Status: Success
Total duration: 3m 6s

Workflow: mypy.yaml
on: pull_request
Matrix: mypy
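
The run page does not show the contents of mypy.yaml itself, only that it runs on the pull_request event with a matrix-based mypy job. Below is a minimal sketch of what such a workflow might look like; the matrix axis, Python versions, and install/check steps are assumptions, not the repository's actual configuration.

```yaml
# Sketch of a matrix-based mypy workflow triggered on pull_request.
# The real mypy.yaml is not shown on this run page; job details are assumed.
name: mypy

on: pull_request

jobs:
  mypy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Hypothetical matrix axis; the actual workflow may vary other dimensions.
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install mypy
        run: pip install mypy
      - name: Run mypy
        run: mypy .
```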