README.md (+1)
@@ -16,6 +16,7 @@ VILA is a visual language model (VLM) pretrained with interleaved image-text dat
## 💡 News
+ - [2024/05] We move our repo to NVlabs (https://github.com/NVlabs/VILA). All future developments will be updated there!
- [2024/05] We release VILA-1.5, which offers **video understanding capability**. VILA-1.5 comes with four model sizes: 3B/8B/13B/40B.
- [2024/05] We release [AWQ](https://arxiv.org/pdf/2306.00978.pdf)-quantized 4bit VILA-1.5 models. VILA-1.5 is efficiently deployable on diverse NVIDIA GPUs (A100, 4090, 4070 Laptop, Orin, Orin Nano) via the [TinyChat](https://github.com/mit-han-lab/llm-awq/tree/main/tinychat) and [TensorRT-LLM](demo_trt_llm) backends.