[Doc] Add install doc (#49)
Add official install guide.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
wangxiyuan authored Feb 14, 2025
1 parent 46977f9 commit e264987
Showing 3 changed files with 174 additions and 28 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -39,7 +39,7 @@ By using vLLM Ascend plugin, popular open-source models, including Transformer-l
* PyTorch >= 2.4.0, torch-npu >= 2.4.0
* vLLM (the same version as vllm-ascend)

Find out how to set up your environment step by step [here](docs/installation.md).
Find out how to set up your environment step by step [here](docs/source/installation.md).

## Getting Started

@@ -68,7 +68,7 @@ Run the following command to start the vLLM server with the [Qwen/Qwen2.5-0.5B-I
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```
**Please refer to [official docs](./docs/index.md) for more details.**
**Please refer to [official docs](https://vllm-ascend.readthedocs.io/en/latest/) for more details.**

## Contributing
See [CONTRIBUTING](docs/source/developer_guide/contributing.md) for more details; it is a step-by-step guide to help you set up the development environment, build, and test.
4 changes: 2 additions & 2 deletions README.zh.md
@@ -39,7 +39,7 @@ The vLLM Ascend plugin (`vllm-ascend`) enables vLLM to run seamlessly on Ascend NPU
* PyTorch >= 2.4.0, torch-npu >= 2.4.0
* vLLM (same version as vllm-ascend)

See [here](docs/installation.md) to learn how to prepare your environment step by step.
See [here](docs/source/installation.md) to learn how to prepare your environment step by step.

## Getting Started

@@ -69,7 +69,7 @@ vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```

**Please refer to the [official docs](./docs/index.md) for more details.**
**Please refer to the [official docs](https://vllm-ascend.readthedocs.io/en/latest/) for more details.**

## Contributing
See [CONTRIBUTING](docs/source/developer_guide/contributing.zh.md) for more details; it is a step-by-step guide to help you set up the development environment, build, and test.
194 changes: 170 additions & 24 deletions docs/source/installation.md
@@ -1,25 +1,65 @@
# Installation

## Dependencies
| Requirement | Supported version | Recommended version | Note |
| ------------ | ------- | ----------- | ----------- |
| Python | >= 3.9 | [3.10](https://www.python.org/downloads/) | Required for vllm |
| CANN | >= 8.0.RC2 | [8.0.RC3](https://www.hiascend.com/developer/download/community/result?module=cann&cann=8.0.0.beta1) | Required for vllm-ascend and torch-npu |
| torch-npu | >= 2.4.0 | [2.5.1rc1](https://gitee.com/ascend/pytorch/releases/tag/v6.0.0.alpha001-pytorch2.5.1) | Required for vllm-ascend |
| torch | >= 2.4.0 | [2.5.1](https://github.com/pytorch/pytorch/releases/tag/v2.5.1) | Required for torch-npu and vllm |
This document describes how to install vllm-ascend manually.

## Prepare Ascend NPU environment
## Requirements

Below is a quick note to install recommended version software:
- OS: Linux
- Python: 3.10 or higher
- Hardware with an Ascend NPU, typically the Atlas 800 A2 series.
- Software:

### Containerized installation
| Software | Supported version | Note |
| ------------ | ----------------- | ---- |
| CANN | >= 8.0.0.beta1 | Required for vllm-ascend and torch-npu |
| torch-npu | >= 2.5.1rc1 | Required for vllm-ascend |
| torch | >= 2.5.1 | Required for torch-npu and vllm |
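
If you already have these packages installed, a quick way to check the versions against the table is a one-liner like this sketch (it assumes `torch` and `torch_npu` are importable in the current environment):

```bash
# Print the installed torch and torch-npu versions for comparison with the table above.
python3 -c "import torch, torch_npu; print(torch.__version__, torch_npu.__version__)"
```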

You can use the [container image](https://hub.docker.com/r/ascendai/cann) directly with one line command:
## Configure a new environment

Before installing the package, you need to make sure the firmware/driver and CANN are installed correctly.

### Install firmwares and drivers

To verify that the Ascend NPU firmware and driver were correctly installed, run `npu-smi info`.

> Tip: Refer to the [Ascend Environment Setup Guide](https://ascend.github.io/docs/sources/ascend/quick_install.html) for more details.
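
For example (a sketch of the check; the exact table layout varies by driver version):

```bash
# A device table listing each NPU indicates the firmware and driver are working.
npu-smi info
# The installed driver version is also recorded in this file (the same file is mounted into the containers below).
cat /usr/local/Ascend/driver/version.info
```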
### Install CANN (optional)

Installing CANN is not necessary if you are using a CANN container image; in that case, you can skip this step. If you want to install vllm-ascend in a bare environment by hand, you need to install CANN first.

```bash
docker run \
# Create a virtual environment
python -m venv vllm-ascend-env
source vllm-ascend-env/bin/activate

# Install required python packages.
pip3 install -i https://pypi.tuna.tsinghua.edu.cn/simple attrs numpy==1.24.0 decorator sympy cffi pyyaml pathlib2 psutil protobuf scipy requests absl-py wheel typing_extensions

# Download and install the CANN package.
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-toolkit_8.0.0_linux-aarch64.run
sh Ascend-cann-toolkit_8.0.0_linux-aarch64.run --full
wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/CANN/CANN%208.0.0/Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run
sh Ascend-cann-kernels-910b_8.0.0_linux-aarch64.run --full
```
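
After the toolkit and kernels are installed, the CANN environment variables need to be loaded in each new shell; a minimal sketch assuming the default installation path:

```bash
# Load CANN environment variables (default install location; adjust if you installed elsewhere).
source /usr/local/Ascend/ascend-toolkit/set_env.sh
```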

Once that is done, you can read either the **Set up using Python** or the **Set up using Docker** section to install and use vllm-ascend.

## Set up using Python

> Note: If you are installing vllm-ascend on an aarch64 machine, the `-f https://download.pytorch.org/whl/torch/` parameter in this section can be omitted. It is only used to find the torch package on x86 machines.
Please make sure that CANN is installed. This can be done by following the **Configure a new environment** steps above, or by using a CANN container directly:

```bash
# Setup a CANN container using docker
# Update DEVICE according to your device (/dev/davinci[0-7])
DEVICE=/dev/davinci7

docker run --rm \
--name vllm-ascend-env \
--device /dev/davinci1 \
--device $DEVICE \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
@@ -28,28 +68,134 @@ docker run \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-it quay.io/ascend/cann:8.0.rc3.beta1-910b-ubuntu22.04-py3.10 bash
-it quay.io/ascend/cann:8.0.0.beta1-910b-ubuntu22.04-py3.10 bash
```

You do not need to install `torch` and `torch_npu` manually; they will be installed automatically as `vllm-ascend` dependencies.
Then you can install vllm-ascend from **pre-built wheel** or **source code**.

### Install from Pre-built wheels (Not supported yet)

### Manual installation
1. Install vllm

Or follow the instructions provided in the [Ascend Installation Guide](https://ascend.github.io/docs/sources/ascend/quick_install.html) to set up the environment.
Since the vllm package on PyPI does not support the CPU platform, we need to install vllm from source code.

## Building
```bash
git clone --depth 1 --branch v0.7.1 https://github.com/vllm-project/vllm
cd vllm
VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
```

### Build Python package from source
2. Install vllm-ascend

```bash
pip install vllm-ascend -f https://download.pytorch.org/whl/torch/
```
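
As the note above says, `torch` and `torch_npu` do not need to be installed manually; a quick way to confirm that pip pulled them in as dependencies (a sketch; the output format varies by pip version):

```bash
# List the installed vllm and torch packages with their versions.
pip list | grep -E "vllm|torch"
```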

### Install from source code

1. Install vllm

```bash
git clone https://github.com/vllm-project/vllm
cd vllm
VLLM_TARGET_DEVICE=empty pip install . -f https://download.pytorch.org/whl/torch/
```

2. Install vllm-ascend

```bash
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e . -f https://download.pytorch.org/whl/torch/
```
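
Whichever method you used, a quick sanity check is to import both packages (a minimal sketch; it assumes the installation completed without errors):

```bash
# Both imports should succeed; the first also prints the installed vllm version.
python3 -c "import vllm; print(vllm.__version__)"
python3 -c "import vllm_ascend"
```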

## Set up using Docker

> Tip: CANN, torch, torch_npu, vllm, and vllm_ascend are already pre-installed in the Docker image.

### Pre-built images (Not supported yet)

Just pull the image and run it with bash.

```bash
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
pip install -e .
docker pull quay.io/ascend/vllm-ascend:latest
# Update DEVICE according to your device (/dev/davinci[0-7])
DEVICE=/dev/davinci7
docker run --rm \
--name vllm-ascend-env \
--device $DEVICE \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-it quay.io/ascend/vllm-ascend:0.7.1rc1 bash
```
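
Everything is pre-installed in this image, so once inside the container you can start the server right away, as in the README:

```bash
# Inside the container: start the OpenAI-compatible server and list the served models.
vllm serve Qwen/Qwen2.5-0.5B-Instruct
curl http://localhost:8000/v1/models
```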

### Build container image from source
### Build image from source

If you want to build the Docker image from the main branch, you can do so with the following steps:

```bash
git clone https://github.com/vllm-project/vllm-ascend.git
cd vllm-ascend
docker build -t vllm-ascend-dev-image -f ./Dockerfile .
docker build -t vllm-ascend-dev-image:latest -f ./Dockerfile .
# Update DEVICE according to your device (/dev/davinci[0-7])
DEVICE=/dev/davinci7
docker run --rm \
--name vllm-ascend-env \
--device $DEVICE \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-it vllm-ascend-dev-image:latest bash
```

## Extra information

### Verify installation

Create and run a simple inference test. The `example.py` script can look like this:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(max_tokens=100, temperature=0.0)
# Create an LLM.
llm = LLM(model="facebook/opt-125m")
# Generate texts from the prompts.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
Then run:
```bash
# export VLLM_USE_MODELSCOPE=true to speed up the download if Hugging Face is not reachable.
python example.py
```
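
If the script prints a generated continuation for each prompt, the installation is working. To apply the `VLLM_USE_MODELSCOPE` switch mentioned in the comment above:

```bash
# Download the model from ModelScope instead of Hugging Face, then run the test.
export VLLM_USE_MODELSCOPE=true
python example.py
```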
