Commit
Adapt to latest changes in llm microservice family
Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Showing 12 changed files with 222 additions and 97 deletions.
@@ -21,3 +21,5 @@
 .idea/
 *.tmproj
 .vscode/
+# CI values
+ci*-values.yaml
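The two added entries appear to keep CI-only values files out of the packaged chart. A quick way to confirm that effect is sketched below; it assumes the hunk above is the `.helmignore` of the `llm-uservice` chart and that the commands run from the chart directory (both are assumptions, not shown in this diff):

```console
cd GenAIInfra/helm-charts/common/llm-uservice
helm dependency update
# package the chart and list the archive contents; no ci*-values.yaml should appear
helm package .
tar -tzf llm-uservice-*.tgz | grep -i values
```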
@@ -1,55 +1,90 @@
 # llm-uservice

-Helm chart for deploying LLM microservice.
+Helm chart for deploying OPEA LLM microservices.

-llm-uservice depends on TGI, you should set TGI_LLM_ENDPOINT as tgi endpoint.
+## Installing the chart

-## (Option1): Installing the chart separately
+`llm-uservice` depends on one of the following inference backend services:

-First, you need to install the tgi chart, please refer to the [tgi](../tgi) chart for more information.
+- TGI: please refer to the [tgi](../tgi) chart for more information

-After you've deployted the tgi chart successfully, please run `kubectl get svc` to get the tgi service endpoint, i.e. `http://tgi`.
+- vLLM: please refer to the [vllm](../vllm) chart for more information

-To install the chart, run the following:
+First, you need to install one of the dependent charts, i.e. the `tgi` or `vllm` helm chart.
-```console
-cd GenAIInfra/helm-charts/common/llm-uservice
-export HFTOKEN="insert-your-huggingface-token-here"
-export TGI_LLM_ENDPOINT="http://tgi"
-helm dependency update
-helm install llm-uservice . --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --set TGI_LLM_ENDPOINT=${TGI_LLM_ENDPOINT} --wait
-```
+After you've deployed the dependent chart successfully, please run `kubectl get svc` to get the backend inference service endpoint, e.g. `http://tgi`, `http://vllm`.

-## (Option2): Installing the chart with dependencies automatically
+To install the `llm-uservice` chart, run the following:

 ```console
 cd GenAIInfra/helm-charts/common/llm-uservice
-export HFTOKEN="insert-your-huggingface-token-here"
 helm dependency update
-helm install llm-uservice . --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --set tgi.enabled=true --wait
+export HFTOKEN="insert-your-huggingface-token-here"
+# set backend inference service endpoint URL
+# for tgi
+export LLM_ENDPOINT="http://tgi"
+# for vllm
+# export LLM_ENDPOINT="http://vllm"
+
+# set the same model used by the backend inference service
+export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
+
+# install llm-textgen with TGI backend
+helm install llm-uservice . --set TEXTGEN_BACKEND="TGI" --set LLM_ENDPOINT=${LLM_ENDPOINT} --set LLM_MODEL_ID=${LLM_MODEL_ID} --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
+
+# install llm-textgen with vLLM backend
+# helm install llm-uservice . --set TEXTGEN_BACKEND="vLLM" --set LLM_ENDPOINT=${LLM_ENDPOINT} --set LLM_MODEL_ID=${LLM_MODEL_ID} --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
+
+# install llm-docsum with TGI backend
+# helm install llm-uservice . --set image.repository="opea/llm-docsum" --set DOCSUM_BACKEND="TGI" --set LLM_ENDPOINT=${LLM_ENDPOINT} --set LLM_MODEL_ID=${LLM_MODEL_ID} --set MAX_INPUT_TOKENS=2048 --set MAX_TOTAL_TOKENS=4096 --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
+
+# install llm-docsum with vLLM backend
+# helm install llm-uservice . --set image.repository="opea/llm-docsum" --set DOCSUM_BACKEND="vLLM" --set LLM_ENDPOINT=${LLM_ENDPOINT} --set LLM_MODEL_ID=${LLM_MODEL_ID} --set MAX_INPUT_TOKENS=2048 --set MAX_TOTAL_TOKENS=4096 --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
+
+# install llm-faqgen with TGI backend
+# helm install llm-uservice . --set image.repository="opea/llm-faqgen" --set FAQGEN_BACKEND="TGI" --set LLM_ENDPOINT=${LLM_ENDPOINT} --set LLM_MODEL_ID=${LLM_MODEL_ID} --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
+
+# install llm-faqgen with vLLM backend
+# helm install llm-uservice . --set image.repository="opea/llm-faqgen" --set FAQGEN_BACKEND="vLLM" --set LLM_ENDPOINT=${LLM_ENDPOINT} --set LLM_MODEL_ID=${LLM_MODEL_ID} --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
 ```
 ## Verify

 To verify the installation, run the command `kubectl get pod` to make sure all pods are running.

-Then run the command `kubectl port-forward svc/llm-uservice 9000:9000` to expose the llm-uservice service for access.
+Then run the command `kubectl port-forward svc/llm-uservice 9000:9000` to expose the service for access.

 Open another terminal and run the following command to verify the service is working:

 ```console
+# for llm-textgen service
 curl http://localhost:9000/v1/chat/completions \
-  -X POST \
-  -d '{"query":"What is Deep Learning?","max_tokens":17,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,"repetition_penalty":1.03,"streaming":true}' \
-  -H 'Content-Type: application/json'
+  -X POST \
+  -d '{"model": "${LLM_MODEL_ID}", "messages": "What is Deep Learning?", "max_tokens":17}' \
+  -H 'Content-Type: application/json'
+
+# for llm-docsum service
+curl http://localhost:9000/v1/docsum \
+  -X POST \
+  -d '{"query":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.", "max_tokens":32, "language":"en"}' \
+  -H 'Content-Type: application/json'
+
+# for llm-faqgen service
+curl http://localhost:9000/v1/faqgen \
+  -X POST \
+  -d '{"query":"Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for the most popular models, including FlagEmbedding, Ember, GTE and E5.","max_tokens": 128}' \
+  -H 'Content-Type: application/json'
 ```
 ## Values

-| Key                             | Type   | Default          | Description                     |
-| ------------------------------- | ------ | ---------------- | ------------------------------- |
-| global.HUGGINGFACEHUB_API_TOKEN | string | `""`             | Your own Hugging Face API token |
-| image.repository                | string | `"opea/llm-tgi"` |                                 |
-| service.port                    | string | `"9000"`         |                                 |
-| TGI_LLM_ENDPOINT                | string | `""`             | LLM endpoint                    |
-| global.monitoring               | bool   | `false`          | Service usage metrics           |
+| Key                             | Type   | Default                       | Description                                                                      |
+| ------------------------------- | ------ | ----------------------------- | -------------------------------------------------------------------------------- |
+| global.HUGGINGFACEHUB_API_TOKEN | string | `""`                          | Your own Hugging Face API token                                                  |
+| image.repository                | string | `"opea/llm-textgen"`          | one of "opea/llm-textgen", "opea/llm-docsum", "opea/llm-faqgen"                  |
+| LLM_ENDPOINT                    | string | `""`                          | backend inference service endpoint                                               |
+| LLM_MODEL_ID                    | string | `"Intel/neural-chat-7b-v3-3"` | model used by the inference backend                                              |
+| TEXTGEN_BACKEND                 | string | `"tgi"`                       | backend inference engine, only valid for llm-textgen image, one of "TGI", "vLLM" |
+| DOCSUM_BACKEND                  | string | `"tgi"`                       | backend inference engine, only valid for llm-docsum image, one of "TGI", "vLLM"  |
+| FAQGEN_BACKEND                  | string | `"tgi"`                       | backend inference engine, only valid for llm-faqgen image, one of "TGI", "vLLM"  |
+| global.monitoring               | bool   | `false`                       | Service usage metrics                                                            |
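For reference, the `--set` flags used in the install examples above can also be collected into a values file. Below is a minimal sketch using only keys that appear in this README; the file name `my-docsum-values.yaml` is hypothetical and not part of the chart:

```yaml
# my-docsum-values.yaml (hypothetical file name): llm-docsum against an already-deployed TGI service
image:
  repository: opea/llm-docsum
DOCSUM_BACKEND: "TGI"
LLM_ENDPOINT: "http://tgi"
LLM_MODEL_ID: "Intel/neural-chat-7b-v3-3"
MAX_INPUT_TOKENS: 2048
MAX_TOTAL_TOKENS: 4096
global:
  HUGGINGFACEHUB_API_TOKEN: "insert-your-huggingface-token-here"
```

It would then be installed with `helm install llm-uservice . -f my-docsum-values.yaml --wait` instead of a long `--set` chain.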
helm-charts/common/llm-uservice/ci-vllm-docsum-gaudi-values.yaml: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
+image:
+  repository: opea/llm-docsum
+  tag: "latest"
+
+DOCSUM_BACKEND: "vLLM"
+LLM_MODEL_ID: "Intel/neural-chat-7b-v3-3"
+MAX_INPUT_TOKENS: 2048
+MAX_TOTAL_TOKENS: 4096
+
+
+tgi:
+  enabled: false
+vllm:
+  enabled: true
+  image:
+    repository: opea/vllm-gaudi
+    tag: "latest"
+  LLM_MODEL_ID: Intel/neural-chat-7b-v3-3
+  OMPI_MCA_btl_vader_single_copy_mechanism: none
+  extraCmdArgs: ["--tensor-parallel-size","1","--block-size","128","--max-num-seqs","256","--max-seq_len-to-capture","2048"]
+  resources:
+    limits:
+      habana.ai/gaudi: 1
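How this CI values file is consumed is not part of this commit; the usual pattern (an assumption, not taken from the diff) would be to pass it to `helm` directly when exercising the docsum-on-vLLM-Gaudi path:

```console
cd GenAIInfra/helm-charts/common/llm-uservice
helm dependency update
# the file name matches the one added in this commit; HFTOKEN is set as in the README above
helm install llm-uservice . -f ci-vllm-docsum-gaudi-values.yaml --set global.HUGGINGFACEHUB_API_TOKEN=${HFTOKEN} --wait
```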