ChatQnA: Adapt to latest changes (#727)
* ChatQnA: Adapt to latest changes

  Adapt ChatQnA to the following recent changes:
  - make vLLM the default inference engine
  - adapt to the latest changes in the data-prep, retriever and guardrails components
  - sync with the Docker Compose files in GenAIExamples

  Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>

* Temporarily disable the UI test pod

  Workaround for issue opea-project/GenAIExamples#1441.

  Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>

---------

Signed-off-by: Lianhao Lu <lianhao.lu@intel.com>
Showing 14 changed files with 195 additions and 188 deletions.
@@ -0,0 +1,112 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# Override CPU resource request and probe timing values in specific subcharts
#
# RESOURCES
#
# A resource request matching actual resource usage (with enough slack)
# is important when the service is scaled up, so that the right number
# of pods gets scheduled onto the right nodes.
#
# Because resource usage depends on the devices, model, data type and
# SW versions used, and this top-level chart has overrides for them,
# resource requests need to be specified here too.
#
# To test a service without a resource request, use "resources: {}".
#
# PROBES
#
# Inferencing pod startup / warmup takes *much* longer on CPUs than
# with acceleration devices, and responses are also slower, especially
# when a node is running several instances of these services.
#
# Kubernetes restarting a pod before its startup finishes, or not
# sending it queries because slow readiness responses keep it out of
# the ready state, does NOT help in getting faster responses.
#
# => probe timings need to be increased when running on CPU.

vllm:
  enabled: false
tgi:
  enabled: true
  # TODO: add a Helm value also for the TGI data type option:
  # https://github.com/opea-project/GenAIExamples/issues/330
  LLM_MODEL_ID: meta-llama/Meta-Llama-3-8B-Instruct

  # Potentially suitable values for scaling CPU TGI 2.2 with Intel/neural-chat-7b-v3-3 @ 32-bit:
  #resources:
  #  limits:
  #    cpu: 8
  #    memory: 70Gi
  #  requests:
  #    cpu: 6
  #    memory: 65Gi

  livenessProbe:
    initialDelaySeconds: 8
    periodSeconds: 8
    failureThreshold: 24
    timeoutSeconds: 4
  readinessProbe:
    initialDelaySeconds: 16
    periodSeconds: 8
    timeoutSeconds: 4
  startupProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
    failureThreshold: 180
    timeoutSeconds: 2

teirerank:
  RERANK_MODEL_ID: "BAAI/bge-reranker-base"

  # Potentially suitable values for scaling CPU TEI v1.5 with the BAAI/bge-reranker-base model:
  resources:
    limits:
      cpu: 4
      memory: 30Gi
    requests:
      cpu: 2
      memory: 25Gi

  livenessProbe:
    initialDelaySeconds: 8
    periodSeconds: 8
    failureThreshold: 24
    timeoutSeconds: 4
  readinessProbe:
    initialDelaySeconds: 8
    periodSeconds: 8
    timeoutSeconds: 4
  startupProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 120

tei:
  EMBEDDING_MODEL_ID: "BAAI/bge-base-en-v1.5"

  # Potentially suitable values for scaling CPU TEI 1.5 with the BAAI/bge-base-en-v1.5 model:
  resources:
    limits:
      cpu: 4
      memory: 4Gi
    requests:
      cpu: 2
      memory: 3Gi

  livenessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 24
    timeoutSeconds: 2
  readinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 2
  startupProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 120
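As the header comments above note, these resource requests are tuned to the overridden models, and the file itself suggests clearing them for quick functional testing. A minimal sketch of such a test-only override (subchart names as in this file):

    # hypothetical test override: schedule pods without resource accounting
    tgi:
      resources: {}
    teirerank:
      resources: {}
    tei:
      resources: {}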
@@ -1,109 +1,5 @@
-# Copyright (C) 2024 Intel Corporation
+# Copyright (C) 2025 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
-# Override CPU resource request and probe timing values in specific subcharts
-#
-# RESOURCES
-#
-# A resource request matching actual resource usage (with enough slack)
-# is important when the service is scaled up, so that the right number
-# of pods gets scheduled onto the right nodes.
-#
-# Because resource usage depends on the devices, model, data type and
-# SW versions used, and this top-level chart has overrides for them,
-# resource requests need to be specified here too.
-#
-# To test a service without a resource request, use "resources: {}".
-#
-# PROBES
-#
-# Inferencing pod startup / warmup takes *much* longer on CPUs than
-# with acceleration devices, and responses are also slower, especially
-# when a node is running several instances of these services.
-#
-# Kubernetes restarting a pod before its startup finishes, or not
-# sending it queries because slow readiness responses keep it out of
-# the ready state, does NOT help in getting faster responses.
-#
-# => probe timings need to be increased when running on CPU.
-
-tgi:
-  # TODO: add a Helm value also for the TGI data type option:
-  # https://github.com/opea-project/GenAIExamples/issues/330
-  LLM_MODEL_ID: Intel/neural-chat-7b-v3-3
-
-  # Potentially suitable values for scaling CPU TGI 2.2 with Intel/neural-chat-7b-v3-3 @ 32-bit:
-  resources:
-    limits:
-      cpu: 8
-      memory: 70Gi
-    requests:
-      cpu: 6
-      memory: 65Gi
-
-  livenessProbe:
-    initialDelaySeconds: 8
-    periodSeconds: 8
-    failureThreshold: 24
-    timeoutSeconds: 4
-  readinessProbe:
-    initialDelaySeconds: 16
-    periodSeconds: 8
-    timeoutSeconds: 4
-  startupProbe:
-    initialDelaySeconds: 10
-    periodSeconds: 5
-    failureThreshold: 180
-    timeoutSeconds: 2
-
-teirerank:
-  RERANK_MODEL_ID: "BAAI/bge-reranker-base"
-
-  # Potentially suitable values for scaling CPU TEI v1.5 with the BAAI/bge-reranker-base model:
-  resources:
-    limits:
-      cpu: 4
-      memory: 30Gi
-    requests:
-      cpu: 2
-      memory: 25Gi
-
-  livenessProbe:
-    initialDelaySeconds: 8
-    periodSeconds: 8
-    failureThreshold: 24
-    timeoutSeconds: 4
-  readinessProbe:
-    initialDelaySeconds: 8
-    periodSeconds: 8
-    timeoutSeconds: 4
-  startupProbe:
-    initialDelaySeconds: 5
-    periodSeconds: 5
-    failureThreshold: 120
-
-tei:
-  EMBEDDING_MODEL_ID: "BAAI/bge-base-en-v1.5"
-
-  # Potentially suitable values for scaling CPU TEI 1.5 with the BAAI/bge-base-en-v1.5 model:
-  resources:
-    limits:
-      cpu: 4
-      memory: 4Gi
-    requests:
-      cpu: 2
-      memory: 3Gi
-
-  livenessProbe:
-    initialDelaySeconds: 5
-    periodSeconds: 5
-    failureThreshold: 24
-    timeoutSeconds: 2
-  readinessProbe:
-    initialDelaySeconds: 5
-    periodSeconds: 5
-    timeoutSeconds: 2
-  startupProbe:
-    initialDelaySeconds: 5
-    periodSeconds: 5
-    failureThreshold: 120
+image:
+  repository: opea/chatqna
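The only content this values file keeps is the top-level image override. Under the usual Helm image convention such a block can also pin a version; a sketch of a deployment-specific override (the commit only sets `repository`, so `tag` here is an assumed conventional key with an illustrative value):

    image:
      repository: opea/chatqna
      # assumed conventional key, not part of this commit's diff
      tag: "latest"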