From 2256f44a25474999f4f3db2e9020afe9cd5e23b1 Mon Sep 17 00:00:00 2001
From: "Ruben S. Montero"
Date: Mon, 27 Jan 2025 20:33:07 +0100
Subject: [PATCH] Update 04132560-bebf-013d-a767-7875a4a4f528.yaml

---
 appliances/service/04132560-bebf-013d-a767-7875a4a4f528.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/appliances/service/04132560-bebf-013d-a767-7875a4a4f528.yaml b/appliances/service/04132560-bebf-013d-a767-7875a4a4f528.yaml
index f8acc16..ed448fc 100644
--- a/appliances/service/04132560-bebf-013d-a767-7875a4a4f528.yaml
+++ b/appliances/service/04132560-bebf-013d-a767-7875a4a4f528.yaml
@@ -44,7 +44,7 @@ opennebula_template:
     oneapp_ray_api_route: O|text|Route path for the REST API exposed by the RAY application.| |/chat
     oneapp_ray_application_file64: O|text64|Python application to be deployed in the RAY framework (base64).| |
     oneapp_ray_application_url: O|text|URL to download the Python application for the RAY framework.| |
-    oneapp_ray_model_id: M|list-multiple|Determines the AI model(s) to use for inference.|meta-llama/Llama-3.2-1B-Instruct,Qwen/Qwen2.5-1.5B-Instruct|
+    oneapp_ray_model_id: M|list-multiple|Determines the AI model(s) to use for inference.|meta-llama/Llama-3.2-3B-Instruct,Qwen/Qwen2.5-3B-Instruct,utter-project/EuroLLM-1.7B-Instruct|
     oneapp_ray_model_prompt: O|text|Starting directive for model responses. | |You are a helpful assisstant. Answer the question.
     oneapp_ray_model_temperature: M|number-float|Temperature parameter for model outputs, controlling the randomness of generated text.| |0.1
     oneapp_ray_model_token: M|password|Provides the authentication token required to access the specified AI model. | |