diff --git a/docs/confident-ai/confident-ai-advanced-llm-connection.mdx b/docs/confident-ai/confident-ai-advanced-llm-connection.mdx
index d2fe3ea87..e0d53ae20 100644
--- a/docs/confident-ai/confident-ai-advanced-llm-connection.mdx
+++ b/docs/confident-ai/confident-ai-advanced-llm-connection.mdx
@@ -5,11 +5,20 @@ sidebar_label: Setup LLM Connection Endpoint
 ---
 
 :::tip
-This is particularly helpful if you wish to enable a no-code evaluation workflow for non-technical users, or simulating conversations to be evaluated in one click of a button.
+This is particularly helpful if you wish to:
+
+- Enable no-code evaluation workflows for non-technical users
+- Simulate conversations to be evaluated at the click of a button
+- Red-team LLM applications
+
 :::
 
 You can also setup an LLM endpoint that accepts a `POST` request over HTTPS to **enable users to run evaluations directly on the platform without having to code**, and start an evaluation through a click of a button instead. At a high level, you would have to provide Confident AI with the mappings to test case parameters such as the `actual_output`, `retrieval_context`, etc., and at evaluation time Confident AI will use the dataset and metrics settings you've specified for your experiment to unit test your LLM application.
 
+:::note
+If you're setting this up for red-teaming, you only need to return your model's `actual_output`; other fields such as `retrieval_context` are not required (although you're free to return additional data).
+:::
+
 ### Create an LLM Endpoint
 
 In order for Confident AI to reach your LLM application, you'll need to expose your LLM in a RESTFUL API endpoint that is accessible over the internet. These are the hard rules you **MUST** follow when setting up your endpoint:
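
The endpoint described in the diff above can be sketched with only the Python standard library. This is a minimal, hypothetical example, not the platform's reference implementation: the response fields `actual_output` and `retrieval_context` come from the test case parameters named in the docs, but the exact JSON request/response schema Confident AI expects is defined by the parameter mappings you configure on the platform, so treat the shapes here (including the assumed `input` request field and the `generate` helper) as placeholders.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def generate(prompt: str) -> dict:
    """Placeholder for your actual LLM application logic."""
    return {
        # `actual_output` alone is sufficient for red-teaming.
        "actual_output": f"Echo: {prompt}",
        # Optional: include retrieval context for RAG metrics.
        "retrieval_context": ["(retrieved chunk 1)", "(retrieved chunk 2)"],
    }


class LLMEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body sent to the endpoint.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        # Map the incoming prompt to test case parameters and
        # return them as JSON.
        body = json.dumps(generate(payload.get("input", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To serve (behind an HTTPS-terminating proxy in production):
# HTTPServer(("0.0.0.0", 8000), LLMEndpoint).serve_forever()
```

In practice you would put this behind a reverse proxy or hosting platform that terminates TLS, since the endpoint must be reachable over HTTPS on the public internet.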