diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
index d9454153..62613770 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
@@ -10,11 +10,13 @@ This vulnerability can lead to reputational and financial damage of the company

 ## Steps to Reproduce

-1. Navigate to the following URL:
+1. Navigate to the following URL: {{URL}}
 1. Inject the following prompt into the LLM:
+
 ```prompt
 {malicious prompt}
 ```
+
 1. Observe that the LLM returns sensitive data

 ## Proof of Concept (PoC)