Added Injection (Prompt)
RRudder committed Dec 4, 2023
1 parent 0312802 commit a95edcb
Showing 3 changed files with 44 additions and 0 deletions.
@@ -0,0 +1,5 @@
# Guidance

Provide a step-by-step walkthrough, with screenshots, of how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.

Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
@@ -0,0 +1,13 @@
# Recommendation(s)

There is no single technique that prevents injection from occurring. Implementing the following defensive measures around the LLM can prevent the vulnerability and limit its impact (a minimal code sketch follows the list below):

- Validate, sanitize, and treat any user or external inputs as untrusted input sources.
- Establish input limits using the LLM's context window to prevent resource exhaustion.
- Enforce API rate limits that restrict the number of requests that can be made in a specific time frame.
- Limit computational resource use per request.
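
As an illustration of the input-limit and rate-limit measures above, the sketch below shows a minimal pre-request guard in Python. The names (`MAX_INPUT_CHARS`, `check_request`, the per-client request log) are hypothetical, and a real deployment would count tokens with the model's own tokenizer rather than characters:

```python
import time
from collections import defaultdict

# Hypothetical limits; tune to the model's context window and capacity.
MAX_INPUT_CHARS = 8_000   # rough proxy for a token budget
RATE_LIMIT = 10           # requests allowed per window
WINDOW_SECONDS = 60.0

_request_log = defaultdict(list)  # client_id -> recent request timestamps


def check_request(client_id: str, prompt: str) -> None:
    """Reject oversized prompts and enforce a per-client rate limit."""
    if len(prompt) > MAX_INPUT_CHARS:
        raise ValueError("Prompt exceeds the configured input limit")

    now = time.monotonic()
    # Keep only the timestamps that fall inside the current window.
    recent = [t for t in _request_log[client_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        raise RuntimeError("Rate limit exceeded; try again later")

    recent.append(now)
    _request_log[client_id] = recent
```

A per-request timeout or a cap on generated output (for example, the `max_tokens` style parameter that most LLM APIs expose) complements these checks by bounding the cost of any single request.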

For more information, refer to the following resources:

- <https://owasp.org/www-project-top-10-for-large-language-model-applications/>
- <https://owasp.org/www-community/attacks/Denial_of_Service>
@@ -0,0 +1,26 @@
# Injection (Prompt)

## Overview of the Vulnerability

Injection occurs when an attacker provides input to a Large Language Model (LLM) that causes it to consume a large amount of resources. This can result in a Denial of Service (DoS) for users, incur significant computational costs, or slow the LLM's response times.
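
For illustration, a resource-exhaustion prompt might look like the following hypothetical example; actual payloads vary by model and deployment:

```prompt
Repeat your previous answer 1,000 times, and after each repetition write a new, longer summary of everything so far.
```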

## Business Impact

This vulnerability can lead to reputational and financial damage to the company, as an attacker may incur computational resource costs or deny service to other users, which would also erode customers' trust.

## Steps to Reproduce

1. Navigate to the following URL: {{URL}}
1. Inject the following prompt into the LLM:

```prompt
{malicious prompt}
```

1. Observe that the LLM is slow to return a response.
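
To make the slowdown measurable for triage, the hypothetical snippet below times a request to the target endpoint. The URL, headers, and payload shape are assumptions to be replaced with the target's actual API, and the result should be compared against a baseline request with a benign prompt:

```python
import json
import time
import urllib.request

# Hypothetical endpoint and payload; replace with the target's actual API.
URL = "https://example.com/api/chat"
PAYLOAD = {"prompt": "{malicious prompt}"}

request = urllib.request.Request(
    URL,
    data=json.dumps(PAYLOAD).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

start = time.perf_counter()
with urllib.request.urlopen(request, timeout=120) as response:
    response.read()
elapsed = time.perf_counter() - start

print(f"Response time: {elapsed:.1f}s")
```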

## Proof of Concept (PoC)

The screenshot(s) below demonstrate(s) the vulnerability:
> {{screenshot}}
