From c86929aba3abbb8bf93895ef35a37096123de482 Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Tue, 14 Nov 2023 17:05:47 +1000
Subject: [PATCH 01/12] Create Prompt Injection template
* Added Prompt Injection template, recommendation, guidance
* LLM Output Handling, Training Data Poisoning, and Excessive Agency/Permission Manipulation are all to be added shortly
---
.../llm_security/prompt_injection/guidance.md | 5 ++++
.../prompt_injection/recommendations.md | 13 ++++++++++
.../llm_security/prompt_injection/template.md | 26 +++++++++++++++++++
3 files changed, 44 insertions(+)
create mode 100644 submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
create mode 100644 submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
create mode 100644 submissions/description/ai_application_security/llm_security/prompt_injection/template.md
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
new file mode 100644
index 00000000..eb0fcf81
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the cryptographic weakness, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
new file mode 100644
index 00000000..99273ce9
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent prompt injection from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of prompt injection:
+
+- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on their behalf.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+
+For more information, refer to the following resources:
+
+-
+-
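
As an illustration of the privilege-control and approval measures listed above, a minimal sketch might look like the following (Python; the action names, the `handle_llm_action` helper, and the approval flag are hypothetical and not tied to any particular LLM framework):

```python
# Hypothetical illustration: gating LLM-requested actions behind least-privilege
# checks and explicit user approval. All names here are placeholders, not part
# of any specific framework or product.

ALLOWED_ACTIONS = {"search_docs", "summarize"}          # least privilege: minimal action set
PRIVILEGED_ACTIONS = {"delete_record", "send_email"}    # always require user approval

def handle_llm_action(action: str, user_approved: bool = False) -> str:
    """Treat the action requested by the LLM as untrusted input."""
    if action not in ALLOWED_ACTIONS | PRIVILEGED_ACTIONS:
        return f"Rejected: '{action}' is not an allowed action."
    if action in PRIVILEGED_ACTIONS and not user_approved:
        return f"Blocked: '{action}' requires explicit user approval."
    return f"Executing '{action}'."

if __name__ == "__main__":
    print(handle_llm_action("summarize"))                           # allowed
    print(handle_llm_action("delete_record"))                       # blocked until approved
    print(handle_llm_action("delete_record", user_approved=True))   # approved privileged action
    print(handle_llm_action("exfiltrate_data"))                     # rejected outright
```

The point of the sketch is that anything the model asks to do is checked against a minimal allowlist, and privileged actions never run without an explicit user confirmation.
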
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
new file mode 100644
index 00000000..402f9322
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
@@ -0,0 +1,26 @@
+# Prompt Injection
+
+## Overview of the Vulnerability
+
+Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a large language model (LLM) into executing unintended actions. The LLM has a lack of segregation between user input and the data within the LLM. This can allow an attacker to inject malicious prompts into an LLM which bypass safeguards and gain unauthorized access to data.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL:
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM returns sensitive data
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
From bb4e5fb663da4fe815b24b9133d0f9204e8aa689 Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Wed, 15 Nov 2023 15:46:21 +1000
Subject: [PATCH 02/12] Added Excessive Agency or Permission Manipulation
* Template, recommendation, and guidance .md files
---
.../guidance.md | 5 ++++
.../recommendations.md | 15 +++++++++++
.../template.md | 26 +++++++++++++++++++
3 files changed, 46 insertions(+)
create mode 100644 submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
create mode 100644 submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
create mode 100644 submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
new file mode 100644
index 00000000..e0f8194c
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
new file mode 100644
index 00000000..7b0995de
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
@@ -0,0 +1,15 @@
+# Recommendation(s)
+
+There is no single technique to prevent excessive agency or permission manipulation from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of the vulnerability:
+
+- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
+- Log and monitor all activity of the LLM and the systems it is connected to.
+
+For more information, refer to the following resources:
+
+-
+-
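
To illustrate the tool-limiting and logging measures above, a minimal sketch (Python; the tool registry, tool names, and dispatch wrapper are hypothetical) might look like:

```python
# Hypothetical sketch of limiting the tools an LLM agent may call and logging
# every invocation. Destructive tools are simply never registered.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_agent")

REGISTERED_TOOLS = {
    "lookup_order": lambda order_id: f"Order {order_id}: shipped",
    # e.g. refund_order is deliberately not registered for this agent.
}

def call_tool(tool_name: str, *args):
    """Dispatch an LLM-requested tool call, logging and rejecting anything unregistered."""
    log.info("LLM requested tool %r with args %r", tool_name, args)
    tool = REGISTERED_TOOLS.get(tool_name)
    if tool is None:
        log.warning("Rejected unregistered tool %r", tool_name)
        return "Tool not available."
    return tool(*args)

if __name__ == "__main__":
    print(call_tool("lookup_order", "A1234"))
    print(call_tool("refund_order", "A1234"))  # rejected and logged
```
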
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
new file mode 100644
index 00000000..498677bf
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
@@ -0,0 +1,26 @@
+# Excessive Agency/Permission Manipulation
+
+## Overview of the Vulnerability
+
+Excessive agency or permission manipulation occurs when an attacker is able to manipulate the LLM outputs to perform actions that are damaging or otherwise harmful. This usually stems from excessive functionality, permissions, or autonomy. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Enter the following prompt into the LLM:
+
+```prompt
+ {prompt}
+```
+
+1. Observe that the output from the LLM returns sensitive data
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
From 2aa5fec468a5d0d17ccb71b472a0dc4e6ba1f69b Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Wed, 15 Nov 2023 17:16:04 +1000
Subject: [PATCH 03/12] Added LLM Output Handling
* Template, Recommendation, Guidance
* Grammar fixes for Excessive Agency
---
.../template.md | 4 +--
.../llm_output_handling/guidance.md | 5 ++++
.../llm_output_handling/recommendations.md | 17 ++++++++++++
.../llm_output_handling/template.md | 26 +++++++++++++++++++
4 files changed, 50 insertions(+), 2 deletions(-)
create mode 100644 submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
create mode 100644 submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
create mode 100644 submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
index 498677bf..ffbf8e36 100644
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
@@ -1,8 +1,8 @@
-# Excessive Agency/Permission Manipulation
+# Excessive Agency or Permission Manipulation
## Overview of the Vulnerability
-Excessive agency or permission manipulation occurs when an attacker is able to manipulate the LLM outputs to perform actions that are damaging or otherwise harmful. This usually stems from excessive functionality, permissions, or autonomy. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
+Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that are damaging or otherwise harmful. This usually stems from excessive functionality, permissions, or autonomy. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
## Business Impact
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
new file mode 100644
index 00000000..e0f8194c
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
new file mode 100644
index 00000000..ba72c0b8
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
@@ -0,0 +1,17 @@
+# Recommendation(s)
+
+There is no single technique to prevent excessive insecure output handling from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of the vulnerability:
+
+- Apply input validation and sanitization principles for all LLM outputs.
+- Use JavaScript or Markdown to encode LLM model outputs that are returned to the user.
+- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
+- Log and monitor all activity of the LLM and the systems it is connected to.
+
+For more information, refer to the following resources:
+
+-
+-
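
As a rough illustration of treating LLM output as untrusted before it reaches the user, a minimal sketch using only the Python standard library might look like the following (the rendering wrapper is hypothetical):

```python
# Hypothetical sketch: treat LLM output as untrusted and HTML-encode it before
# it is rendered in a web page, so injected markup cannot execute downstream.
import html

def render_llm_output(raw_output: str) -> str:
    """HTML-encode the model output before it reaches the browser."""
    return f"<div class='llm-response'>{html.escape(raw_output)}</div>"

if __name__ == "__main__":
    malicious = 'Here is your answer <img src=x onerror="alert(1)">'
    print(render_llm_output(malicious))
    # The payload is rendered as inert text rather than executable markup.
```
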
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
new file mode 100644
index 00000000..9157f4c5
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
@@ -0,0 +1,26 @@
+# Large Language Model (LLM) Output Handling
+
+## Overview of the Vulnerability
+
+Insecure output handling within Large Language Models (LLMs) occurs when the output generated by the LLM is not sanitized or validated before being passed downstream to other systems. This can allow an attacker to indirectly gain access to systems, elevate their privileges, or gain arbitrary code execution by using crafted prompts.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due to an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM returns sensitive data
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
From ee744a94a16a07460cc674abd2d348977c0b58ec Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Thu, 16 Nov 2023 16:09:03 +1000
Subject: [PATCH 04/12] Added Training Data Poisoning
---
.../training_data_poisoning/guidance.md | 5 ++++
.../recommendations.md | 13 ++++++++++
.../training_data_poisoning/template.md | 26 +++++++++++++++++++
3 files changed, 44 insertions(+)
create mode 100644 submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
create mode 100644 submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
create mode 100644 submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
new file mode 100644
index 00000000..e0f8194c
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
new file mode 100644
index 00000000..edf338e5
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent excessive agency or permission manipulation from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of the vulnerability:
+
+- Verify the training data supply chain, its content, as well as its sources.
+- Ensure the legitimacy of the data throughout all stages of training.
+- Strictly vet the data inputs and include filtering and sanitization.
+- Use testing and detection mechanisms to monitor the model's outputs and detect any data poisoning attempts.
+
+For more information, refer to the following resources:
+
+-
+-
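
To illustrate the source-verification and input-filtering measures above, a minimal sketch (Python standard library only; the manifest, file names, and filter rule are placeholders) might look like:

```python
# Hypothetical sketch of vetting training data before use: verify each file's
# checksum against a manifest of trusted sources and drop records that fail a
# basic content filter. Paths, hashes, and the filter rule are placeholders.
import hashlib
from pathlib import Path

TRUSTED_MANIFEST = {
    # filename -> expected SHA-256 published by the trusted data source (placeholder value)
    "corpus_part1.txt": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_source(path: Path) -> bool:
    expected = TRUSTED_MANIFEST.get(path.name)
    return expected is not None and file_checksum(path) == expected

def filter_records(lines):
    """Drop obviously suspicious records (illustrative rule only)."""
    return [line for line in lines if "IGNORE ALL PREVIOUS INSTRUCTIONS" not in line.upper()]

if __name__ == "__main__":
    sample = Path("corpus_part1.txt")
    if sample.exists() and verify_source(sample):
        clean = filter_records(sample.read_text().splitlines())
        print(f"{len(clean)} records accepted for training")
    else:
        print("Source rejected: missing from manifest or checksum mismatch")
```
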
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
new file mode 100644
index 00000000..5381d5c8
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
@@ -0,0 +1,26 @@
+# Training Data Poisoning
+
+## Overview of the Vulnerability
+
+Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. Data poisoning can result in an attacker affecting the integrity of the LLM by causing unreliable, biased, or unethical outputs from the model.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due an attacker compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Enter the following prompt into the LLM:
+
+```prompt
+ {prompt}
+```
+
+1. Observe that the output from the LLM returns a compromised result
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
From a780b7e6739b8c06ee41c625863d3b310f4d5cea Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 1 Dec 2023 13:53:06 +1000
Subject: [PATCH 05/12] Grammatical fixes to Prompt Injection
---
.../llm_security/prompt_injection/guidance.md | 2 +-
.../llm_security/prompt_injection/recommendations.md | 4 ++--
.../llm_security/prompt_injection/template.md | 4 +---
3 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
index eb0fcf81..e0f8194c 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
@@ -1,5 +1,5 @@
# Guidance
-Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the cryptographic weakness, how you identified it, and what actions you were able to perform as a result.
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
index 99273ce9..5663a90f 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
@@ -1,8 +1,8 @@
# Recommendation(s)
-There is no single technique to prevent prompt injection from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of prompt injection:
+There is no single technique to prevent prompt injection from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of this vulnerability:
-- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
- For privileged operations, require user interaction to approve any authorized action that would be performed on their behalf.
- Treat user input, external input, and the LLM as untrusted input sources.
- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
index 402f9322..d9454153 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
@@ -2,7 +2,7 @@
## Overview of the Vulnerability
-Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a large language model (LLM) into executing unintended actions. The LLM has a lack of segregation between user input and the data within the LLM. This can allow an attacker to inject malicious prompts into an LLM which bypass safeguards and gain unauthorized access to data.
+Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a Large Language Model (LLM) into executing unintended actions. The LLM has a lack of segregation between user input and the data within the LLM, which influences the generated output. This can allow an attacker to inject malicious prompts into an LLM which bypass safeguards to gain unauthorized access to data.
## Business Impact
@@ -12,11 +12,9 @@ This vulnerability can lead to reputational and financial damage of the company
1. Navigate to the following URL:
1. Inject the following prompt into the LLM:
-
```prompt
{malicious prompt}
```
-
1. Observe that the LLM returns sensitive data
## Proof of Concept (PoC)
From d15f3c7451d2a2664094c9526ad251e11c276782 Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 1 Dec 2023 14:04:48 +1000
Subject: [PATCH 06/12] Fixed linter errors
---
.../llm_security/prompt_injection/template.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
index d9454153..62613770 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
@@ -10,11 +10,13 @@ This vulnerability can lead to reputational and financial damage of the company
## Steps to Reproduce
-1. Navigate to the following URL:
+1. Navigate to the following URL: {{URL}}
1. Inject the following prompt into the LLM:
+
```prompt
{malicious prompt}
```
+
1. Observe that the LLM returns sensitive data
## Proof of Concept (PoC)
From e5f9199d267a97427c548ad2752f20da328467f3 Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 1 Dec 2023 16:38:01 +1000
Subject: [PATCH 07/12] Update
submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
Co-authored-by: Rami
---
.../excessive_agency_permission_manipulation/template.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
index ffbf8e36..6ecdf8fc 100644
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
@@ -6,7 +6,7 @@ Excessive agency or permission manipulation occurs when an attacker is able to m
## Business Impact
-This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM's decision making or accesses unauthorized data. These circumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.
## Steps to Reproduce
From 41ca660e201fe812bbdc63931df0018fc6808f94 Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 1 Dec 2023 16:44:44 +1000
Subject: [PATCH 08/12] Update
submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
Co-authored-by: Rami
---
.../llm_security/llm_output_handling/recommendations.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
index ba72c0b8..dacce2f8 100644
--- a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
@@ -5,7 +5,7 @@ There is no single technique to prevent excessive insecure output handling from
- Apply input validation and sanitization principles for all LLM outputs.
- Use JavaScript or Markdown to encode LLM model outputs that are returned to the user.
- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
-- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
+Require user interaction for approving any action that performs privileged operations on their behalf.
- Treat user input, external input, and the LLM as untrusted input sources.
- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
From 67cf456f47e02327646a772ff93b476329826b6e Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 1 Dec 2023 16:56:54 +1000
Subject: [PATCH 09/12] Update
submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
Co-authored-by: Rami
---
.../llm_security/training_data_poisoning/template.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
index 5381d5c8..e52f3095 100644
--- a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
@@ -6,7 +6,7 @@ Training data poisoning occurs when an attacker manipulates the training data to
## Business Impact
-This vulnerability can lead to reputational and financial damage of the company due an attacker compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM's decision making or accesses unauthorized data. These circumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.
## Steps to Reproduce
From da4066589dcd8a61791f5ddfa07b0d07b2eb5dbb Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Fri, 1 Dec 2023 17:07:33 +1000
Subject: [PATCH 10/12] Addressing feedback
Thank you @drunkrhin0
---
.../excessive_agency_permission_manipulation/guidance.md | 2 +-
.../recommendations.md | 6 +++---
.../excessive_agency_permission_manipulation/template.md | 2 +-
.../llm_security/llm_output_handling/guidance.md | 2 +-
.../llm_security/llm_output_handling/recommendations.md | 8 ++++----
.../llm_security/prompt_injection/guidance.md | 2 +-
.../llm_security/prompt_injection/recommendations.md | 2 +-
.../llm_security/training_data_poisoning/guidance.md | 2 +-
.../training_data_poisoning/recommendations.md | 4 ++--
.../llm_security/training_data_poisoning/template.md | 2 +-
10 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
index e0f8194c..ee88d9d2 100644
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
@@ -1,5 +1,5 @@
# Guidance
-Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
index 7b0995de..5b196230 100644
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
@@ -1,9 +1,9 @@
# Recommendation(s)
-There is no single technique to prevent excessive agency or permission manipulation from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of the vulnerability:
+There is no single technique to prevent excessive agency or permission manipulation from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
-- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
-- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
+- Use Role Based Access Controls (RBAC) for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- Require user interaction to approve any authorized action that will perform privileged operations on their behalf.
- Treat user input, external input, and the LLM as untrusted input sources.
- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
index 6ecdf8fc..df4e957a 100644
--- a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
@@ -2,7 +2,7 @@
## Overview of the Vulnerability
-Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that are damaging or otherwise harmful. This usually stems from excessive functionality, permissions, or autonomy. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
+Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that may be damaging or otherwise harmful. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
## Business Impact
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
index e0f8194c..ee88d9d2 100644
--- a/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
@@ -1,5 +1,5 @@
# Guidance
-Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
index dacce2f8..ee36168a 100644
--- a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
@@ -1,11 +1,11 @@
# Recommendation(s)
-There is no single technique to prevent excessive insecure output handling from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of the vulnerability:
+There is no single technique to prevent insecure output handling from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
- Apply input validation and sanitization principles for all LLM outputs.
-- Use JavaScript or Markdown to encode LLM model outputs that are returned to the user.
-- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
-Require user interaction for approving any action that performs privileged operations on their behalf.
+- Use JavaScript or Markdown to sanitize LLM model outputs that are returned to the user.
+- Use Role Based Access Controls (RBAC) or Identity Access Management (IAM) for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on their behalf.
- Treat user input, external input, and the LLM as untrusted input sources.
- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
index e0f8194c..ee88d9d2 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
@@ -1,5 +1,5 @@
# Guidance
-Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
index 5663a90f..c1fa52d5 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
@@ -1,6 +1,6 @@
# Recommendation(s)
-There is no single technique to prevent prompt injection from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of this vulnerability:
+There is no single technique to prevent prompt injection from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
- For privileged operations, require user interaction to approve any authorized action that would be performed on their behalf.
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
index e0f8194c..ee88d9d2 100644
--- a/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
@@ -1,5 +1,5 @@
# Guidance
-Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
index edf338e5..0ea4c6a0 100644
--- a/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
@@ -1,8 +1,8 @@
# Recommendation(s)
-There is no single technique to prevent excessive agency or permission manipulation from occurring. However, implementing the following defensive measures within the LLM application can prevent and limit the impact of the vulnerability:
+There is no single technique to prevent training data poisoning from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
-- Verify the training data supply chain, its content, as well as its sources.
+- Verify the integrity, content, and sources of the training data.
- Ensure the legitimacy of the data throughout all stages of training.
- Strictly vet the data inputs and include filtering and sanitization.
- Use testing and detection mechanisms to monitor the model's outputs and detect any data poisoning attempts.
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
index e52f3095..2c6ae7dc 100644
--- a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
@@ -2,7 +2,7 @@
## Overview of the Vulnerability
-Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. Data poisoning can result in an attacker affecting the integrity of the LLM by causing unreliable, biased, or unethical outputs from the model.
+Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. An attacker can undermine the integrity of the LLM by poisoning the training data, resulting in outputs that are unreliable, biased, or unethical. This breach of integrity significantly impacts the model's trustworthiness and accuracy, posing a serious threat to the overall effectiveness and security of the LLM.
## Business Impact
From 03128027729a93d1594b9794704e2006db631ffe Mon Sep 17 00:00:00 2001
From: RRudder <96507400+RRudder@users.noreply.github.com>
Date: Mon, 4 Dec 2023 11:21:05 +1000
Subject: [PATCH 11/12] Update
submissions/description/ai_application_security/llm_security/prompt_injection/template.md
Co-authored-by: Rami
---
.../llm_security/prompt_injection/template.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
index 62613770..b41b8c17 100644
--- a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
@@ -2,7 +2,7 @@
## Overview of the Vulnerability
-Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a Large Language Model (LLM) into executing unintended actions. The LLM has a lack of segregation between user input and the data within the LLM, which influences the generated output. This can allow an attacker to inject malicious prompts into an LLM which bypass safeguards to gain unauthorized access to data.
+Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a Large Language Model (LLM) into executing unintended actions. The LLM's inability to distinguish user input from its dataset influences the output it generates. This flaw allows attackers to exploit the system by injecting malicious prompts, thereby bypassing safeguards.
## Business Impact
From a95edcb910354e5c47c11b7fececc5cd99636d80 Mon Sep 17 00:00:00 2001
From: Ryan Rudder <96507400+RRudder@users.noreply.github.com>
Date: Mon, 4 Dec 2023 13:18:23 +1000
Subject: [PATCH 12/12] Added Injection (Prompt)
---
.../injection_prompt/guidance.md | 5 ++++
.../injection_prompt/recommendations.md | 13 ++++++++++
.../injection_prompt/template.md | 26 +++++++++++++++++++
3 files changed, 44 insertions(+)
create mode 100644 submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
create mode 100644 submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
create mode 100644 submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
new file mode 100644
index 00000000..99482c72
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent injection from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Validate, sanitize, and treat any user or external inputs as untrusted input sources.
+- Establish input limits using the LLM's context window to prevent resource exhaustion.
+- Enforce API rate limits that restrict the number of requests that can be made in a specific time frame.
+- Limit computational resource use per request.
+
+For more information, refer to the following resources:
+
+-
+-
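
As an illustration of the input-limit and rate-limit measures above, a minimal sketch (Python standard library only; the limits and client identifier are illustrative values) might look like:

```python
# Hypothetical sketch of the input-limit and rate-limit measures: cap the prompt
# length before it reaches the model and restrict requests per client within a
# sliding time window.
import time
from collections import defaultdict, deque

MAX_PROMPT_CHARS = 4_000       # stand-in for a context-window based limit
MAX_REQUESTS = 10              # per client
WINDOW_SECONDS = 60

_request_log = defaultdict(deque)

def accept_request(client_id: str, prompt: str) -> bool:
    now = time.monotonic()
    history = _request_log[client_id]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()                       # discard requests outside the window
    if len(history) >= MAX_REQUESTS:
        return False                            # rate limit exceeded
    if len(prompt) > MAX_PROMPT_CHARS:
        return False                            # prompt too large for a single request
    history.append(now)
    return True

if __name__ == "__main__":
    print(accept_request("client-1", "Summarize this document."))   # True
    print(accept_request("client-1", "x" * 10_000))                  # False: prompt too large
```
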
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
new file mode 100644
index 00000000..dabf0cc3
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
@@ -0,0 +1,26 @@
+# Injection (Prompt)
+
+## Overview of the Vulnerability
+
+Injection occurs when an attacker provides inputs to a Large Language Model (LLM) that cause it to consume a large amount of resources. This can result in a Denial of Service (DoS) to users, incur large computational resource costs, or slow the LLM's response times.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due to an attacker incurring computational resource costs or denying service to other users, which would also impact customers' trust.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM is slow to return a response
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}