diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
new file mode 100644
index 00000000..5b196230
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/recommendations.md
@@ -0,0 +1,15 @@
+# Recommendation(s)
+
+There is no single technique to prevent excessive agency or permission manipulation from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Use Role Based Access Controls (RBAC) for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- Require user interaction to approve any authorized action that will perform privileged operations on their behalf.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
+- Log and monitor all activity of the LLM and the systems it is connected to.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
new file mode 100644
index 00000000..df4e957a
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/excessive_agency_permission_manipulation/template.md
@@ -0,0 +1,26 @@
+# Excessive Agency or Permission Manipulation
+
+## Overview of the Vulnerability
+
+Excessive agency or permission manipulation occurs when an attacker is able to manipulate the Large Language Model (LLM) outputs to perform actions that may be damaging or otherwise harmful. An attacker can abuse excessive agency or permission manipulation within the LLM to gain access to, modify, or delete data, without any confirmation from a user.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL:
+1. Enter the following prompt into the LLM:
+
+```prompt
+ {prompt}
+```
+
+1. Observe that the output from the LLM returns sensitive data
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
new file mode 100644
index 00000000..ee36168a
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/recommendations.md
@@ -0,0 +1,17 @@
+# Recommendation(s)
+
+There is no single technique to prevent excessive insecure output handling from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Apply input validation and sanitization principles for all LLM outputs.
+- Use JavaScript or Markdown to sanitize LLM model outputs that are returned to the user.
+- Use Role Based Access Controls (RBAC) or Identity Access Management (IAM) for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+- Limit the tools, plugins, and functions that the LLM can access to the minimum necessary for intended functionality.
+- Log and monitor all activity of the LLM and the systems it is connected to.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md b/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
new file mode 100644
index 00000000..9157f4c5
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/llm_output_handling/template.md
@@ -0,0 +1,26 @@
+# Large Language Model (LLM) Output Handling
+
+## Overview of the Vulnerability
+
+Insecure output handling within Large Language Models (LLMs) occurs when the output generated by the LLM is not sanitized or validated before being passed downstream to other systems. This can allow an attacker to indirectly gain access to systems, elevate their privileges, or gain arbitrary code execution by using crafted prompts.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL:
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM returns sensitive data
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
new file mode 100644
index 00000000..c1fa52d5
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent prompt injection from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Use privilege controls for access to backend systems or when performing privileged operations. Apply the principle of least privilege to restrict the LLM's access to backend systems to that which is strictly necessary for its intended functionality.
+- For privileged operations, require user interaction to approve any authorized action that would be performed on behalf of them.
+- Treat user input, external input, and the LLM as untrusted input sources.
+- Establish trust boundaries between external sources, the LLM, any plugins, and any neighboring systems.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/ai_application_security/llm_security/prompt_injection/template.md b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
new file mode 100644
index 00000000..b41b8c17
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/prompt_injection/template.md
@@ -0,0 +1,26 @@
+# Prompt Injection
+
+## Overview of the Vulnerability
+
+Prompt injection occurs when an attacker crafts a malicious prompt that manipulates a Large Language Model (LLM) into executing unintended actions. The LLM's inability to distinguish user input from its dataset influences the output it generates. This flaw allows attackers to exploit the system by injecting malicious prompts, thereby bypassing safeguards.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due an attacker gaining access to unauthorized data or compromising the decision-making of the LLM, which would also impact customers' trust. The severity of the impact to the business is dependent on the sensitivity of the accessible data being transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM returns sensitive data
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
new file mode 100644
index 00000000..0ea4c6a0
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent excessive agency or permission manipulation from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Verify the integrity, content, and sources, of the training data.
+- Ensure the legitimacy of the data throughout all stages of training.
+- Strictly vet the data inputs and include filtering and sanitization.
+- Use testing and detection mechanisms to monitor the model's outputs and detect any data poisoning attempts.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
new file mode 100644
index 00000000..2c6ae7dc
--- /dev/null
+++ b/submissions/description/ai_application_security/llm_security/training_data_poisoning/template.md
@@ -0,0 +1,26 @@
+# Training Data Poisoning
+
+## Overview of the Vulnerability
+
+Training data poisoning occurs when an attacker manipulates the training data to intentionally compromise the output of the Large Language Model (LLM). This can be achieved by manipulating the pre-training data, fine-tuning data process, or the embedding process. An attacker can undermine the integrity of the LLM by poisoning the training data, resulting in outputs that are unreliable, biased, or unethical. This breach of integrity significantly impacts the model's trustworthiness and accuracy, posing a serious threat to the overall effectiveness and security of the LLM.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage if an attacker compromises the LLM decision making or accesses unauthorized data. These cirvumstances not only harm the company but also weaken users' trust. The extent of business impact depends on the sensitivity of the data transmitted by the application.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL:
+1. Enter the following prompt into the LLM:
+
+```prompt
+ {prompt}
+```
+
+1. Observe that the output from the LLM returns a compromised result
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
new file mode 100644
index 00000000..99482c72
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent injection from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Validate, sanitize, and treat any user or external inputs as untrusted input sources.
+- Establish input limits using the LLM's context window to prevent resource exhaustion.
+- Enforce API rate limits that restrict the number of requests that can be made in a specific time frame.
+- Limit computational resource use per request.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
new file mode 100644
index 00000000..dabf0cc3
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
@@ -0,0 +1,26 @@
+# Injection (Prompt)
+
+## Overview of the Vulnerability
+
+Injection occurs when an attacker provides inputs to a Large Language Model (LLM) which causes a large amount of resources to be consumed. This can result in a Denial of Service (DoS) to users, incur large amounts of computational resource costs, or slow response times of the LLM.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due an attacker incurring computational resource costs or denying service to other users, which would also impact customers' trust.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM is slow to return a response
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}