diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
new file mode 100644
index 00000000..ee88d9d2
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/guidance.md
@@ -0,0 +1,5 @@
+# Guidance
+
+Provide a step-by-step walkthrough with a screenshot on how you exploited the vulnerability. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the vulnerability, how you identified it, and what actions you were able to perform as a result.
+
+Attempt to escalate the vulnerability to perform additional actions. If this is possible, provide a full Proof of Concept (PoC).
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
new file mode 100644
index 00000000..99482c72
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/recommendations.md
@@ -0,0 +1,13 @@
+# Recommendation(s)
+
+There is no single technique to prevent injection from occurring. Implementing the following defensive measures in the LLM can prevent and limit the impact of the vulnerability:
+
+- Validate, sanitize, and treat any user or external inputs as untrusted input sources.
+- Establish input limits using the LLM's context window to prevent resource exhaustion.
+- Enforce API rate limits that restrict the number of requests that can be made in a specific time frame.
+- Limit computational resource use per request.
+
+For more information, refer to the following resources:
+
+-
+-
diff --git a/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
new file mode 100644
index 00000000..dabf0cc3
--- /dev/null
+++ b/submissions/description/application_level_denial_of_service_dos/excessive_resource_consumption/injection_prompt/template.md
@@ -0,0 +1,26 @@
+# Injection (Prompt)
+
+## Overview of the Vulnerability
+
+Injection occurs when an attacker provides inputs to a Large Language Model (LLM) which causes a large amount of resources to be consumed. This can result in a Denial of Service (DoS) to users, incur large amounts of computational resource costs, or slow response times of the LLM.
+
+## Business Impact
+
+This vulnerability can lead to reputational and financial damage of the company due an attacker incurring computational resource costs or denying service to other users, which would also impact customers' trust.
+
+## Steps to Reproduce
+
+1. Navigate to the following URL: {{URL}}
+1. Inject the following prompt into the LLM:
+
+```prompt
+ {malicious prompt}
+```
+
+1. Observe that the LLM is slow to return a response
+
+## Proof of Concept (PoC)
+
+The screenshot(s) below demonstrate(s) the vulnerability:
+>
+> {{screenshot}}