# 🚀 LocalPrompt - AI Prompt Engineer

LocalPrompt is an AI-powered tool for refining and optimizing AI prompts, helping users turn rough ideas into high-quality, structured prompts that work effectively with advanced AI models.

It runs entirely locally on Mistral 7B, ensuring privacy and efficiency without relying on external cloud services. It is ideal for AI prompt engineers, machine learning developers, and researchers who want to run AI models offline.

## 🔍 Why Use Local AI Models?

- 🔹 **Privacy & Security** - no external API calls; all processing happens on your own machine
- 🔹 **Cost-Effective** - avoid expensive API costs by running AI models completely offline
- 🔹 **Customization** - fine-tune the model and modify it for specific use cases
- 🔹 **Performance** - run low-latency AI models optimized for your local hardware (CPU/GPU)

## 📌 Features

- **Refines AI Prompts** - converts rough ideas into high-quality, structured prompts
- **Uses Mistral 7B** - locally hosted Mistral-7B-Instruct for privacy and efficiency
- **Customizable** - supports different prompt optimizations and fine-tuning
- **FastAPI Backend** - simple API with easy-to-use REST endpoints

## 📂 Folder Structure

LocalPrompt/
├── app/
│   ├── api/                  # API routes
│   │   └── prompt.py         # Prompt engineering API
│   ├── config/               # Core configurations
│   │   └── settings.py       # Environment & settings
│   ├── services/             # Business logic
│   │   └── prompt_service.py # Prompt processing logic
│   └── main.py               # FastAPI entry point
├── models/                   # AI model storage (ignored via .gitignore)
├── .env                      # Environment variables
├── .gitignore                # Git ignore file
├── requirements.txt          # Python dependencies
└── README.md                 # Project documentation
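Inside `services/prompt_service.py`, the core job is wrapping the user's rough idea in Mistral's instruction format before handing it to the model. A minimal sketch of that wrapping step (the function name and template wording are illustrative assumptions, not the repository's actual code):

```python
def build_refinement_prompt(idea: str) -> str:
    """Wrap a rough idea in Mistral's [INST] instruction template.

    Hypothetical helper; the real prompt_service.py may differ.
    """
    instruction = (
        "You are a prompt engineer. Rewrite the following rough idea "
        "as a detailed, well-structured prompt for a large language model.\n\n"
        f"Idea: {idea}"
    )
    # Mistral-7B-Instruct expects prompts wrapped in [INST] ... [/INST]
    return f"<s>[INST] {instruction} [/INST]"
```

The resulting string would then be passed to a local runtime such as llama-cpp-python along with the sampling parameters (`max_tokens`, `temperature`, `top_k`, `top_p`) shown in the Usage section.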

## 🛠 Installation & Setup

### 1️⃣ Clone the Repository

git clone https://github.com/sawadkk/LocalPrompt.git
cd LocalPrompt

### 2️⃣ Create a Virtual Environment

python -m venv venv
source venv/bin/activate  # On macOS/Linux
venv\Scripts\activate     # On Windows

### 3️⃣ Install Dependencies

pip install -r requirements.txt

### 4️⃣ Download the Mistral 7B Model

mkdir -p models
curl -L -o models/mistral-7b-instruct-v0.1.Q4_K_M.gguf "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF/resolve/main/mistral-7b-instruct-v0.1.Q4_K_M.gguf?download=true"

### 5️⃣ Run the API

uvicorn app.main:app --reload

## 🔥 Usage

### API Endpoint

- **URL:** `POST /api/generate_prompt/`
- **Request:**
{
    "idea": "Create a sci-fi world with AI robots.",
    "max_tokens": 100,
    "temperature": 0.7,
    "top_k": 40,
    "top_p": 0.9
}
- **Response:**
{
    "original_idea": "Create a sci-fi world with AI robots.",
    "refined_prompt": "Design a detailed sci-fi world featuring advanced AI-driven civilizations...",
    "temperature": 0.7,
    "top_k": 40,
    "top_p": 0.9
}
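Once the server is running, the endpoint can be called from any HTTP client. A sketch using only the Python standard library (the payload fields mirror the request shown above; the host and port assume uvicorn's defaults):

```python
import json
from urllib import request

API_PATH = "/api/generate_prompt/"


def build_payload(idea: str, max_tokens: int = 100, temperature: float = 0.7,
                  top_k: int = 40, top_p: float = 0.9) -> dict:
    """Assemble the request body documented above."""
    return {"idea": idea, "max_tokens": max_tokens, "temperature": temperature,
            "top_k": top_k, "top_p": top_p}


def refine_prompt(idea: str, base_url: str = "http://127.0.0.1:8000") -> dict:
    """POST an idea to a running LocalPrompt server and return its JSON reply."""
    data = json.dumps(build_payload(idea)).encode("utf-8")
    req = request.Request(base_url + API_PATH, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `refine_prompt("Create a sci-fi world with AI robots.")` should return a JSON object shaped like the response example above.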

## 📌 Future Enhancements

- 🔹 **Fine-tuning** - train Mistral 7B for more precise prompt refinement
- 🔹 **Web UI** - add a frontend interface for prompt generation
- 🔹 **Multi-model Support** - integrate with OpenAI, LLaMA, and DeepSeek for comparisons

## ✨ Contributing

1. Fork the repository
2. Create a new branch: `git checkout -b feature-name`
3. Commit your changes: `git commit -m "Add new feature"`
4. Push the branch: `git push origin feature-name`
5. Open a Pull Request 🎉