OpenAI-and-Ollama-based-RAG-Engine


This repository contains a chatbot that uses the LlamaIndex library and OpenAI's GPT models for intelligent question-answering over your own text. The chatbot ingests .txt files, builds a persistent vector index from them, and supports interactive querying with conversational memory.

Features

  • Text File Ingestion: Reads .txt files from a designated folder.
  • Indexing: Creates and persists a vector-based index using OpenAI embeddings.
  • Memory Support: Includes memory buffers for conversational context.
  • Interactive Chat: Provides a terminal-based chat interface for querying indexed documents.
  • Persistence: Saves chat history and indexes for reusability across sessions.
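The ingestion, indexing, and persistence steps above can be sketched with LlamaIndex's high-level API. This is a minimal sketch, not the script's actual code: it assumes recent `llama-index` module paths, the `data/` folder described below, and a hypothetical `storage/` persist directory.

```python
from pathlib import Path


def collect_txt_files(folder):
    """List the .txt files the chatbot would ingest from `folder`."""
    return sorted(str(p) for p in Path(folder).glob("*.txt"))


def build_or_load_index(data_dir="data", persist_dir="storage"):
    """Build a vector index over the .txt files, or reload a persisted one.

    Requires the llama-index packages and OPENAI_API_KEY in the environment,
    so the imports are kept local to this function.
    """
    from llama_index.core import (
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
        load_index_from_storage,
    )

    if Path(persist_dir).exists():
        # Reuse the persisted index instead of re-embedding on every run.
        ctx = StorageContext.from_defaults(persist_dir=persist_dir)
        return load_index_from_storage(ctx)

    documents = SimpleDirectoryReader(
        input_files=collect_txt_files(data_dir)
    ).load_data()
    index = VectorStoreIndex.from_documents(documents)  # embeds via OpenAI
    index.storage_context.persist(persist_dir=persist_dir)
    return index
```

Persisting the index means embeddings are paid for once; later sessions reload them from disk.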

Installation

Prerequisites

  • Python 3.8 or later
  • OpenAI API key

Setup

  1. Clone the repository:

    git clone <repository-url>
    cd <repository-name>
  2. Set up a virtual environment:

    python -m venv chatbot
    
    # On Windows
    Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
    .\chatbot\Scripts\Activate.ps1
    
    # On macOS/Linux
    source chatbot/bin/activate
  3. Install dependencies:

    python -m pip install --upgrade pip
    pip install llama-index-llms-openai
    pip install python-dotenv
  4. Add environment variables: Create a .env file in the project root and add your OpenAI API key:

    OPENAI_API_KEY=your_openai_api_key
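The script presumably reads this file with python-dotenv's `load_dotenv()`. A minimal stdlib equivalent (illustrative only, not a full .env parser) shows what that step does:

```python
import os


def load_env_file(path=".env"):
    """Read KEY=VALUE pairs from a .env-style file into os.environ.

    Existing environment variables win, mirroring load_dotenv()'s default.
    """
    values = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"').strip("'")
    for key, value in values.items():
        os.environ.setdefault(key, value)
    return values
```

Once the variable is in the environment, the OpenAI client picks it up from `os.environ["OPENAI_API_KEY"]`.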
    

Usage

  1. Prepare the Data:

    • Place all .txt files you want to query in a folder named data within the project directory.
  2. Run the Script:

    python chatbot.py
  3. Interact:

    • Enter your questions in the terminal.
    • Type exit to terminate the session.
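The interaction loop can be sketched as follows; `chat_engine` stands in for whatever LlamaIndex chat engine `chatbot.py` builds, and the `exit` handling matches the behaviour described above:

```python
def is_exit_command(line):
    """True when the user wants to end the session (case-insensitive)."""
    return line.strip().lower() == "exit"


def chat_loop(chat_engine):
    """Terminal REPL: read a question, print the engine's answer, repeat."""
    while True:
        question = input("You: ")
        if is_exit_command(question):
            print("Goodbye!")
            break
        response = chat_engine.chat(question)  # answers from the indexed docs
        print("Bot:", response)
```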

File Structure

<repository-name>/
├── data/                 # Directory for .txt files
├── chatbot.py            # Main script
├── .env                  # Environment variables
└── README.md             # Project documentation

Configuration

  • Chatbot Memory:

    • Memory is managed using a SimpleChatStore and ChatMemoryBuffer.
    • Adjust token_limit and chat_store_key in the script as needed.
  • Models:

    • Embeddings are generated with OpenAI's embedding model; responses use the gpt-3.5-turbo chat model. You can change either in the script if desired.
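The memory wiring described above would look roughly like this. It is a sketch assuming recent `llama-index` module paths; `token_limit=3000`, `chat_store_key="user1"`, and the persist path are illustrative values, not the script's actual settings.

```python
def make_memory(persist_path="chat_store.json", token_limit=3000, key="user1"):
    """Build a ChatMemoryBuffer backed by a persistable SimpleChatStore."""
    import os

    from llama_index.core.memory import ChatMemoryBuffer
    from llama_index.core.storage.chat_store import SimpleChatStore

    # Reload previous chat history if a persisted store exists.
    if os.path.exists(persist_path):
        chat_store = SimpleChatStore.from_persist_path(persist_path)
    else:
        chat_store = SimpleChatStore()

    memory = ChatMemoryBuffer.from_defaults(
        token_limit=token_limit,  # how much history the LLM sees per turn
        chat_store=chat_store,
        chat_store_key=key,       # separates histories per user/session
    )
    return memory, chat_store
```

After a session ends, `chat_store.persist(persist_path=...)` writes the history back to disk so the next run can resume the conversation.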

Dependencies

  • llama-index-llms-openai (installs the LlamaIndex core packages it depends on)
  • python-dotenv
Limitations

  • Requires an OpenAI API key for operation.
  • Designed for .txt files; other formats are not supported out of the box.

License

This project is licensed under the MIT License. See the LICENSE file for more details.

Acknowledgements

  • Built using the LlamaIndex library and OpenAI's GPT models.
