Before you get started with Local RAG, ensure you have:
- A local Ollama instance
- At least one model available within Ollama
  - `llama3:8b` or `llama2:7b` are good starter models
- Python 3.10+
WARNING: This application is untested on Windows Subsystem for Linux. For best results, please use a Linux host if possible.
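To confirm these prerequisites before continuing, a quick check from the shell is enough. The commands below assume Ollama is listening on its default port (11434):

```sh
# Check the Python version
python3 --version

# List the models available in your local Ollama instance
ollama list

# Confirm the Ollama API is reachable (should respond with "Ollama is running")
curl http://localhost:11434
```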
- In a new terminal, run Ollama with a model: `ollama run gemma:7b`
- In a new terminal, create a Python environment, install the dependencies, and run Streamlit:

  ```sh
  # Create and activate a virtual environment (pyenv-virtualenv shown here)
  pyenv virtualenv 3.12.0 rag
  pyenv activate rag

  # Install dependencies and launch the app
  pip install -r requirements.txt
  streamlit run main.py
  ```

  If you prefer, the environment can also be created with pipenv (`pip install pipenv`) or virtualenv (`virtualenv venv`) before installing the requirements.
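Streamlit serves on port 8501 by default, so once the app starts you can confirm it is reachable from another terminal. This assumes the default port has not been changed:

```sh
# Print the HTTP status code returned by the Streamlit server (expect 200)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8501
```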
Alternatively, you can run with Docker Compose:

```sh
docker compose up -d
```
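Once the containers are up, the standard Compose commands are useful for checking status and tailing output; no project-specific service names are assumed here:

```sh
# Show the status of the services defined in docker-compose.yml
docker compose ps

# Follow the application logs
docker compose logs -f
```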
If you are running Ollama as a service, you may need to add an additional configuration to your docker-compose.yml file:
```yaml
extra_hosts:
  - 'host.docker.internal:host-gateway'
```
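For placement, `extra_hosts` is a service-level key, so it belongs under the service that needs to reach the host. The sketch below is illustrative only; the service name is an assumption, not taken from this project's docker-compose.yml:

```yaml
services:
  local-rag:   # hypothetical service name; use the one already defined in your compose file
    # ...existing configuration for the service...
    extra_hosts:
      - 'host.docker.internal:host-gateway'
```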