Talk To Your Scheduler
Talk To Your Scheduler (T2YS) is a web app that helps users organize their lives: they can create and manage events in the scheduler, assisted by a locally run language model.
T2YS
├── LLM Module
│   ├── main.py
│   ├── models
│   │   └── llama-2-7b-chat.Q5_K_M.gguf
│   ├── requirements.txt
│   ├── schedule_example.json
│   └── venv
├── README.md
├── docker-compose.yml
└── frontend
    ├── Dockerfile
    ├── db.sqlite3
    ├── frontend
    │   ├── __init__.py
    │   ├── asgi.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    ├── manage.py
    ├── requirements.txt
    ├── schedule
    │   ├── __init__.py
    │   ├── admin.py
    │   ├── apps.py
    │   ├── migrations
    │   │   ├── 0001_initial.py
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── predict_model
    │   │   ├── README.md
    │   │   ├── __init__.py
    │   │   └── main.py
    │   ├── tests.py
    │   ├── urls.py
    │   ├── utils.py
    │   └── views.py
    ├── static
    │   ├── schedule
    │   │   └── calendar.js
    │   └── style.css
    └── templates
        ├── registration
        │   └── login.html
        └── schedule
            └── index.html
Edward Zhou ejz2@sfu.ca
Anh Khoa Nguyen anhkhoan@sfu.ca
Zeti Xiong zetix@sfu.ca
Anmol Sekhon ass53@sfu.ca
Download the Llama 2 model from https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf and place it in frontend/models/
Then, in the frontend folder, run:
pip install -r requirements.txt
python manage.py migrate
python manage.py runserver
Go to http://localhost:8000/ in your browser
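The setup steps above can be collected into a small helper script. This is a sketch, not part of the repo: it assumes you run it from the repository root and that curl is installed. It defaults to a dry run that only prints each command; set DRY_RUN=0 to actually execute them.

```shell
#!/usr/bin/env sh
set -eu

# Dry-run by default: print each command instead of executing it.
# Set DRY_RUN=0 to actually download, install, and start the server.
DRY_RUN="${DRY_RUN:-1}"

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# 1. Fetch the quantized Llama 2 model (URL from this README).
run curl -L --create-dirs -o frontend/models/llama-2-7b-chat.Q5_K_M.gguf \
    "https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf"

# 2-4. Install dependencies, apply migrations, start the dev server.
run pip install -r frontend/requirements.txt
run python frontend/manage.py migrate
run python frontend/manage.py runserver
```

Note the paths are given relative to the repo root, so the `cd frontend` from the steps above is not needed; `python frontend/manage.py migrate` is equivalent to running `python manage.py migrate` inside frontend/.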
https://github.com/eddyspaghette/T2YS
Django tutorial: https://docs.djangoproject.com/en/4.2/intro/
Llama_cpp tutorial: https://swharden.com/blog/2023-07-29-ai-chat-locally-with-python/
Model: TheBloke/Llama-2-7b-Chat (https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q5_K_M.gguf)
ChatGPT was used to assist with front-end development.
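The llama_cpp tutorial above feeds the model a plain text prompt, but Llama 2's chat variants expect the [INST]/<<SYS>> wrapper from Meta's model card. A minimal sketch of building such a prompt around a schedule; the helper name and the event structure are illustrative assumptions, not taken from this repo:

```python
import json

# Llama 2 chat prompt wrapper (format from Meta's Llama-2-chat model card).
PROMPT_TEMPLATE = "[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"


def build_prompt(schedule, question):
    """Embed the user's schedule as JSON in the system message (hypothetical helper)."""
    system = (
        "You are a scheduling assistant. The user's current events are:\n"
        + json.dumps(schedule, indent=2)
    )
    return PROMPT_TEMPLATE.format(system=system, user=question)


# Events shaped roughly like schedule_example.json might be (assumed structure):
events = [
    {"title": "CMPT lecture", "start": "2023-11-20T10:30", "end": "2023-11-20T12:20"},
]
prompt = build_prompt(events, "When am I free on Monday morning?")
```

The resulting string can then be passed to llama-cpp-python's completion call against the .gguf model downloaded earlier.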