diff --git a/README.md b/README.md
index 5755a9b..4bdd3c6 100644
--- a/README.md
+++ b/README.md
@@ -31,3 +31,6 @@
 I made this script for my personal use, where I run it on a quite beefy 20 core
 
 Make sure to compile/configure both whisper.cpp and ollama to make use of GPU if you have a suitable one.
+# Planned development
+
+After printing the summary, the script will switch to a conversational interface where you can ask the LLM questions about the transcribed content.
diff --git a/summarize.sh b/summarize.sh
old mode 100644
new mode 100755
index 3e881a2..9fba97c
--- a/summarize.sh
+++ b/summarize.sh
@@ -6,7 +6,7 @@
 # Thank you: yt-dlp, whisper.cpp, ffmpeg, ollama, curl and Mixtral
 #
-OLLAMA_HOST=192.168.0.6:11434
+OLLAMA_HOST=127.0.0.1:11434
 OLLAMA_MODEL=mixtral
 OLLAMA_CONTEXT_SIZE=8192
 WHISPER_CPP_PATH="$HOME/whisper.cpp"
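
The patch replaces one hardcoded Ollama host with another. A minimal sketch (an assumption, not part of the patch) of how the assignment could use a parameter-expansion default instead, so the LAN address or any other host can still be supplied via the environment without editing the script:

```shell
#!/bin/sh
# Fall back to the local Ollama instance unless OLLAMA_HOST is already set
# in the environment. This keeps the patched default while preserving the
# pre-patch ability to point at a remote machine, e.g.:
#   OLLAMA_HOST=192.168.0.6:11434 ./summarize.sh ...
OLLAMA_HOST="${OLLAMA_HOST:-127.0.0.1:11434}"
echo "$OLLAMA_HOST"
```

Run without the variable set, this prints the new default `127.0.0.1:11434`; exporting `OLLAMA_HOST` beforehand overrides it.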