v0.3.0

Released by @nalgeon on 06 Jan at 08:24

The extension now supports local AI models via Ollama. Useful for those who prefer not to pay for Copilot or OpenAI.

Here's how to set it up:

1. Download and install Ollama for your operating system.

2. Set the environment variables so that Ollama uses less memory (the sketch after this list shows one way to apply them):

   ```
   OLLAMA_KEEP_ALIVE = 1h
   OLLAMA_FLASH_ATTENTION = 1
   ```

3. Restart Ollama.

4. Download the Gemma 2 model:

   ```
   ollama pull gemma2:2b
   ```

5. Change the Proofread settings (a settings.json example follows below):

   ```
   proofread.ai.vendor = ollama
   proofread.ai.model = gemma2:2b
   ```

That's it!
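
If you run Ollama from a terminal, a minimal way to apply the variables from step 2 is to export them in the shell before starting the server. This is just a sketch; on Windows or with a service-managed install, set them as system environment variables instead:

```
# Keep the model loaded in memory for an hour between requests
export OLLAMA_KEEP_ALIVE=1h
# Enable flash attention to reduce memory usage
export OLLAMA_FLASH_ATTENTION=1
# Start the Ollama server with these settings in effect
ollama serve
```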
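
If your editor stores settings as JSON (as VS Code does in settings.json), the step 5 configuration would look roughly like this, assuming the keys map one-to-one to the setting names above:

```json
{
  "proofread.ai.vendor": "ollama",
  "proofread.ai.model": "gemma2:2b"
}
```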

Gemma 2 is a lightweight model that uses about 1GB of memory and works quickly without a GPU. For good results, send only a few paragraphs at a time for proofreading or translation.
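
To sanity-check the model on a short input before using it from the extension, you can query Ollama's REST API directly. This is a minimal sketch, and the prompt text is just an illustration:

```
# Ollama listens on port 11434 by default;
# "stream": false returns the whole response at once.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2:2b",
  "prompt": "Proofread this text: Their going to recieve the package tomorow.",
  "stream": false
}'
```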

For larger documents or improved results, try models like `mistral` or `mistral-nemo`.
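
Switching models follows the same pattern as steps 4 and 5: pull the model, then point `proofread.ai.model` at it. For example:

```
# Download a larger model (needs more memory than gemma2:2b)
ollama pull mistral-nemo
```

Then set `proofread.ai.model = mistral-nemo` in the Proofread settings.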