Releases · nalgeon/vscode-proofread
v0.3.0
The extension now supports local AI models via Ollama. Useful for those who prefer not to pay for Copilot or OpenAI.
Here's how to set it up:
- Download and install Ollama for your operating system.
- Set the environment variables so the model stays loaded between requests and uses less memory:
  OLLAMA_KEEP_ALIVE=1h
  OLLAMA_FLASH_ATTENTION=1
- Restart Ollama.
- Download the Gemma 2 AI model:
  ollama pull gemma2:2b
- Change the Proofread settings:
  proofread.ai.vendor = ollama
  proofread.ai.model = gemma2:2b
That's it!
Gemma 2 is a lightweight model that uses about 1GB of memory and works quickly without a GPU. For good results, send only a few paragraphs at a time for proofreading or translation.
For larger documents or improved results, try models like mistral or mistral-nemo.
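Under the hood, Ollama serves an HTTP API on localhost:11434, which the extension can call once proofread.ai.vendor is set to ollama. Here's a minimal sketch of what such a request can look like, assuming the default port and the gemma2:2b model pulled above; it is only an illustration, not the extension's actual code.

```typescript
// Minimal sketch: ask a local Ollama server to proofread a piece of text.
// Assumes Ollama runs on the default port 11434 and gemma2:2b has been pulled.
interface OllamaChatResponse {
  message: { role: string; content: string };
}

async function proofreadWithOllama(text: string): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma2:2b",
      stream: false,
      messages: [
        { role: "system", content: "Proofread the text. Fix grammar and spelling, keep the meaning." },
        { role: "user", content: text },
      ],
    }),
  });
  const data = (await response.json()) as OllamaChatResponse;
  return data.message.content;
}

// Example: proofreadWithOllama("Their is a mistake hear.").then(console.log);
```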
v0.2.0
v0.1.1
Proofread and translate text in VS Code. Just select it in the editor and run Proofread: Proofread Text or Proofread: Translate Text from the command palette. That's it!
(Demo video: Proofread.Demo.mp4)
Notable features:
- Configurable language model and prompts (with good defaults).
- Supports any target language provided by OpenAI (default is English).
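Both commands operate on the current editor selection. As a rough illustration only (not the extension's source), here's how such a command can be wired up with the VS Code extension API; the command id and the proofread() helper below are hypothetical stand-ins.

```typescript
import * as vscode from "vscode";

// Hypothetical stand-in for the actual language-model call
// (e.g. OpenAI or a local Ollama server, depending on settings).
async function proofread(text: string): Promise<string> {
  return text; // stub: return the text unchanged
}

export function activate(context: vscode.ExtensionContext) {
  // "proofread.proofreadText" is an assumed command id for this sketch.
  const command = vscode.commands.registerCommand("proofread.proofreadText", async () => {
    const editor = vscode.window.activeTextEditor;
    if (!editor || editor.selection.isEmpty) {
      vscode.window.showInformationMessage("Select some text first.");
      return;
    }
    const selection = editor.selection;
    const original = editor.document.getText(selection);
    const corrected = await proofread(original);
    // Replace the selection with the proofread version.
    await editor.edit((editBuilder) => editBuilder.replace(selection, corrected));
  });
  context.subscriptions.push(command);
}
```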