
Releases: nalgeon/vscode-proofread

v0.3.0

06 Jan 08:24

The extension now supports local AI models via Ollama. Useful for those who prefer not to pay for Copilot or OpenAI.

Here's how to set it up:

1. Download and install Ollama for your operating system.
2. Set the environment variables to use less memory:

   ```
   OLLAMA_KEEP_ALIVE = 1h
   OLLAMA_FLASH_ATTENTION = 1
   ```

3. Restart Ollama.
4. Download the Gemma 2 AI model:

   ```
   ollama pull gemma2:2b
   ```

5. Change the Proofread settings:

   ```
   proofread.ai.vendor = ollama
   proofread.ai.model = gemma2:2b
   ```

That's it!
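In VS Code's settings.json, the two Proofread settings from the last step look like this (key names and values taken from the steps above):

```json
{
  "proofread.ai.vendor": "ollama",
  "proofread.ai.model": "gemma2:2b"
}
```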

Gemma 2 is a lightweight model that uses about 1GB of memory and works quickly without a GPU. For good results, send only a few paragraphs at a time for proofreading or translation.
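Since the model works best on a few paragraphs at a time, one way to prepare a longer document is to group its paragraphs into small chunks before sending each chunk for proofreading. A minimal sketch (the function name and chunk size are illustrative, not part of the extension):

```python
def chunk_paragraphs(text: str, max_paragraphs: int = 3) -> list[str]:
    """Split text on blank lines and group the resulting paragraphs
    into chunks of at most max_paragraphs each."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i : i + max_paragraphs])
        for i in range(0, len(paragraphs), max_paragraphs)
    ]
```

Each returned chunk can then be sent to the model as a separate request.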

For larger documents or improved results, try models like mistral or mistral-nemo.

v0.2.0

01 Jan 13:48

The extension now supports both Copilot (default) and OpenAI for proofreading. However, due to Copilot's limitations, translation only works with OpenAI.

To switch between providers, use the proofread.ai.vendor property.
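For example, in settings.json (the value openai is an assumption here, following the same naming as the vendor itself):

```json
{
  "proofread.ai.vendor": "openai"
}
```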

v0.1.1

31 Dec 13:04

Proofread and translate text in VS Code. Just select it in the editor and run Proofread: Proofread Text or Proofread: Translate Text from the command palette. That's it!

(Demo video: Proofread.Demo.mp4, attached to this release.)

Notable features:

  • Configurable language model and prompts (with good defaults).
  • Supports any target language provided by OpenAI (default is English).