Transform your ideas into code using multiple AI models. Built on the foundation of GeminiCoder, enhanced with powerful new features.
NexaForge is a powerful web application that generates functional applications from natural-language descriptions. It is a significant enhancement of GeminiCoder, adding support for multiple AI models and advanced features while preserving the original's simplicity.
Multi-Model Support: Integration with multiple AI providers:
- Google Gemini (1.5 Pro, 1.5 Flash, 2.0 Flash)
- Anthropic Claude
- OpenAI GPT
- DeepSeek
- Ollama
- Grok
Enhanced Development Features:
- Real-time code generation with streaming support
- Interactive chat interface for code refinement
- Runtime error detection and automatic fixing
- Token usage analytics and visualization
- Save and load previous generations
- Customizable AI settings per model
Advanced UI Components:
- Sandpack code editor integration
- Token usage analytics window
- AI settings configuration panel
- Error fixing interface
- Saved generations manager
Core:
- Next.js with App Router
- Tailwind CSS for styling
- Framer Motion for animations
- Radix UI for accessible components
Code Integration:
- Sandpack for live code preview
- Multiple AI Provider APIs:
- Gemini API
- Claude API
- OpenAI API
- DeepSeek API
- Ollama API
- Grok API
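Because several provider APIs are involved, requests are typically routed through a single dispatch layer. The sketch below is a hypothetical TypeScript illustration of that idea — the type names and the `resolveBaseUrl` helper are assumptions for illustration, not NexaForge's actual code — using each provider's public API host:

```typescript
// Hypothetical provider abstraction; names are illustrative, not NexaForge's API.
type Provider = "gemini" | "claude" | "openai" | "deepseek" | "ollama" | "grok";

interface GenerationRequest {
  provider: Provider;
  model: string;
  prompt: string;
}

// Public API hosts for each provider; Ollama runs locally with no API key.
const baseUrls: Record<Provider, string> = {
  gemini: "https://generativelanguage.googleapis.com",
  claude: "https://api.anthropic.com",
  openai: "https://api.openai.com",
  deepseek: "https://api.deepseek.com",
  ollama: "http://localhost:11434",
  grok: "https://api.x.ai",
};

// Pick the endpoint for a request; the real app would also attach
// the matching API key from the .env file described below.
function resolveBaseUrl(req: GenerationRequest): string {
  return baseUrls[req.provider];
}
```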
Follow these step-by-step instructions to set up NexaForge, even if you're a beginner!
Clone the Repository: Open your terminal and run:
git clone https://github.com/ageborn-dev/nexaforge-dev
This command will download the NexaForge project to your computer.
Navigate to the Project Directory:
cd nexaforge-dev
Install Node.js (if not already installed):
- Download and install Node.js from nodejs.org.
- Confirm installation by running:
node -v
npm -v
Install Dependencies: Inside the project folder, run:
npm install
This will download all necessary libraries and dependencies.
Create a .env File:
- In the project root, create a file named .env.
- Add your API keys for supported providers:
GOOGLE_AI_API_KEY=
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
DEEPSEEK_API_KEY=
Note: Ollama does not require an API key but runs on your local server.
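At startup it can be useful to check which of these keys are actually set. A minimal sketch of such a check — the `missingKeys` helper is hypothetical, not part of NexaForge — treating a provider as configured only when its key is a non-empty string:

```typescript
// Keys from the .env example above; a provider counts as configured
// only when its variable is set to a non-empty value.
const requiredKeys = [
  "GOOGLE_AI_API_KEY",
  "ANTHROPIC_API_KEY",
  "OPENAI_API_KEY",
  "DEEPSEEK_API_KEY",
] as const;

// Hypothetical helper: returns the keys that are absent or empty.
function missingKeys(env: Record<string, string | undefined>): string[] {
  return requiredKeys.filter((k) => !env[k]);
}

// Usage idea: console.warn("Missing API keys: " + missingKeys(process.env).join(", "));
```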
Start the Development Server:
npm run dev
This will start the app locally. Open your browser and navigate to http://localhost:3000 to use NexaForge.
Set Up Ollama (Optional):
- Install the Ollama CLI by following the official guide.
- Start the Ollama server:
ollama serve
- NexaForge will automatically detect and use available Ollama models. To add models, run:
ollama pull <model-name>
Example:
ollama pull llama2
You’re all set to explore the power of NexaForge!
The application supports customizable settings for each AI model:
- Temperature
- Max tokens
- Top P
- Stream output
- Frequency penalty
- Presence penalty
Note: Each provider and model in the settings section is equipped with help tooltips to guide users through configuration options and explain parameter functionalities.
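The settings above map naturally onto a per-model configuration object. A minimal sketch, assuming illustrative field names and defaults (neither is NexaForge's actual config type), with clamping to the usual valid ranges:

```typescript
// Hypothetical per-model settings shape mirroring the options listed above.
interface ModelSettings {
  temperature: number;      // sampling randomness, typically 0-2
  maxTokens: number;        // upper bound on generated tokens
  topP: number;             // nucleus-sampling cutoff, 0-1
  stream: boolean;          // stream output token by token
  frequencyPenalty: number; // discourage repeating the same tokens
  presencePenalty: number;  // discourage revisiting the same topics
}

// Assumed defaults for illustration only.
const defaults: ModelSettings = {
  temperature: 0.7,
  maxTokens: 4096,
  topP: 1,
  stream: true,
  frequencyPenalty: 0,
  presencePenalty: 0,
};

// Merge user overrides onto defaults, clamping numeric ranges.
function withOverrides(overrides: Partial<ModelSettings>): ModelSettings {
  const merged = { ...defaults, ...overrides };
  merged.temperature = Math.min(2, Math.max(0, merged.temperature));
  merged.topP = Math.min(1, Math.max(0, merged.topP));
  return merged;
}
```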
- Real-time code streaming
- Syntax highlighting
- Live preview
- Error detection and fixing
- Interactive code refinement
- Context-aware suggestions
- Code modification history
- Token usage tracking
- Model performance metrics
- Cost estimation
- Save generations
- Load previous projects
- Export functionality
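Cost estimation of the kind listed above usually combines token counts with per-provider pricing. A small sketch of that arithmetic — the type names and the rates in the test are placeholders, not real provider pricing or NexaForge's implementation:

```typescript
// Token counts reported for a single generation.
interface Usage {
  promptTokens: number;
  completionTokens: number;
}

// Placeholder pricing model: USD per one million tokens, priced
// separately for prompt (input) and completion (output) tokens.
interface Pricing {
  promptPerMillion: number;
  completionPerMillion: number;
}

// Estimated cost in USD for one generation.
function estimateCost(usage: Usage, pricing: Pricing): number {
  return (
    (usage.promptTokens / 1_000_000) * pricing.promptPerMillion +
    (usage.completionTokens / 1_000_000) * pricing.completionPerMillion
  );
}
```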
With the latest integration of Ollama, you can now leverage powerful LLaMA-based models optimized for conversational and generative tasks. NexaForge automatically detects available Ollama models and integrates them seamlessly into your workflow.
- Access to fine-tuned LLaMA models for code generation and conversational tasks.
- Enhanced multi-model capability, allowing dynamic model switching.
- Efficient handling of large-scale tasks with state-of-the-art performance.
Install Ollama CLI:
- Follow the official Ollama installation guide.
Run the Ollama Service:
ollama serve
This command will start the Ollama server on your local machine, making its models accessible to NexaForge.
Add Models: Use the Ollama CLI to download and manage models:
ollama pull <model-name>
Example:
ollama pull llama2
Integration with NexaForge:
- NexaForge automatically detects and loads models from the running Ollama server.
- Simply ensure the Ollama server is running, and NexaForge will utilize the available models dynamically.
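Detection of this kind typically relies on the Ollama server's local REST API: a GET request to http://localhost:11434/api/tags returns JSON of the form { "models": [{ "name": "..." }, ...] }. The sketch below shows how a client might discover models that way; the function names are illustrative, not NexaForge's actual code:

```typescript
// Shape of the relevant part of Ollama's /api/tags response.
interface OllamaTagsResponse {
  models: { name: string }[];
}

// Pure helper so the parsing logic is testable without a running server.
function extractModelNames(body: string): string[] {
  const parsed = JSON.parse(body) as OllamaTagsResponse;
  return parsed.models.map((m) => m.name);
}

// Query the local Ollama server; an unreachable server simply means
// no Ollama models are offered.
async function detectOllamaModels(): Promise<string[]> {
  try {
    const res = await fetch("http://localhost:11434/api/tags");
    return extractModelNames(await res.text());
  } catch {
    return [];
  }
}
```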
This project is based on GeminiCoder by osanseviero, which in turn was inspired by llamacoder. We've built upon their excellent foundation to create an enhanced multi-model experience.
This project is open-source and available under the MIT License.
Note: This is a community project and is not officially affiliated with any of the AI model providers.