CodexReel is an AI video content engine that uses multimodal LLMs to turn ordinary articles into professionally produced, interactive dialogue videos.
- 🤖 Smart Content Understanding - Automatically extracts an article's key points and identifies its main theme.
- 🎭 Multi-role Dialogue - AI-driven dialogue generation makes content more vivid and engaging.
- 🔍 Smart Material Matching - Semantically matches video materials to the generated script.
- 🗣️ AI Voice Synthesis - Natural, fluent multi-role dubbing.
- 🎥 Professional Video Production - Automatic editing and compositing produce polished video content.
Note: The example videos below have been edited and compressed, so they show only part of the result. Click a title to view the original article, from which the complete video can be generated.
- 📰 News Information Videos - Quickly convert hot news into short videos.
- 📚 Article Content Visualization - Make article content more expressive.
- 🎤 Podcast Content Creation - Automatically generate conversational podcasts.
- 📱 Short Video Content Production - Mass-produce high-quality short videos.
- 🎮 Game Information to Video - Video presentation of game strategies and news.
- Backend Framework: FastAPI
- Frontend Interface: Streamlit
- AI Service: OpenAI GPT API
- Voice Synthesis: Tongyi TTS
- Video Processing: FFmpeg
- Data Storage: SQLite
- Python 3.10+
- FFmpeg
- ImageMagick
- Clone the repository:

      git clone https://github.com/chenwr727/CodexReel.git
      cd CodexReel
- Create and activate a conda environment:

      conda create -n url2video python=3.10
      conda activate url2video
- Install dependencies:

      pip install -r requirements.txt
      conda install -c conda-forge ffmpeg
- Copy the configuration template:

      copy config-template.toml config.toml
- Edit `config.toml` and set the following required parameters:
- OpenAI API key
- Tongyi TTS service key
- Pexels API key
- Other optional configurations
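The exact layout of `config.toml` is defined by `config-template.toml` in the repository; the sketch below only illustrates the kind of keys involved (all section and key names here are made up, not the template's actual names):

```toml
# Illustrative sketch only -- follow config-template.toml for the real key names.
[llm]
api_key = "sk-..."         # OpenAI API key

[tts]
api_key = "..."            # Tongyi (DashScope) TTS service key

[material]
pexels_api_key = "..."     # Pexels API key for video material search
```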
    CodexReel/
    ├── api/                    # API interface module
    │   ├── crud.py             # Database operations
    │   ├── database.py         # Database configuration
    │   ├── models.py           # Data models
    │   ├── router.py           # Route definitions
    │   └── service.py          # Business logic
    ├── schemas/                # Data model definitions
    │   ├── config.py           # Configuration model
    │   ├── task.py             # Task model
    │   └── video.py            # Video model
    ├── services/               # External service integration
    │   ├── material/           # Material service
    │   │   ├── base.py         # Material service interface
    │   │   ├── pexels.py       # Pexels video material service
    │   │   └── pixabay.py      # Pixabay video material service
    │   ├── tts/                # Voice synthesis service
    │   │   ├── base.py         # Voice synthesis service interface
    │   │   ├── dashscope.py    # DashScope voice synthesis service
    │   │   ├── edge.py         # Edge voice synthesis service
    │   │   └── kokoro.py       # Kokoro voice synthesis service
    │   ├── llm.py              # LLM service
    │   └── video.py            # Video processing service
    ├── utils/                  # Utility modules
    │   ├── config.py           # Configuration management
    │   ├── log.py              # Logging tools
    │   ├── subtitle.py         # Subtitle handling
    │   ├── text.py             # Text processing
    │   ├── url.py              # URL handling
    │   └── video.py            # Video tools
    └── web.py                  # Web interface entry point
- Start the service:

      python app.py

- Launch the web interface:

      streamlit run web.py --server.port 8000
Process a single URL:

    python main.py https://example.com/article
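For batches of articles, one simple approach is to drive `main.py` in a loop. The sketch below only assumes the single-URL interface shown above; the `build_commands` helper and the URL list are hypothetical, not part of the repository:

```python
from shlex import quote

def build_commands(urls):
    """Build one `python main.py <url>` command per article URL.

    Quoting guards against shell-special characters in query strings.
    """
    return [f"python main.py {quote(url)}" for url in urls]

# Example: turn two hypothetical article URLs into ready-to-run commands.
for cmd in build_commands([
    "https://example.com/article",
    "https://example.com/another?id=42",
]):
    print(cmd)
```

Each printed line can be run directly in a shell (or the loop can be replaced with `subprocess.run` to execute the conversions in sequence).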
Contributions are welcome! Please follow these steps:
- Fork this repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Submit a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- NotebookLlama - Inspiration for this project
- All contributors and users