T3-Neural is an advanced Tic Tac Toe project featuring Reinforcement Learning (RL) bots trained with Q-learning to play as X and O. The project integrates a modern frontend built with Angular 17 and a backend powered by FastAPI for serving the trained RL models. The models are implemented in PyTorch and trained on randomly generated games to master Tic Tac Toe strategies.
## Features

- RL-Powered Bots: Intelligent bots for X and O trained using Q-learning.
- Frontend: An interactive and responsive UI built with Angular 17.
- Backend: A FastAPI-based backend for serving the trained PyTorch models.
- Training Pipeline: Models trained on randomly generated games for broad coverage of Tic Tac Toe positions.
- Cross-Platform: Compatible with modern web browsers for a seamless experience.
## Tech Stack

- Frontend: Angular 17
- Backend: FastAPI
- Model: PyTorch
- Reinforcement Learning: Q-learning algorithm
## How It Works

- The frontend lets users play Tic Tac Toe against the RL bots.
- Bots are served via the backend, which uses PyTorch models trained with Q-learning.
- Game states are processed in real time, and the bots make moves based on their learned policies.
- Users can interact with the game and analyze the bots' strategies; a minimal sketch of such a serving endpoint follows this list.
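As a rough illustration of how the backend could serve moves, here is a minimal sketch. The `/move` endpoint name, `QNet` architecture, board encoding, and checkpoint path are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch of a FastAPI endpoint serving a trained Q-network.
# Endpoint name, request schema, and checkpoint path are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import torch
import torch.nn as nn

app = FastAPI()

class QNet(nn.Module):
    """Tiny MLP mapping a 9-cell board to Q-values for the 9 moves."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(9, 64), nn.ReLU(),
            nn.Linear(64, 9),
        )

    def forward(self, x):
        return self.net(x)

model = QNet()
# model.load_state_dict(torch.load("qnet_x.pt"))  # hypothetical checkpoint
model.eval()

class BoardState(BaseModel):
    # 9 cells: 1 = X, -1 = O, 0 = empty
    cells: list[int]

@app.post("/move")
def get_move(state: BoardState) -> dict:
    board = torch.tensor(state.cells, dtype=torch.float32)
    with torch.no_grad():
        q_values = model(board)
    # Mask occupied cells so the bot only considers legal moves.
    q_values[board != 0] = float("-inf")
    return {"move": int(torch.argmax(q_values))}
```

In this arrangement the Angular client would POST the current board to the endpoint and apply the returned move.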
## Training

- The RL bots were trained using the Q-learning algorithm.
- Training used randomly generated games to explore the space of Tic Tac Toe positions broadly.
- PyTorch was used to implement and train the models; a simplified sketch of such a training loop follows this list.
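To make the approach concrete, below is a self-contained sketch of Q-learning for the X player on randomly generated games, using a small PyTorch network. The hyperparameters, network size, reward scheme, and fully random move selection are illustrative assumptions; the project's actual training code may differ.

```python
# Simplified Q-learning sketch for the X player on random games.
# Hyperparameters, network shape, and rewards are illustrative assumptions.
import random
import torch
import torch.nn as nn

GAMMA, LR, EPISODES = 0.9, 1e-3, 5000

def winner(b):
    """Return 1 if X has won, -1 if O has won, 0 otherwise."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

qnet = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 9))
opt = torch.optim.Adam(qnet.parameters(), lr=LR)

for _ in range(EPISODES):
    board = [0] * 9
    while True:
        state = board[:]
        # X moves at random (pure exploration keeps the sketch short).
        x_move = random.choice([i for i in range(9) if board[i] == 0])
        board[x_move] = 1
        reward, done = winner(board), False
        if reward != 0 or 0 not in board:
            done = True
        else:
            # Random O reply; X's "next state" is the board after it.
            o_move = random.choice([i for i in range(9) if board[i] == 0])
            board[o_move] = -1
            if winner(board) != 0 or 0 not in board:
                reward, done = winner(board), True
        s = torch.tensor(state, dtype=torch.float32)
        s2 = torch.tensor(board, dtype=torch.float32)
        q = qnet(s)[x_move]
        with torch.no_grad():
            # Q-learning target: r + gamma * max over legal next moves.
            target = float(reward)
            if not done:
                nq = qnet(s2)
                nq[s2 != 0] = float("-inf")  # mask occupied cells
                target += GAMMA * nq.max().item()
        loss = (q - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
        if done:
            break
```

Since the project fields bots for both X and O, a second network could be trained symmetrically from O's perspective; an epsilon-greedy schedule rather than fully random play would also be typical, but the core update, Q(s, a) ← r + γ · max Q(s′, ·), is the same either way.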
## Future Enhancements

- Dynamic Difficulty Adjustment: Allow users to set bot difficulty levels.
- Multiplayer Mode: Enable online multiplayer functionality.
## Contributing

Contributions are welcome! Please follow these steps:
- Fork the repository.
- Create a new branch for your feature/fix.
- Commit your changes and push to your branch.
- Create a pull request.
## License

This project is licensed under the MIT License. See the LICENSE file for more details.