T3-Neural

T3-Neural is an advanced Tic Tac Toe project featuring Reinforcement Learning (RL) bots trained with Q-learning to play as X and O. It pairs a modern Angular 17 frontend with a FastAPI backend that serves the trained RL models. The models are implemented in PyTorch and trained on randomly generated games to learn Tic Tac Toe strategy.

Features

  • RL-Powered Bots: Intelligent bots for X and O trained using Q-learning.
  • Frontend: An interactive and responsive UI built with Angular 17.
  • Backend: A FastAPI-based backend for serving the trained PyTorch models.
  • Training Pipeline: Models trained on randomly generated games for broad coverage of Tic Tac Toe positions and strategies.
  • Cross-Platform: Compatible with modern web browsers for a seamless experience.

Tech Stack

  • Frontend: Angular 17
  • Backend: FastAPI
  • Model: PyTorch
  • Reinforcement Learning: Q-learning algorithm

How It Works

  1. The frontend allows users to play Tic Tac Toe against the RL bots.
  2. The bots are served by the backend, which runs PyTorch models trained with Q-learning (a minimal serving sketch follows this list).
  3. Game states are processed in real time, and the bots choose moves based on their learned policies.
  4. Users can interact with the game and analyze the bots' strategies.
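
The actual API lives in the repository; purely as an illustration of how a trained Q-network could be served with FastAPI, a minimal sketch might look like the following. The endpoint path, model file name, board encoding, and network shape are assumptions for the example, not the project's real code.

```python
# Illustrative sketch only: the endpoint path, model file name, and network shape
# are assumptions, not the repository's actual API.
import torch
import torch.nn as nn
from fastapi import FastAPI
from pydantic import BaseModel

class BoardState(BaseModel):
    # Board as 9 cells, row-major: 1 = X, -1 = O, 0 = empty.
    board: list[int]

app = FastAPI()

# Small Q-network mapping a board encoding to one Q-value per cell (assumed shape).
q_net = nn.Sequential(nn.Linear(9, 128), nn.ReLU(), nn.Linear(128, 9))
q_net.load_state_dict(torch.load("q_x.pt", map_location="cpu"))  # assumed filename
q_net.eval()

@app.post("/move")
def choose_move(state: BoardState):
    with torch.no_grad():
        q_values = q_net(torch.tensor(state.board, dtype=torch.float32))
    # Mask occupied cells so the bot only considers legal moves.
    for i, cell in enumerate(state.board):
        if cell != 0:
            q_values[i] = float("-inf")
    return {"move": int(torch.argmax(q_values).item())}
```

In this sketch, a client would POST a JSON body such as `{"board": [1, 0, -1, 0, 0, 0, 0, 0, 0]}` and receive the index of the cell the bot chooses.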

Training Details

  • The RL bots were trained using the Q-learning algorithm.
  • Training involved randomly generated games to explore all possible Tic Tac Toe scenarios.
  • PyTorch was used to implement and train the models (an illustrative training sketch follows this list).
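
The repository contains the actual training pipeline; purely to illustrate the one-step Q-learning update used, with target r + γ·max over next-state Q-values, a condensed PyTorch sketch for the X player on randomly played games could look like this. The network size, reward scheme (+1 win, -1 loss, 0 draw), hyperparameters, and the treatment of O's random replies as part of the environment are all assumptions made for the example.

```python
# Illustrative sketch of Q-learning on randomly generated games (all details assumed).
import random
import torch
import torch.nn as nn

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 1 if X has won, -1 if O has won, 0 otherwise."""
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def x_transitions():
    """Play one random game and yield (state, action, reward, next_state, done)
    from X's perspective, treating O's random replies as part of the environment."""
    board = [0] * 9
    while True:
        state = board[:]
        action = random.choice([i for i, c in enumerate(board) if c == 0])
        board[action] = 1                      # X moves at random
        w = winner(board)
        if w == 1 or all(c != 0 for c in board):
            yield state, action, (1.0 if w == 1 else 0.0), board[:], True
            return
        o_move = random.choice([i for i, c in enumerate(board) if c == 0])
        board[o_move] = -1                     # O replies at random
        w = winner(board)
        done = w == -1 or all(c != 0 for c in board)
        yield state, action, (-1.0 if w == -1 else 0.0), board[:], done
        if done:
            return

q_net = nn.Sequential(nn.Linear(9, 128), nn.ReLU(), nn.Linear(128, 9))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.9

for episode in range(10_000):
    for state, action, reward, next_state, done in x_transitions():
        s = torch.tensor(state, dtype=torch.float32)
        q_pred = q_net(s)[action]
        with torch.no_grad():
            # One-step Q-learning target: r + gamma * max_a' Q(s', a') if non-terminal.
            target = torch.tensor(reward)
            if not done:
                next_q = q_net(torch.tensor(next_state, dtype=torch.float32))
                target = target + gamma * next_q.max()
        loss = (q_pred - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The O bot would be trained symmetrically from O's perspective; a target network and replay buffer, common refinements of this update, are omitted here to keep the sketch short.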

Future Enhancements

  • Dynamic Difficulty Adjustment: Allow users to set bot difficulty levels.
  • Multiplayer Mode: Enable online multiplayer functionality.

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature/fix.
  3. Commit your changes and push to your branch.
  4. Create a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for more details.