This repository contains the code and documentation for the integrative project, which is divided into three main components:
Part 1 (Multi-Agent Systems): In this part, you manage a warehouse using five robots equipped with advanced sensors. The robots organize the warehouse by navigating, picking up, and stacking objects while avoiding collisions. The focus is on simulating the robots' behavior, optimizing their performance, and proposing strategies to improve efficiency.
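As a rough illustration of how such a simulation can be structured, the sketch below implements a tiny grid world in plain Python. All names here (`Warehouse`, `WarehouseRobot`, the fixed drop zone) are illustrative assumptions, not the project's actual classes or behavior.

```python
# Minimal sketch of a grid-based multi-agent loop; class names, the drop zone,
# and the movement policy are illustrative, not the project's actual design.
import random

DROP_ZONE = (0, 0)   # hypothetical fixed cell where robots stack objects

class WarehouseRobot:
    def __init__(self, robot_id, pos):
        self.id = robot_id
        self.pos = pos            # (row, col) on the grid
        self.carrying = False     # whether the robot currently holds an object

    def choose_move(self, warehouse):
        r, c = self.pos
        neighbors = [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))]
        # Only consider in-bounds cells not occupied by another robot (collision avoidance).
        free = [p for p in neighbors
                if warehouse.in_bounds(p) and p not in warehouse.robot_positions()]
        if not free:
            return self.pos       # stay put rather than collide
        if self.carrying:
            # Greedy move toward the drop zone while carrying an object.
            return min(free, key=lambda p: abs(p[0] - DROP_ZONE[0]) + abs(p[1] - DROP_ZONE[1]))
        return random.choice(free)  # otherwise explore randomly

class Warehouse:
    def __init__(self, rows, cols, num_objects, num_robots=5):
        self.rows, self.cols = rows, cols
        cells = [(r, c) for r in range(rows) for c in range(cols) if (r, c) != DROP_ZONE]
        random.shuffle(cells)
        self.objects = set(cells[:num_objects])   # K objects scattered at random cells
        self.stack_height = 0                     # objects stacked at the drop zone
        self.robots = [WarehouseRobot(i, cells[num_objects + i]) for i in range(num_robots)]

    def in_bounds(self, pos):
        return 0 <= pos[0] < self.rows and 0 <= pos[1] < self.cols

    def robot_positions(self):
        return {robot.pos for robot in self.robots}

    def step(self):
        for robot in self.robots:
            robot.pos = robot.choose_move(self)
            if not robot.carrying and robot.pos in self.objects:
                self.objects.discard(robot.pos)   # pick up the object on this cell
                robot.carrying = True
            elif robot.carrying and robot.pos == DROP_ZONE:
                self.stack_height += 1            # stack it at the drop zone
                robot.carrying = False

# Bounded run, mirroring the step-limited execution described in the notes below.
sim = Warehouse(rows=10, cols=10, num_objects=15)
for step in range(500):
    sim.step()
    if not sim.objects and not any(r.carrying for r in sim.robots):
        print(f"warehouse organized in {step + 1} steps")
        break
```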
Part 2 (Computational Graphics): This section involves creating a 3D model of the warehouse and robots, covering texture mapping, lighting, and basic collision detection. The goal is to visually represent the robots as they perform their tasks in the simulated environment, adding a layer of realism to the simulation.
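In the Unity package itself, collision detection is handled by built-in colliders; the short Python sketch below only illustrates the axis-aligned bounding-box (AABB) overlap test that such basic checks are built on.

```python
# Conceptual sketch of basic collision detection via axis-aligned bounding boxes (AABB).
# This is not the project's Unity code; it only demonstrates the overlap test itself.
from dataclasses import dataclass

@dataclass
class AABB:
    min_x: float
    min_y: float
    min_z: float
    max_x: float
    max_y: float
    max_z: float

def intersects(a: AABB, b: AABB) -> bool:
    # Two boxes collide only if their extents overlap on every axis.
    return (a.min_x <= b.max_x and a.max_x >= b.min_x and
            a.min_y <= b.max_y and a.max_y >= b.min_y and
            a.min_z <= b.max_z and a.max_z >= b.min_z)

robot = AABB(0, 0, 0, 1, 1, 1)
shelf = AABB(0.5, 0, 0, 2, 2, 2)
print(intersects(robot, shelf))   # True: the robot's box overlaps the shelf's box
```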
Part 3 (Computer Vision): Here, the robots use cameras to identify and classify objects within the warehouse. Computer vision models such as YOLO or SAM are implemented so that the robots can recognize objects and make informed decisions based on their perceptions.
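As a minimal example of how detection on a single camera frame can look, the snippet below assumes the `ultralytics` YOLO package and pretrained weights; the project's actual model, weights, and integration with the robots may differ.

```python
# Hedged sketch of object recognition on one captured frame, assuming the
# ultralytics YOLO package is installed (pip install ultralytics).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # pretrained weights, downloaded on first use
results = model("camera_frame.jpg")   # run detection on a single camera frame (path is illustrative)

for box in results[0].boxes:
    label = model.names[int(box.cls)]        # predicted class name
    confidence = float(box.conf)             # detection confidence in [0, 1]
    x1, y1, x2, y2 = box.xyxy[0].tolist()    # bounding-box corners in pixels
    print(f"{label} ({confidence:.2f}) at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f})")
```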
- Part 1: Multi-Agent Systems
  - Documentation: PDF with agent and environment properties, success metrics, class diagrams, and ontology diagrams.
  - Code: Implemented simulation code for the multi-agent challenge.
- Part 2: Computational Graphics
  - File: `.unitypackage` containing all necessary assets for 3D rendering and simulation.
- Part 3: Computer Vision
  - Vision models and integration code.
- Clone the repository.
- Follow the instructions in the Part 1 and Part 2 directories to set up and run the simulations.
- Use Unity to open the `.unitypackage` file for the 3D simulation.
- The project simulates a warehouse laid out as an MxN grid containing K randomly placed objects and at least 5 robots.
- Maximum execution time is limited either by a number of simulation steps or by a wall-clock limit in seconds.
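The snippet below shows one way these parameters could be bundled into a single configuration object for a run; the field names and defaults are hypothetical, not the project's actual configuration interface.

```python
# Hypothetical parameter bundle for a simulation run; names and defaults are
# illustrative only.
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    rows: int = 10              # M: grid rows
    cols: int = 10              # N: grid columns
    num_objects: int = 15       # K: objects placed at random cells
    num_robots: int = 5         # at least 5 robots
    max_steps: int = 500        # stop after this many simulation steps...
    max_seconds: float = 60.0   # ...or after this much wall-clock time

config = SimulationConfig(rows=12, cols=20, num_objects=30)
print(config)
```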