In this repository, the ByteTrack tracker is combined with the SAHI (Slicing Aided Hyper Inference) algorithm.
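In broad terms, SAHI slices each frame into overlapping tiles, runs the detector on every tile, merges the tiled detections, and the merged detections are then handed to ByteTrack for association. Below is a minimal sketch of that flow using the public sahi and ByteTrack APIs; the model type, weight path, thresholds, and slice sizes are illustrative assumptions and not this repo's exact code path (the repo's own pipeline is selected via tools/config.py, see "Inference with Docker" below).

```python
# Minimal sketch: sliced detection (SAHI) feeding ByteTrack association.
# NOTE: model_type, model_path, thresholds and slice sizes below are assumptions
# for illustration only.
from argparse import Namespace

import numpy as np
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction
from yolox.tracker.byte_tracker import BYTETracker  # tracker shipped with ByteTrack

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",                # assumption: any SAHI-compatible detector wrapper
    model_path="yolo_models/model.pt",  # assumption: location of the downloaded weights
    confidence_threshold=0.25,
    device="cuda:0",
)

tracker = BYTETracker(
    Namespace(track_thresh=0.5, track_buffer=30, match_thresh=0.8, mot20=False),
    frame_rate=30,
)

def track_frame(frame):
    """Detect on overlapping slices of one frame, then update ByteTrack."""
    result = get_sliced_prediction(
        frame, detection_model,
        slice_height=640, slice_width=640,
        overlap_height_ratio=0.2, overlap_width_ratio=0.2,
    )
    # Convert SAHI predictions to the (x1, y1, x2, y2, score) array ByteTrack expects.
    dets = np.array(
        [[p.bbox.minx, p.bbox.miny, p.bbox.maxx, p.bbox.maxy, p.score.value]
         for p in result.object_prediction_list],
        dtype=np.float32,
    ).reshape(-1, 5)
    h, w = frame.shape[:2]
    # Passing the frame size as both img_info and img_size keeps ByteTrack's
    # internal rescaling a no-op.
    return tracker.update(dets, [h, w], (h, w))
```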
Download Models
- Download Yolov7-E6E and put it under the yolo_models folder
- Download bytetrack_x_mot20 and put it under the pretrained folder
Nvidia-Docker2 Installation
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
Docker Commands
Note: docker-compose version >= 1.28 is required.
- Install
sudo docker-compose up --build
- Run
sudo docker-compose up
- Stop
sudo docker-compose down
Inference with Docker
- Set the input image directory: open byte_track_sahi/tools/config.py and set
# at line number 5
path = 'path/to/image/directory'
- ByteTrack: open byte_track_sahi/tools/config.py and set
# at line number 3
model_to_run = 'norm_yoloX'
- ByteTrack with SAHI (Yolov7): open byte_track_sahi/tools/config.py and set
# at line number 3
model_to_run = 'sahi_yolo7'
- ByteTrack with SAHI (YoloX): open byte_track_sahi/tools/config.py and set
# at line number 3
model_to_run = 'sahi_yoloX'
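Putting the steps above together, config.py ends up looking roughly like the sketch below; only the two referenced settings (model_to_run on line 3 and path on line 5) come from this README, and anything else in the actual file is not shown here.

```python
# byte_track_sahi/tools/config.py (sketch; only model_to_run and path are taken
# from the steps above)

# line 3: selects the pipeline: 'norm_yoloX', 'sahi_yolo7' or 'sahi_yoloX'
model_to_run = 'sahi_yolo7'

# line 5: directory containing the input images to run tracking on
path = 'path/to/image/directory'
```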