yolov3-hart

Code implementation of the work "Real-time adaptive object detection and tracking for autonomous vehicles"

Adaptive Stage Switch Model (YOLOv3 + HART) [2D/3D] | Keras/Tensorflow v2.0
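As a rough illustration of the adaptive stage switch idea, the sketch below alternates between a detection stage (YOLOv3) and a tracking stage (HART), falling back to the detector whenever tracker confidence drops. The function names, state layout, and threshold are hypothetical placeholders for illustration only, not the paper's actual switching criterion:

```python
CONF_THRESHOLD = 0.5  # assumed switching threshold (illustrative)

def adaptive_step(frame, state, detect_fn, track_fn):
    """Process one frame, returning (boxes, new_state).

    detect_fn(frame) -> boxes               # expensive detection stage
    track_fn(frame, state) -> (boxes, conf) # lightweight tracking stage
    """
    if state is None:
        # No track yet: run the detector to initialize.
        boxes = detect_fn(frame)
        return boxes, {"boxes": boxes, "conf": 1.0}
    # Track the existing targets on this frame.
    boxes, conf = track_fn(frame, state)
    if conf < CONF_THRESHOLD:
        # Tracking degraded: switch back to the detection stage.
        boxes = detect_fn(frame)
        return boxes, {"boxes": boxes, "conf": 1.0}
    return boxes, {"boxes": boxes, "conf": conf}
```

The point of such a switch is that tracking is much cheaper per frame than detection, so the detector only runs when needed, which is what makes the system real-time capable.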

If you use this code, please consult our paper (listed in the References section below) for additional information on our model and the proposed adaptive system.

Quick start

  1. Clone this repository

    $ git clone https://github.com/Hffmann/yolov3-hart.git
    
  2. Install the required dependencies before running the code:

    $ cd yolov3-hart
    $ pip install -r ./data/requirements.txt
    
  3. The yolo.h5 file can be generated using the YAD2K repository here: https://github.com/allanzelener/YAD2K

    On Windows, convert the Darknet config and weights as follows:

    $ python yad2k.py yolo.cfg yolo.weights model_data/yolo.h5 
    
  4. Download the HART weights from http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/bvlc_alexnet.npy and put the file in the tracker/checkpoints folder.

  5. [3D-Deepbox] Download the weights file from https://drive.google.com/file/d/1yAFCmdSEz2nbYgU5LJNXExtsS0Gvt66U/view?usp=sharing or follow the training process from https://github.com/smallcorgi/3D-Deepbox and put it in the model_data folder.

  6. After downloading the weights, or training them on your own dataset, run the testing script:

    $ python main.py
    
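Before running the script, it can save debugging time to sanity-check that the HART weights from step 4 load correctly. A minimal sketch, assuming the bvlc_alexnet.npy file follows Guerzhoy's standard layout (a pickled dict mapping layer names to [weights, biases] arrays); the helper name and path are ours, not part of the repository:

```python
import numpy as np

def load_alexnet_weights(path):
    """Load a bvlc_alexnet.npy weight file.

    The file is a pickled Python dict saved with NumPy, so under
    Python 3 it must be loaded with allow_pickle=True and
    latin1 encoding.
    """
    return np.load(path, allow_pickle=True, encoding="latin1").item()

# Example (assumed location from step 4):
# weights = load_alexnet_weights("tracker/checkpoints/bvlc_alexnet.npy")
# print(sorted(weights.keys()))
```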

References

- Real-time adaptive object detection and tracking for autonomous vehicles

- tensorflow-yolov3

- hart

- 3D-Deepbox
