Augmented reality application that creates artificial lights on the fingertips of hands. The effect resembles a gloving performance, in which a person wears gloves with LED lights on the fingertips. Hand keypoint detection is the core component of this project.
NOTE: Currently, the code only supports CPU execution, which takes about 10 seconds to process each frame. In a future release, I plan to add GPU support, which should allow the application to run in real time.
Original Video: Void Dreamers - Gloving Vietnam
Python 3.6.3
OpenCV-python 3.4.1.15
NumPy 1.14.2
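If needed, the Python packages can be installed or upgraded with pip, pinned to the versions listed above:
pip install opencv-python==3.4.1.15 numpy==1.14.2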
If you prefer to gain some background knowledge on hand keypoint detection before diving into the code of this project, I would highly recommend reading Vikas Gupta's tutorial.
Since we are not using a GPU, the setup for this project is minimal.
- Install or update Python, OpenCV, and NumPy.
- Clone or download this project.
- Download OpenPose's hand keypoint detection model (pose_iter_102000.caffemodel) and place it in the hand_lights/caffe_model folder. It can be downloaded from this repo's releases or by running OpenPose's getModels script.
The following examples demonstrate several ways to run the Python program. Set the verbose flag to True if you would like to see the progress.
python hand_lights.py \
--input_video_path media/videos/vertical.mp4 \
--output_video_path media/videos/vertical_output.mp4
python hand_lights.py \
--input_video_path media/videos/vertical.mp4 \
--output_video_path media/videos/vertical_output.mp4 \
--background_alpha 0.3
python hand_lights.py \
--input_video_path media/videos/vertical.mp4 \
--output_video_path media/videos/vertical_output.mp4 \
--light_color red
python hand_lights.py \
--input_video_path media/videos/vertical.mp4 \
--output_video_path media/videos/vertical_output.mp4 \
--light_color all
python hand_lights.py \
--input_video_path media/videos/vertical.mp4 \
--output_video_path media/videos/vertical_output.mp4 \
--light_radius_frame_height_ratio 0.05
python hand_lights.py \
--input_video_path media/videos/draw_initials.mp4 \
--output_video_path media/videos/draw_initials_output.mp4 \
--fingers index \
--max_hands_detected 1 \
--light_duration_n_secs 12 \
--light_same_alpha True \
--mirror True
The hand keypoint detection model from OpenPose is used to detect the fingertips of each hand in each frame.
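Under the hood, this amounts to running the Caffe model with OpenCV's DNN module and reading the fingertip confidence maps. The sketch below is illustrative only; the prototxt filename, input size, and threshold are assumptions, not necessarily what hand_lights.py uses.
import cv2

FINGERTIP_IDS = [4, 8, 12, 16, 20]  # thumb, index, middle, ring, pinky tips in the OpenPose hand model

# The caffemodel name comes from the setup step above; the prototxt name is an assumption.
net = cv2.dnn.readNetFromCaffe("caffe_model/pose_deploy.prototxt",
                               "caffe_model/pose_iter_102000.caffemodel")

def detect_fingertips(frame, threshold=0.1):
    """Return the (x, y) location of each fingertip found in the frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368), (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    heatmaps = net.forward()  # shape (1, 22, out_h, out_w): one confidence map per hand keypoint

    tips = []
    for idx in FINGERTIP_IDS:
        prob_map = cv2.resize(heatmaps[0, idx, :, :], (w, h))
        _, prob, _, point = cv2.minMaxLoc(prob_map)
        if prob > threshold:
            tips.append(point)  # peak of the confidence map
    return tips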
The locations of the fingertips detected in each frame are added to a circular queue, which keeps track of the detections from the past n frames. On every new frame, the entire queue is used to draw the lights.
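One way to implement such a fixed-length queue is with collections.deque. The sketch below is illustrative (the values and helper names are assumptions); it uses the detect_fingertips sketch above and a draw_lights helper like the one sketched further below.
from collections import deque

# Illustrative values: the real script derives the queue length from the input
# video's FPS and the --light_duration_n_secs argument.
fps = 30
light_duration_n_secs = 3
queue_len = int(fps * light_duration_n_secs)

# Once full, appending a new frame's detections automatically drops the oldest ones.
fingertip_history = deque(maxlen=queue_len)

def process_frame(frame):
    fingertip_history.append(detect_fingertips(frame))
    return draw_lights(frame, fingertip_history)  # draws every detection still in the queue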
The most recent fingertips have fully opaque lights; as the lights age, they become more transparent.
A Gaussian blur is used to smooth the overlapping edges between light circles.
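A rough sketch of how the fade and blur could be implemented with OpenCV (the radius, color, and kernel size are placeholder values, not the project's defaults):
import cv2
import numpy as np

def draw_lights(frame, fingertip_history, radius=15, color=(0, 255, 0), blur_ksize=21):
    """Overlay fading light circles for every queued detection onto the frame."""
    if not fingertip_history:
        return frame
    lights = np.zeros_like(frame)
    n = len(fingertip_history)
    for i, tips in enumerate(fingertip_history):   # index 0 = oldest queued frame
        alpha = (i + 1) / n                        # newest detections get alpha 1.0, older ones fade out
        layer = np.zeros_like(frame)
        for (x, y) in tips:
            cv2.circle(layer, (int(x), int(y)), radius, color, thickness=-1)
        lights = cv2.addWeighted(lights, 1.0, layer, alpha, 0)
    # The Gaussian blur softens the hard edges where neighboring circles overlap.
    lights = cv2.GaussianBlur(lights, (blur_ksize, blur_ksize), 0)
    return cv2.add(frame, lights)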
To recreate the last example, use the following command.
python hand_lights.py \
--input_video_path media/videos/vertical.mp4 \
--output_video_path media/videos/vertical_output.mp4 \
--light_duration_n_secs 0.5 \
--light_radius_frame_height_ratio 0.03