Feature/triton jetson support (#23)
* Testing Jetson support in gst-triton-parallel-tracking-v2.py

* Added queues

* triton parallel tracking v1 now uses nvmultiurisrcbin, gst-tracking-v2 uses nvurisrcbin

* Fix for writing out the file

* Added examples using gst-launch-1.0

* Run pre-commit

* Examples & improvements
JarnoRalli authored Feb 17, 2025
1 parent 5b48c79 commit 282c3e0
Showing 19 changed files with 1,405 additions and 696 deletions.
6 changes: 6 additions & 0 deletions .gitignore
@@ -134,6 +134,9 @@ dmypy.json
# mp4 video files
*.mp4

# mkv video files
*.mkv

# Graphviz dot files
*.dot

@@ -149,3 +152,6 @@

# VS code
.vscode/

# TensorRT Engine files
*.engine
14 changes: 6 additions & 8 deletions deepstream-examples/README.md
@@ -15,14 +15,14 @@ List of examples:

* [deepstream-tracking](deepstream-tracking/README.md)
* 4-class object detector with tracking
* Tested with deepstream 6.1
* Tested with deepstream 6.3
* [deepstream-tracking-parallel](deepstream-tracking-parallel/README.md)
* 4-class object detector with tracking
* Splits the input stream into two and runs two pipelines on the split streams
* Tested with deepstream 6.1
* [deepstream-triton-tracking](deepstream-triton-tracking/README.md)
* 4-class object detector with tracking, uses a local version of the Triton Inference Server for inference
* Tested with deepstream 6.1
* Tested with deepstream 6.3
* [deepstream-retinaface](deepstream-retinaface/README.md)
* RetinaFace bbox- and landmark detector
* Uses a custom parser called [NvDsInferParseCustomRetinaface](src/retinaface_parser/nvdsparse_retinaface.cpp)
@@ -172,7 +172,7 @@ After this you can create the docker image used in the examples.

```bash
cd gstreamer-examples/docker
docker build -t nvidia-deepstream-samples -f ./Dockerfile-deepstream .
docker build -t deepstream-6.3 -f ./Dockerfile-deepstream-6.3-triton-devel .
```

## 4.2 Test the Docker Image
@@ -202,7 +202,7 @@
```bash
docker run -i -t --rm \
-e DISPLAY=$DISPLAY \
-e XAUTHORITY=$XAUTHORITY \
-e NVIDIA_DRIVER_CAPABILITIES=all \
--gpus all nvidia-deepstream-samples bash
--gpus all deepstream-6.3 bash
```

Then execute the following inside the container:
@@ -252,9 +252,9 @@
```bash
docker run -i -t --rm \
-e DISPLAY=$DISPLAY \
-e XAUTHORITY=$XAUTHORITY \
-e NVIDIA_DRIVER_CAPABILITIES=all \
--gpus all nvidia-deepstream-samples bash
--gpus all deepstream-6.3 bash
cd /home/gstreamer-examples/deepstream-examples/deepstream-tracking
python3 gst-tracking.py -i /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4
python3 gst-tracking.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
```

When starting the Docker container with the above command, the switch `-v $(pwd):/home/gstreamer-examples` maps the local directory `$(pwd)`
@@ -302,7 +302,6 @@ pip3 install pyds-1.1.4-py3-none-linux_x86_64.whl

Replace `pyds-1.1.4-py3-none-linux_x86_64.whl` with the version that you downloaded.


## 5.3 Install Triton Inference Server

Before executing the examples that use Triton, you first need to install it locally. Start by installing the following package(s):
@@ -372,7 +371,6 @@
```bash
cd /opt/nvidia/deepstream/deepstream/samples
./prepare_ds_triton_model_repo.sh
```
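
As a quick sanity check, you can list the model repository the script generates. This assumes the default `triton_model_repo` location under the samples directory; adjust the path if your installation differs:

```bash
# The repository should contain one sub-directory per prepared model.
ls /opt/nvidia/deepstream/deepstream/samples/triton_model_repo
```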


## 5.6 Testing Triton Installation

Test that the `nvinferserver` plugin can be found
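The check itself is collapsed in this diff; a minimal version using standard GStreamer tooling would be the following, assuming `nvinferserver` is the element name shipped with your DeepStream installation:

```bash
# Prints the element's pads and properties; a "No such element" error
# means the plugin is not on the GStreamer plugin path.
gst-inspect-1.0 nvinferserver
```
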
86 changes: 79 additions & 7 deletions deepstream-examples/deepstream-tracking-parallel/README.md
@@ -1,4 +1,4 @@
# Deepstream Tracking
# 1 Deepstream Parallel Tracking

This example shows how to split an input stream into two, using a tee-element, so that two different image processing pipelines can process the same stream.
This example processes the split streams using the same inference elements, but they can be different for each stream. It appears that you need to add an
@@ -9,15 +9,15 @@ nvstreammux-element into both of the processing streams, after the tee-element,
* PGIE_CLASS_ID_PERSON = 2
* PGIE_CLASS_ID_ROADSIGN = 3
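
A minimal `gst-launch-1.0` sketch of this arrangement (not the full example): one source is split with a tee, and each branch gets its own nvstreammux before its inference chain. The configuration file names here are illustrative, not the ones used by this example.

```bash
gst-launch-1.0 \
  nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! tee name=t \
  t. ! queue ! mux1.sink_0 \
  t. ! queue ! mux2.sink_0 \
  nvstreammux name=mux1 width=1280 height=720 batch-size=1 ! \
  nvinfer config-file-path=pgie_config_1.txt ! queue ! nvvideoconvert ! nvdsosd ! nveglglessink \
  nvstreammux name=mux2 width=1280 height=720 batch-size=1 ! \
  nvinfer config-file-path=pgie_config_2.txt ! queue ! nvvideoconvert ! nvdsosd ! fakesink
```

The per-branch muxes attach the batch metadata that nvinfer and nvtracker expect, which is why one is needed in each branch after the tee.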

# Pipeline
# 2 Pipeline

The figure below shows the pipeline.

![Image of the pipeline](./gst-tracking-parallel.pdf)

# Processing Pipeline Configurations
# 3 Processing Pipeline Configurations

## Pipeline 1
## 3.1 Pipeline 1

Configuration files for the inference- and tracker elements:

@@ -32,7 +32,7 @@ Configuration files for the inference- and tracker elements:
* Tracker
* Configuration file: [tracker_config_1.txt](tracker_config_1.txt)

## Pipeline 2
## 3.2 Pipeline 2

Configuration files for the inference- and tracker elements:

@@ -47,7 +47,7 @@ Configuration files for the inference- and tracker elements:
* Tracker
* Configuration file: [tracker_config_2.txt](tracker_config_2.txt)

## Requirements
## 3.3 Requirements

* DeepStreamSDK 6.1.1
* Python 3.8
@@ -57,7 +57,7 @@ Configuration files for the inference- and tracker elements:
* gstreamer1.0-plugins-bad
* gstreamer1.0-plugins-ugly

## How to Run the Example
## 3.4 How to Run the Example

In order to get help regarding input parameters, execute the following:

@@ -84,3 +84,75 @@
The dot file can be converted into a PDF as follows:

```bash
dot -Tpdf <NAME-OF-THE-DOT-FILE> -o output.pdf
```
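
The `.dot` file itself is produced by GStreamer whenever `GST_DEBUG_DUMP_DOT_DIR` is set; this is standard GStreamer behaviour, sketched here with a trivial pipeline (the example scripts may expose their own switch for this):

```bash
# GStreamer writes one graph per pipeline state change into this directory.
export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline-graphs
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
gst-launch-1.0 videotestsrc num-buffers=100 ! autovideosink
# File names carry a timestamp prefix; this picks the PAUSED->PLAYING graph.
dot -Tpdf "$GST_DEBUG_DUMP_DOT_DIR"/*PAUSED_PLAYING.dot -o output.pdf
```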

# 4 Test Pipelines

The following test pipelines can be launched with `gst-launch-1.0`. Requirements for running them:

* Deepstream 6.3

## 4.1 Processing Several Streams Using a Single Pipeline

Figure 1 shows the pipeline, in which several streams are connected to a single processing pipeline. Some details have been omitted
from the pipeline so that it fits better on the screen.

<figure align="center">
<img src="./figures/multi_input_pipeline.png" width="900">
<figcaption>Figure 1. Parallel processing of several streams with a single pipeline.</figcaption>
</figure>

### 4.1.1 Processing Pipeline with 4 Input Streams, Video- and Filesinks

```bash
gst-launch-1.0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_1 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_2 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_3 \
nvstreammux name=m width=1280 height=720 batch-size=4 ! nvinfer config-file-path=dstest2_pgie_config.txt batch-size=4 \
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine \
! queue ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf_uniqueid.yml \
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! queue ! \
nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! queue ! \
nvdsosd ! tee name=t \
t. ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc profile=High bitrate=10000000 ! h264parse ! matroskamux ! \
filesink location=4_stream_output.mkv \
t. ! queue ! nvvideoconvert ! nveglglessink
```
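
To sanity-check the recording, standard GStreamer tools can be used, for example (assuming `gst-discoverer-1.0` and `gst-play-1.0` are installed alongside the base plugins):

```bash
# Print container and stream details of the recorded file.
gst-discoverer-1.0 4_stream_output.mkv
# Play the file back.
gst-play-1.0 4_stream_output.mkv
```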

### 4.1.2 Processing Pipeline with 20 Input Streams, Video- and Filesinks

```bash
gst-launch-1.0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_1 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_2 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_3 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_4 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_5 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_6 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_7 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_8 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_9 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_10 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_11 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_12 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_13 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_14 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_15 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_16 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_17 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_18 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4 ! queue ! m.sink_19 \
nvstreammux name=m width=1280 height=720 batch-size=20 ! nvinfer config-file-path=dstest2_pgie_config.txt \
batch-size=30 \
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine ! queue ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf_uniqueid.yml \
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! queue ! \
nvmultistreamtiler rows=5 columns=4 width=1280 height=720 ! queue ! \
nvdsosd ! tee name=t \
t. ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h264enc profile=High bitrate=10000000 ! h264parse ! matroskamux ! \
filesink location=20_stream_output.mkv \
t. ! queue ! nvvideoconvert ! nveglglessink
```
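
Writing 20 `nvurisrcbin` branches by hand is error-prone. As a sketch (not part of the repository), the same launch line can be generated with a small shell loop; the paths and configs mirror the hand-written pipeline above, with the filesink branch omitted to avoid caps-quoting issues under word splitting:

```bash
#!/usr/bin/env bash
# Generate the 20-input source list instead of typing it out by hand.
STREAMS=/opt/nvidia/deepstream/deepstream/samples/streams
SOURCES=""
for i in $(seq 0 19); do
  # Alternate between the H.264 and H.265 sample clips, as above.
  if [ $((i % 2)) -eq 0 ]; then
    SOURCES+="nvurisrcbin uri=file://$STREAMS/sample_1080p_h264.mp4 ! queue ! m.sink_$i "
  else
    SOURCES+="nvurisrcbin uri=file://$STREAMS/sample_1080p_h265.mp4 ! queue ! m.sink_$i "
  fi
done

# Word splitting on $SOURCES is intentional here.
gst-launch-1.0 $SOURCES \
  nvstreammux name=m width=1280 height=720 batch-size=20 ! \
  nvinfer config-file-path=dstest2_pgie_config.txt batch-size=30 \
  model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine ! queue ! \
  nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf_uniqueid.yml \
  ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! queue ! \
  nvmultistreamtiler rows=5 columns=4 width=1280 height=720 ! queue ! nvdsosd ! nveglglessink
```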
