
Updated the deepstream hybrid example #25

Merged 1 commit on Feb 28, 2025

39 changes: 23 additions & 16 deletions deepstream-examples/deepstream-hybrid/README.md
@@ -35,6 +35,8 @@ ls -la

### 1.1.1 Single Pipeline

* Tested in DeepStream 6.3

The following example runs a single image processing pipeline in which the detector (primary mode) runs locally in Triton and the classifier (secondary mode), which classifies
the car make/brand, runs without Triton. You can launch the example by executing the following inside the `/home/gstreamer_examples/` directory:
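
Note that the launch command itself is collapsed out of this diff view. Purely as an illustrative sketch (not the command from the repository), a single-stream hybrid pipeline along these lines could look as follows, reusing the sample stream and config files from the two-branch examples further down:

```bash
# Illustrative sketch only: single stream, primary detector in Triton (nvinferserver),
# secondary car-make classifier outside Triton (nvinfer). Paths and config names are
# assumed from the two-branch examples below.
gst-launch-1.0 -e \
nvstreammux name=mux width=1920 height=1080 batch-size=1 ! \
nvinferserver config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt batch-size=1 ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
nvinfer config-file-path=dstest2_sgie2_config.txt ! \
nvdsosd display-clock=1 ! nvvideoconvert ! nveglglessink \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_0
```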

@@ -50,58 +52,63 @@ nvdsosd display-clock=1 ! nvvideoconvert ! nveglglessink

### 1.1.2 Pipeline with Two Branches

* Tested in DeepStream 6.3

In this pipeline we process two different streams with the same detector (primary mode) running locally in Triton. After the tracker we split the streams using `nvstreamdemux` and
add a classifier (secondary mode) that classifies the car make/brand. The expectation is that the car make is displayed in the output of the branch that contains the classifier.
Figure 1. shows the pipeline with the secondary classifier in the `nvstreamdemux` `src_1` output.

<figure align="center">
<img src="./figures/hybrid_pipeline.png" width="900">
<figcaption>Figure 1. Hybrid pipeline with two branches.</figcaption>
</figure>

> [!IMPORTANT]
> Please note that you need to add `nvstreammux` after the `nvstreamdemux` in both of the outputs, with the correct `batch-size` parameter!
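
In the full commands below this shows up as the following per-branch fragment (not a standalone pipeline, just the relevant portion; `mux1` is the name used for the first branch and `batch-size=1` matches the single stream flowing through that branch):

```bash
# Fragment of the commands below: each demuxer output is re-batched by its own
# nvstreammux before further processing in that branch.
demux.src_0 ! mux1.sink_0 \
nvstreammux name=mux1 width=1920 height=1080 batch-size=1 ! \
queue ! nvdsosd ! nvvideoconvert ! nveglglessink
```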

**Secondary Classifier in Demuxer src_0**

In this case the classifier is in the `src_0` output of the `nvstreamdemux` and we can see the car make in the sink. You can launch the example by executing the following inside the `/home/gstreamer_examples/` directory:

```bash
gst-launch-1.0 -e \
nvstreammux name=mux width=1920 height=1080 batch-size=2 ! \
nvinferserver config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt batch-size=2 ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
nvstreamdemux name=demux \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_1 \
demux.src_0 ! mux1.sink_0 nvstreammux name=mux1 width=1920 height=1080 batch-size=1 ! queue ! nvinfer config-file-path=dstest2_sgie2_config.txt ! nvdsosd ! nvvideoconvert ! nveglglessink \
demux.src_1 ! mux2.sink_0 nvstreammux name=mux2 width=1920 height=1080 batch-size=1 ! queue ! nvdsosd ! nvvideoconvert ! nveglglessink
```

Figure 2. shows the pipeline with the classifier in the `src_0` output.

<figure align="center">
<img src="./figures/pipeline_working.png" width="900">
<figcaption>Figure 2. Pipeline that works.</figcaption>
<img src="./figures/pipeline_1.png" width="900">
<figcaption>Figure 2. Secondary inference in nvstreammux src_0 output.</figcaption>
</figure>

**Secondary Classifier in Demuxer src_1**

In this case the classifier is in the `src_1` output of the `nvstreamdemux` and we can see the car make in the sink. You can launch the example by executing the following inside the `/home/gstreamer_examples/` directory:

```bash
gst-launch-1.0 -e \
nvstreammux name=mux width=1920 height=1080 batch-size=2 ! \
nvinferserver config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt batch-size=2 ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
nvstreamdemux name=demux per-stream-eos=true \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_1 \
demux.src_0 ! mux1.sink_0 nvstreammux name=mux1 width=1920 height=1080 batch-size=1 ! queue ! nvdsosd ! nvvideoconvert ! nveglglessink \
demux.src_1 ! mux2.sink_0 nvstreammux name=mux2 width=1920 height=1080 batch-size=1 ! queue ! nvinfer config-file-path=dstest2_sgie2_config.txt ! nvdsosd ! nvvideoconvert ! nveglglessink
```

Figure 3. shows the pipeline with the classifier in the `src_1` output.

<figure align="center">
<img src="./figures/pipeline_not_working.png" width="900">
<figcaption>Figure 3. Pipeline that doesn not work.</figcaption>
<img src="./figures/pipeline_2.png" width="900">
<figcaption>Figure 3. Secondary inference in nvstreammux src_1 output.</figcaption>
</figure>
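
If you want to check the resulting topology yourself (for example, that each demuxer branch really gets its own `nvstreammux`), GStreamer can dump the pipeline graph to Graphviz `.dot` files. This is a generic gst-launch feature rather than something specific to this example:

```bash
# Generic GStreamer debugging aid: dump pipeline graphs on every state change.
export GST_DEBUG_DUMP_DOT_DIR=/tmp
# ...then run one of the gst-launch-1.0 commands above...
# Render the PLAYING-state dump to an image (requires graphviz to be installed).
dot -Tpng /tmp/*PLAYING*.dot -o /tmp/pipeline.png
```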

78 changes: 78 additions & 0 deletions deepstream-examples/deepstream-hybrid/dstest2_pgie_config.txt
@@ -0,0 +1,78 @@
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
process-mode=1
model-color-format=0
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
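
The README commands above pass only the secondary config (`dstest2_sgie2_config.txt`) to `nvinfer`; this new primary config is not referenced by them. As an assumption-labelled sketch, a config like this would be consumed the same way, via the `config-file-path` property of an `nvinfer` element, which runs the primary detector without Triton. As the comment header notes, GObject properties set on the element override the corresponding values from the file.

```bash
# Hypothetical usage sketch (not taken from the README): run the primary detector
# with nvinfer and this config instead of nvinferserver/Triton.
gst-launch-1.0 -e \
nvstreammux name=mux width=1920 height=1080 batch-size=1 ! \
nvinfer config-file-path=dstest2_pgie_config.txt ! \
nvdsosd ! nvvideoconvert ! nveglglessink \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_0
```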