Commit 6968617: nvinfer and nvinferserver hybrid examples

JarnoRalli committed Feb 22, 2025
1 parent 282c3e0 commit 6968617

Showing 8 changed files with 411 additions and 3 deletions.
9 changes: 6 additions & 3 deletions deepstream-examples/README.md
@@ -15,14 +15,17 @@ List of examples:

* [deepstream-tracking](deepstream-tracking/README.md)
  * 4-class object detector with tracking
  * Tested with DeepStream 6.3
* [deepstream-tracking-parallel](deepstream-tracking-parallel/README.md)
  * 4-class object detector with tracking
  * Splits the input stream into two and runs two pipelines on the split streams
  * Tested with DeepStream 6.1
* [deepstream-triton-tracking](deepstream-triton-tracking/README.md)
  * 4-class object detector with tracking, uses a local version of the Triton Inference Server for inference
  * Tested with DeepStream 6.3
* [deepstream-hybrid](deepstream-hybrid/README.md)
  * Examples related to combining `nvinfer` and `nvinferserver`
  * Tested with DeepStream 6.3
* [deepstream-retinaface](deepstream-retinaface/README.md)
  * RetinaFace bbox- and landmark detector
  * Uses a custom parser called [NvDsInferParseCustomRetinaface](src/retinaface_parser/nvdsparse_retinaface.cpp)
79 changes: 79 additions & 0 deletions deepstream-examples/deepstream-hybrid/README.md
@@ -0,0 +1,79 @@
# 1 DeepStream Hybrid Inference

This directory contains examples of running Triton models (with `Gst-nvinferserver`) and non-Triton models (with `Gst-nvinfer`) in the same pipeline.
This kind of hybrid approach gives a lot of flexibility when creating different image processing pipelines.


## 1.1 Running the Examples with Docker

First, start the Docker container:

```bash
docker compose up --build
```

The `--build` option is only needed the first time you bring up the service. After
that you can find the ID of the container using

```bash
docker ps
```

Then you can execute `bash` inside the container:

```bash
xhost +
docker exec -it <ID> bash
```
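The two steps above can also be scripted so that the container ID does not have to be copied by hand. A minimal sketch, assuming the `container_name: triton-server` set in `docker-compose.yml`:

```shell
# Resolve the container ID by name (assumes container_name: triton-server
# from docker-compose.yml) instead of copying it from `docker ps` output.
CID="$(docker ps --filter name=triton-server --format '{{.ID}}' 2>/dev/null | head -n 1)"
if [ -n "$CID" ]; then
    xhost +                      # allow the container to use the host X display
    docker exec -it "$CID" bash  # open an interactive shell in the container
else
    echo "triton-server container is not running"
fi
```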

### 1.1.1 Single Pipeline

The following example executes a single image processing pipeline in which the detector (primary mode) runs locally in Triton, while the classifier (secondary mode) that classifies
the car model runs without Triton.

```bash
gst-launch-1.0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 ! \
nvinferserver config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf.yml \
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! nvinfer config-file-path=dstest2_sgie2_config.txt ! \
nvdsosd display-clock=1 ! nvvideoconvert ! nveglglessink
```
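Since the `nvinferserver` element depends on the Triton server started by the compose service, it can be useful to check readiness before launching the pipeline. A hedged sketch using Triton's KServe HTTP health endpoint on port 8000 (the HTTP port published in `docker-compose.yml`):

```shell
# Query Triton's readiness endpoint; it returns HTTP 200 once the server
# and its models are ready to serve inference requests.
if curl -sf http://localhost:8000/v2/health/ready >/dev/null 2>&1; then
    TRITON_READY=1
    echo "Triton is ready"
else
    TRITON_READY=0
    echo "Triton is not ready (yet)"
fi
```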

### 1.1.2 Pipeline with Two Branches

In this pipeline we process two different streams with the same detector (primary mode) running locally in Triton. After the tracker we split the streams using `nvstreamdemux` and
add a classifier (secondary mode) that classifies the car model. If the classifier is in the `src_0` branch, everything works as expected. However, if the classifier is in
the `src_1` branch, the pipeline does not work.

**This works in DeepStream 6.3**

```bash
gst-launch-1.0 -e \
nvstreammux name=mux width=1280 height=720 batch-size=2 ! \
nvinferserver config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt batch-size=2 ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
nvstreamdemux name=demux \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_1 \
demux.src_0 ! queue ! nvinfer config-file-path=dstest2_sgie2_config.txt ! nvdsosd ! nvvideoconvert ! nveglglessink \
demux.src_1 ! queue ! nvdsosd ! nvvideoconvert ! nveglglessink
```

**This doesn't work in DeepStream 6.3**

```bash
gst-launch-1.0 -e \
nvstreammux name=mux width=1280 height=720 batch-size=2 ! \
nvinferserver config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt batch-size=2 ! \
nvtracker tracker-width=640 tracker-height=480 ll-config-file=config_tracker_NvDCF_perf.yml ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! \
nvstreamdemux name=demux per-stream-eos=true \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_0 \
nvurisrcbin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! mux.sink_1 \
demux.src_0 ! queue ! nvdsosd ! nvvideoconvert ! nveglglessink \
demux.src_1 ! queue ! nvinfer config-file-path=dstest2_sgie2_config.txt ! nvdsosd ! nvvideoconvert ! nveglglessink
```

74 changes: 74 additions & 0 deletions deepstream-examples/deepstream-hybrid/config_tracker_NvDCF_perf.yml
@@ -0,0 +1,74 @@
%YAML:1.0
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

BaseConfig:
  minDetectorConfidence: 0.5   # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  enableBboxUnClipping: 1   # In case the bbox is likely to be clipped by image border, unclip bbox
  maxTargetsPerStream: 150  # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 5             # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 30    # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1      # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 1   # Use 64-bit long Unique ID when assigning tracker ID. Default is [true]

DataAssociator:
  dataAssociatorType: 0       # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0   # the type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1          # If checked, only the same-class objects are associated with each other. Default: true

  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.0            # Min total score
  minMatchingScore4SizeSimilarity: 0.6     # Min bbox size similarity score
  minMatchingScore4Iou: 0.0                # Min IOU score
  minMatchingScore4VisualSimilarity: 0.7   # Min visual similarity score

  # [Association Metric: Weights]
  matchingScoreWeight4VisualSimilarity: 0.6   # Weight for the visual similarity (in terms of correlation response ratio)
  matchingScoreWeight4SizeSimilarity: 0.0     # Weight for the Size-similarity score
  matchingScoreWeight4Iou: 0.4                # Weight for the IOU score

StateEstimator:
  stateEstimatorType: 1   # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  processNoiseVar4Loc: 2.0            # Process noise variance for bbox center
  processNoiseVar4Size: 1.0           # Process noise variance for bbox size
  processNoiseVar4Vel: 0.1            # Process noise variance for velocity
  measurementNoiseVar4Detector: 4.0   # Measurement noise variance for detector's detection
  measurementNoiseVar4Tracker: 16.0   # Measurement noise variance for tracker's localization

VisualTracker:
  visualTrackerType: 1   # the type of visual tracker among { DUMMY=0, NvDCF=1 }

  # [NvDCF: Feature Extraction]
  useColorNames: 1         # Use ColorNames feature
  useHog: 1                # Use Histogram-of-Oriented-Gradient (HOG) feature
  featureImgSizeLevel: 2   # Size of a feature image. Valid range: {1, 2, 3, 4, 5}, from the smallest to the largest
  featureFocusOffsetFactor_y: -0.2   # The offset for the center of hanning window relative to the feature height. The center of hanning window would move by (featureFocusOffsetFactor_y*featureMatSize.height) in vertical direction

  # [NvDCF: Correlation Filter]
  filterLr: 0.075               # learning rate for DCF filter in exponential moving average. Valid Range: [0.0, 1.0]
  filterChannelWeightsLr: 0.1   # learning rate for the channel weights among feature channels. Valid Range: [0.0, 1.0]
  gaussianSigma: 0.75           # Standard deviation for Gaussian for desired response when creating DCF filter [pixels]
42 changes: 42 additions & 0 deletions deepstream-examples/deepstream-hybrid/docker-compose.yml
@@ -0,0 +1,42 @@
services:
  triton-server:
    build:
      context: ../../docker
      dockerfile: Dockerfile-deepstream-6.3-triton-devel
    container_name: triton-server
    entrypoint: |
      /bin/bash -c "
      cd /opt/nvidia/deepstream/deepstream/samples/ &&
      if [ ! -f /opt/nvidia/deepstream/deepstream-6.3/samples/trtis_model_repo/init_done ]; then
        ./prepare_ds_triton_model_repo.sh &&
        rm -fR /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/densenet_onnx &&
        touch /opt/nvidia/deepstream/deepstream-6.3/samples/trtis_model_repo/init_done
      fi &&
      tritonserver --log-verbose=2 --log-info=1 --log-warning=1 --log-error=1 --model-repository=/opt/nvidia/deepstream/deepstream/samples/triton_model_repo"
    networks:
      - triton-network
    ports:
      - "8000:8000"   # HTTP
      - "8001:8001"   # gRPC
      - "8002:8002"   # Metrics
    environment:
      DISPLAY: "${DISPLAY}"                  # Forward display for X11
      XAUTHORITY: "${XAUTHORITY}"            # X11 authority
      NVIDIA_DRIVER_CAPABILITIES: "all"      # Enable NVIDIA features
    runtime: nvidia   # NVIDIA runtime
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix   # X11 socket
      - models:/opt/nvidia/deepstream/deepstream-6.3/samples/trtis_model_repo
      - ./:/home/gstreamer_examples

volumes:
  models:

networks:
  triton-network:
    driver: bridge
82 changes: 82 additions & 0 deletions deepstream-examples/deepstream-hybrid/dstest2_sgie2_config.txt
@@ -0,0 +1,82 @@
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=1
model-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
mean-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/mean.ppm
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/cal_trt.bin
force-implicit-batch-dim=1
batch-size=16
# 0=FP32 and 1=INT8 mode
network-mode=1
input-object-min-width=64
input-object-min-height=64
process-mode=2
model-color-format=1
gie-unique-id=3
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
#scaling-filter=0
#scaling-compute-hw=0
32 changes: 32 additions & 0 deletions deepstream-examples/deepstream-hybrid/dstest2_tracker_config.txt
@@ -0,0 +1,32 @@
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Mandatory properties for the tracker:
# tracker-width
# tracker-height: needs to be multiple of 6 for NvDCF
# gpu-id
# ll-lib-file: path to low-level tracker lib
# ll-config-file: required for NvDCF, optional for KLT and IOU
#
[tracker]
tracker-width=640
tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
#enable-past-frame=1
#enable-batch-process=1