Triton tracking works with Docker compose
JarnoRalli committed Jan 8, 2025
1 parent 4fe34ea commit f3f4608
Showing 10 changed files with 344 additions and 121 deletions.
1 change: 1 addition & 0 deletions conda/gst-pytorch-gpu-python3.10.yml
@@ -24,5 +24,6 @@ dependencies:
- matplotlib
- numpy
- scikit-image
- build


1 change: 1 addition & 0 deletions conda/gst-pytorch-gpu-python3.8.yml
@@ -24,5 +24,6 @@ dependencies:
- matplotlib
- numpy
- scikit-image
- build


141 changes: 129 additions & 12 deletions deepstream-examples/deepstream-triton-tracking/README.md
@@ -22,54 +22,171 @@ Following inference and tracker components are used:
There are two versions:
* [gst-triton-tracking.py](gst-triton-tracking.py)
* This version draws bounding boxes and object information using DeepStream's native approach.
* Triton and `gst-triton-tracking.py` are run in the same host.
* Uses configuration files from `/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton`
* [gst-triton-tracking-v2.py](gst-triton-tracking-v2.py)
* This version draws the information so that bounding boxes and text boxes for smaller objects are drawn first.
Everything else being equal, smaller objects tend to be further away from the camera. Bounding-box colors also differ for each object type.
* Uses gRPC for inference. If CUDA buffer sharing is disabled, the Triton server and `gst-triton-tracking-v2.py` can run on different hosts (see the snippet after this list for where these settings live).
* Uses configuration files from `/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc/`
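
For reference, the relevant gRPC settings (the `localhost:` endpoint and the `enable_cuda_buffer_sharing` flag) live in the sample nvinferserver configuration files referenced above. A quick way to locate them with plain `grep` (the exact key layout may vary between DeepStream versions):

```bash
# Locate the gRPC endpoint and CUDA buffer-sharing settings in the sample configs.
# Both search strings are taken from the sed commands used later in this README.
grep -rn -e "localhost:" -e "enable_cuda_buffer_sharing" \
  /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc/*.txt
```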

## Observations

### DeepstreamSDK 6.1.1

When using the `nvv4l2h264enc` encoder in the file-sink branch, the pipeline became unresponsive after processing some frames. The same pipeline works with `x264enc`
without any problems.

### DeepstreamSDK 6.3

The GStreamer plug-in `x264enc` appears to have been removed.
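
To check which of these encoders a given DeepStream installation actually provides, you can query GStreamer directly; a minimal check (plug-in names as used above):

```bash
# Each command prints the plug-in details if the encoder is available, or an error if it is not.
gst-inspect-1.0 nvv4l2h264enc
gst-inspect-1.0 x264enc
```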

## Requirements

* DeepStreamSDK 6.1.1 or 6.3
* NVIDIA Container Toolkit and Docker Compose (if using Docker)
* Python >= 3.6
* Gst-python
* pyds 1.1.5
* gstreamer1.0-plugins-good
* gstreamer1.0-plugins-bad
* gstreamer1.0-plugins-ugly
* Triton Inference Server (locally built or Docker image)

## How to Run the Example Locally

Since `gst-triton-tracking-v2.py` uses configuration files from `/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc`,
the expectation is that the Triton server runs on the same machine as `gst-triton-tracking-v2.py`. If this is not the case,
you need to modify the IP address of the Triton server in the configuration files.
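
If Triton does run on another machine, one way to retarget the configuration files is a search-and-replace over the sample directory. A sketch, where `192.168.1.50` is a placeholder address:

```bash
# Point the gRPC configs at a remote Triton host; 192.168.1.50 is a placeholder.
# Editing files under /opt may require elevated permissions.
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc
find . -type f -name "*.txt" -exec sed -i 's/localhost:/192.168.1.50:/g' {} +
```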

### Building Models

The first step is to build the TensorRT models:

```bash
/opt/nvidia/deepstream/deepstream/samples/prepare_ds_triton_model_repo.sh
```

Then you start the Triton server:

```bash
tritonserver \
--log-verbose=2 --log-info=1 --log-warning=1 --log-error=1 \
--model-repository=/opt/nvidia/deepstream/deepstream/samples/triton_model_repo
```

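Before moving on, you can optionally verify that the server is up via its HTTP health endpoint (default port 8000, the same check used in the Docker section below):

```bash
curl -v http://localhost:8000/v2/health/ready
```
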
### Running the Tracking Example

To get help regarding input parameters, execute:

```bash
python3 gst-triton-tracking.py -h
python3 gst-triton-tracking-v2.py -h
```

To process an MP4 file (with H.264-encoded video), execute:

```bash
python3 gst-triton-tracking.py -i <PATH-TO-INPUT-FILE> -o <PATH-TO-OUTPUT-FILE>
python3 gst-triton-tracking-v2.py -i <PATH-TO-INPUT-FILE> -o <PATH-TO-OUTPUT-FILE>
```

If you have DeepStream with samples installed, you can execute the following:

```bash
python3 gst-triton-tracking.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
python3 gst-triton-tracking-v2.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
```

## How to Run the Example Using Docker

Here the expectation is that the Docker container running the Triton server and the container in which
`gst-triton-tracking-v2.py` is executed run on the same host. The address of the Triton server needs to be changed
from `localhost` to `triton-server` in the configuration files, as explained later on.
You need to build the `deepstream-6.3` Docker image first:

```bash
cd gstreamer-examples/docker
docker build -t deepstream-6.3 -f ./Dockerfile-deepstream-6.3-triton-devel .
```
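
A quick check that the image was built and tagged as expected:

```bash
docker image ls deepstream-6.3
```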

### Launch Triton Server Using Docker Compose

Launch Triton server using Docker compose as follows:

```bash
docker compose up --build
```
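
If you prefer to keep the terminal free, the stack can also be started detached and the Triton logs followed separately (standard Docker Compose commands; `triton-server` is the service name from `docker-compose.yml`):

```bash
docker compose up --build -d
docker compose logs -f triton-server
```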

Verify that Triton is running correctly by executing the following command on the host computer:

```bash
curl -v http://localhost:8000/v2/health/ready
```

If Triton is running correctly, you should get an answer similar to:

```bash
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET /v2/health/ready HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Length: 0
< Content-Type: text/plain
<
* Connection #0 to host localhost left intact
```

### Launch Deepstream Tracking

Next we launch the Docker container used for executing the tracking code. The following commands
are run on the host. First we allow any client to interact with the local X server:

```bash
xhost +
```

Then we start the Docker container by running:

```bash
docker run -i -t \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/home/gstreamer-examples \
-e DISPLAY=$DISPLAY \
-e XAUTHORITY=$XAUTHORITY \
-e NVIDIA_DRIVER_CAPABILITIES=all \
--network deepstream-triton-tracking_triton-network \
--gpus all deepstream-6.3 bash
```
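
The `--network` name above is derived from the Compose project name, which by default is the directory containing `docker-compose.yml`. If your checkout uses a different directory name, check the actual network name first:

```bash
# Lists networks whose name contains "triton-network"; use the listed name with --network.
docker network ls --filter name=triton-network
```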

Once logged into the container, first check that the container can display to the host's X server by running:

```bash
glmark2
```

If everything works fine, you should see a video of a rotating horse. Then verify that you can reach the Triton
server:

```bash
curl -v http://triton-server:8000/v2/health/ready
```

The last step before executing the code is to replace the address `localhost:` with `triton-server:` in the
configuration files and to disable CUDA buffer sharing:

```bash
cd /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc
find . -type f -name "*.txt" -exec sed -i 's/localhost:/triton-server:/g' {} +
find . -type f -name "*.txt" -exec sed -i 's/enable_cuda_buffer_sharing: true/enable_cuda_buffer_sharing: false/g' {} +
```
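
A plain sanity check that the substitutions took effect (nothing DeepStream-specific, just `grep`):

```bash
grep -rn "localhost:" . || echo "OK: no localhost references left"
grep -rln "triton-server:" .
```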

Now we are ready to run the Triton tracking code:

```bash
cd /home/gstreamer-examples
python3 gst-triton-tracking-v2.py -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
```
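
Because the container bind-mounts the current host directory to `/home/gstreamer-examples`, writing the processed video there keeps it on the host after the container exits. For example (`tracked_output.mp4` is an arbitrary name):

```bash
cd /home/gstreamer-examples
python3 gst-triton-tracking-v2.py \
    -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 \
    -o /home/gstreamer-examples/tracked_output.mp4
```
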
41 changes: 41 additions & 0 deletions deepstream-examples/deepstream-triton-tracking/docker-compose.yml
@@ -0,0 +1,41 @@
services:
  triton-server:
    build:
      context: ../../docker
      dockerfile: Dockerfile-deepstream-6.3-triton-devel
    container_name: triton-server
    entrypoint: |
      /bin/bash -c "
      cd /opt/nvidia/deepstream/deepstream/samples/ &&
      if [ ! -f /opt/nvidia/deepstream/deepstream-6.3/samples/trtis_model_repo/init_done ]; then
        ./prepare_ds_triton_model_repo.sh &&
        rm -fR /opt/nvidia/deepstream/deepstream/samples/triton_model_repo/densenet_onnx &&
        touch /opt/nvidia/deepstream/deepstream-6.3/samples/trtis_model_repo/init_done
      fi &&
      tritonserver --log-verbose=2 --log-info=1 --log-warning=1 --log-error=1 --model-repository=/opt/nvidia/deepstream/deepstream/samples/triton_model_repo"
    networks:
      - triton-network
    ports:
      - "8000:8000"   # HTTP
      - "8001:8001"   # gRPC
      - "8002:8002"   # Metrics
    environment:
      DISPLAY: "${DISPLAY}"                 # Forward display for X11
      XAUTHORITY: "${XAUTHORITY}"           # X11 authority
      NVIDIA_DRIVER_CAPABILITIES: "all"     # Enable NVIDIA features
    runtime: nvidia                         # NVIDIA runtime
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix       # X11 socket
      - models:/opt/nvidia/deepstream/deepstream-6.3/samples/trtis_model_repo

volumes:
  models:

networks:
  triton-network:
    driver: bridge
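
Note that the entrypoint above prepares the model repository only once, guarded by the `init_done` marker stored in the named `models` volume. To force the models to be rebuilt, remove the volume and bring the stack up again; a sketch using Compose's own volume handling:

```bash
docker compose down -v     # stops the stack and removes the named "models" volume
docker compose up --build  # next start re-runs prepare_ds_triton_model_repo.sh
```
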
@@ -28,5 +28,5 @@ tracker-height=384
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=config_tracker_NvDCF_perf.yml
enable-past-frame=1
enable-batch-process=1
#enable-past-frame=1
#enable-batch-process=1