Improvements to the documentation and code quality (#17)
* Improvements to the documentation and code quality

* Added status badge

* Added README.md for Conda directory
JarnoRalli authored Sep 6, 2024
1 parent eaa8210 commit c80d538
Showing 38 changed files with 1,140 additions and 575 deletions.
3 changes: 3 additions & 0 deletions .flake8
@@ -0,0 +1,3 @@
[flake8]
max-line-length = 170
extend-ignore = E203
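
With this file in the repository root, flake8 picks the settings up automatically. As a quick local check (a sketch, assuming `flake8` is installed in the active Python environment):

```bash
# Install flake8 and lint the repository using the settings from .flake8
pip install flake8
flake8 .
```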
25 changes: 25 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,25 @@
name: pre-commit

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: install pre-commit
        run: |
          pip install --upgrade pip
          pip install pre-commit jupyter
          pre-commit install
      - name: run pre-commit hooks
        run: |
          pre-commit run --color=always --all-files
27 changes: 27 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,27 @@
repos:
  - repo: local
    hooks:
      - id: jupyter-nb-clear-output
        name: jupyter-nb-clear-output
        files: \.ipynb$
        stages: [commit]
        language: system
        entry: jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace

  - repo: https://github.com/ambv/black
    rev: 22.12.0
    hooks:
      - id: black
        language_version: python3
        files: \.py$

  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
        files: \.py$

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
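
As a sketch of how these hooks can be exercised locally (assuming `pre-commit` and `jupyter` are installed, mirroring the CI workflow above):

```bash
# Install the git hooks defined in .pre-commit-config.yaml
pip install pre-commit jupyter
pre-commit install

# Run every hook against all files, as the CI job does
pre-commit run --all-files
```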
31 changes: 20 additions & 11 deletions README.md
@@ -1,24 +1,33 @@
[![pre-commit](https://github.com/JarnoRalli/gstreamer-examples/actions/workflows/pre-commit.yml/badge.svg?branch=main&event=push)](https://github.com/JarnoRalli/gstreamer-examples/actions/workflows/pre-commit.yml)

# GSTREAMER-EXAMPLES

This repository contains both GStreamer and Deepstream related examples in Python. Directories are as follows:
This repository contains examples related to GStreamer, Deepstream and Hailo. Some of the examples are written in Python
and some in C/C++.

# 1 Contents

Directories are as follows:

* [helper-package](helper-package/README.md). A package that contains helper functions and classes.
* [deepstream-examples](deepstream-examples/README.md). Deepstream related examples.
* [hailo-examples](hailo-examples/README.md). Hailo related examples.
* [gst-examples](gst-examples/README.md). Gst-examples.
* [docker](docker/README.md). Docker files for generating containers.
* [conda](conda/README.md). Conda virtual environments.

Paul Bridger has excellent tutorials regarding how to speed up inference. For anyone interested in the subject,
I recommend to take a look at:
I recommend taking a look at:
* https://paulbridger.com/posts/video-analytics-pytorch-pipeline/
* https://paulbridger.com/posts/video-analytics-pipeline-tuning/

## Helper-Package
# 2 Helper-Package

Helpers is a Python package that contains some helper routines for creating gst-pipelines. Most of the examples, if not all,
use modules from this package, so it needs to be available to Python. Easiest way to make this accessible is to install it as follows.
Helpers is a Python package that contains some helper routines for creating gst-pipelines. Most of the examples, if not all,
use modules from this package, so it needs to be available to Python. The Docker images in the directory [docker](./docker/README.md) install
this package automatically. The easiest way to make it accessible is to install it as follows.

Make sure that you have the latest version of PyPA's build installed:
Make sure that you have the latest version of the `build` package installed using the following command:

```bash
python3 -m pip install --upgrade build
@@ -31,7 +40,7 @@ cd helper-package
python3 -m build
```

Above command creates a new directory called `dist` where the package can be found. In order to install the created package,
The above command creates a new directory called `dist`, where the package can be found. To install the created package,
run the following command from the `dist` directory:

```bash
@@ -40,15 +49,15 @@ pip3 install ./helpers-0.0.1-py3-none-any.whl

Replace `helpers-0.0.1-py3-none-any.whl` with the actual name/path of the whl-file that was created.
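
As a quick sanity check (a sketch, assuming the package is named `helpers` as above), the installation can be verified with:

```bash
# Show the installed package metadata
pip3 show helpers

# Confirm that the package can be imported
python3 -c "import helpers"
```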

### Usage
## 2.1 Usage

Once you have installed the `helpers` package, you can use it as follows:

```bash
from helpers import *
```python
from helpers import gsthelpers
```

### Python Packages and Modules
## 2.2 Python Packages and Modules

For more information regarding Python packaging etc., take a look at:

39 changes: 39 additions & 0 deletions conda/README.md
@@ -0,0 +1,39 @@
# Conda Virtual Environments

If you use conda for managing Python virtual environments, you first need to install either [Miniconda](https://docs.conda.io/projects/miniconda/en/latest/) or [Anaconda](https://docs.anaconda.com/free/anaconda/install/index.html).

# 1 Libmamba Solver

Conda's own solver is very slow, so I recommend using `Libmamba`. To use the new solver, first update conda in the base environment (optional step):

```bash
conda update -n base conda
```

Then install and activate `Libmamba` as the solver:

```bash
conda install -n base conda-libmamba-solver
conda config --set solver libmamba
```
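
To confirm which solver conda is now configured to use (standard conda functionality):

```bash
conda config --show solver
```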

# 2 Environments

The following YAML configuration files for Conda environments are available:

* [gst-pytorch-gpu.yml](./gst-pytorch-gpu.yml)
  * **Environment name:** gst-pytorch-gpu
  * **Contains:** python 3.9, pytorch, pytorch-cuda=11.6, gstreamer, matplotlib, numpy

You can create a new virtual environment as follows:

```bash
conda env create -f <NAME-OF-THE-FILE>
```

Once the environment has been created, you can activate it by executing the following command:

```bash
conda activate <NAME-OF-THE-ENVIRONMENT>
```
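
A few related commands that can be useful when working with these environments (standard conda functionality, not specific to this repository):

```bash
# List all available environments
conda env list

# Deactivate the current environment when done
conda deactivate

# Remove an environment that is no longer needed
conda env remove -n <NAME-OF-THE-ENVIRONMENT>
```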

3 changes: 2 additions & 1 deletion conda/gst_pytorch_gpu.yml → conda/gst-pytorch-gpu.yml
@@ -1,4 +1,4 @@
name: gst-pytorch
name: gst-pytorch-gpu
channels:
- menpo
- conda-forge
@@ -14,6 +14,7 @@ dependencies:
- gst-plugins-base
- gst-plugins-good
- gst-plugins-bad
- gst-libav
- gstreamer
- gst-python
- pip
36 changes: 22 additions & 14 deletions deepstream-examples/README.md
@@ -1,6 +1,11 @@
# DEEPSTREAM EXAMPLES

This directory contains DeepStream related examples. Example code, along with configuration files etc., is placed inside sub-directories.
Before running the examples, it is a good idea to refresh the GStreamer plugin cache by running the following:

```bash
gst-inspect-1.0
```
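
If stale entries cause problems, one option is to delete the plugin registry cache and let `gst-inspect-1.0` rebuild it. The path below is an assumption about the default cache location on Linux:

```bash
# Remove the cached plugin registry (default location on most Linux systems)
rm -rf ~/.cache/gstreamer-1.0/

# Re-scan all plugins and rebuild the cache
gst-inspect-1.0 > /dev/null
```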

---

@@ -76,7 +81,7 @@ Related directories:

# 3 Activating Nvidia GPU

Before executing any of the examples, you need to install Nvidia driver. However, some systems have several graphics
Before executing any of the examples, you need to install the Nvidia driver. However, some systems have several graphics
cards, i.e. you might have both an Nvidia GPU and an Intel integrated graphics controller.
You can verify this by running the following command:

@@ -113,7 +118,7 @@ sudo prime-select <CARD>

---

# 3 Running the Examples Using Docker
# 4 Running the Examples Using Docker

This is the preferred way to run the tests. Before creating the docker image, you need to install Nvidia's Container Toolkit. Instructions can be found here:

@@ -139,7 +144,7 @@ You should see the following (or similar) output:
| 32% 38C P0 34W / 151W | 735MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
@@ -149,7 +154,7 @@

```

## 3.1 Create the Docker Image
## 4.1 Create the Docker Image

After this you can create the docker image used in the examples.

@@ -158,7 +163,7 @@ cd gstreamer-examples/docker
docker build -t nvidia-deepstream-samples -f ./Dockerfile-deepstream .
```
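
If the build succeeds, the image should show up in the local image list (standard Docker functionality):

```bash
docker image ls nvidia-deepstream-samples
```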

## 3.2 Test the Docker Image
## 4.2 Test the Docker Image

Some of the examples use the GStreamer plugin `nveglglessink` for showing the results in real time. `nveglglessink`
depends on OpenGL, so making sure that OpenGL works inside the container is essential. Make sure that `DISPLAY`
@@ -222,7 +227,7 @@ glmark2

A window should pop up, displaying a horse.

## 3.3 Execute the Examples
## 4.3 Execute the Examples

Run the following, from the `gstreamer-examples` directory, in order to start the docker container in interactive
mode and run one of the examples:
@@ -240,9 +245,12 @@ cd /home/gstreamer-examples/deepstream-examples/deepstream-tracking
python3 gst-tracking.py -i /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_1080p_h264.mp4
```

When starting the Docker container with the above command, the switch `-v $(pwd):/home/gstreamer-examples` maps the local directory `$(pwd)`
to a directory `/home/gstreamer-examples` inside the container.
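
For illustration only, an invocation along these lines could be used. The image name `nvidia-deepstream-samples` matches the build step above, while the display-related flags are assumptions about a typical X11 setup and may differ from the exact command used in this repository:

```bash
# Allow the container to access the local X server (assumption, see the OpenGL notes above)
xhost +local:docker

# Start the container with GPU access and the repository mounted inside it
docker run --gpus all -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v $(pwd):/home/gstreamer-examples \
    nvidia-deepstream-samples
```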

---

# 4 Running the Examples Without Docker
# 5 Running the Examples Without Docker

If you're not using Docker to run the examples, you need to install DeepStream in the host system, and also Triton Inference Server if you are planning on
executing Triton related examples. Due to the complexity of Nvidia's libraries, depending on the system you're using,
@@ -251,9 +259,9 @@ are for:

* Ubuntu 20.04

## 4.1 Install DeepStream SDK
## 5.1 Install DeepStream SDK

Follow these instructions for installing the DeepStream SDK
Follow these instructions for installing the DeepStream SDK
[https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html).

After installation, verify that the `nvinfer` plug-in can be found
@@ -270,7 +278,7 @@ sudo /opt/nvidia/deepstream/deepstream/install.sh

And then reboot.

## 4.2 Install DeepStream Python Bindings
## 5.2 Install DeepStream Python Bindings

Information regarding the DeepStream Python bindings can be found here [https://github.com/NVIDIA-AI-IOT/deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps).
You can download ready-to-install packages from here [https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/releases).
@@ -283,7 +291,7 @@ pip3 install pyds-1.1.4-py3-none-linux_x86_64.whl
Replace `pyds-1.1.4-py3-none-linux_x86_64.whl` with the version that you downloaded.
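
A quick way to confirm that the bindings are visible to Python (a simple import check, not specific to this repository):

```bash
python3 -c "import pyds; print(pyds.__file__)"
```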


## 4.3 Install Triton Inference Server
## 5.3 Install Triton Inference Server

Before executing the examples that use Triton, you need to install it locally. First, install the following package(s):

@@ -309,7 +317,7 @@ cd build/install
sudo cp -vr ./backends /opt/tritonserver
```

## 4.4 Set Environment Variables
## 5.4 Set Environment Variables

Triton libraries need to be discoverable by the dynamic library loader:

@@ -337,7 +345,7 @@ If it cannot be found, but it is installed, you can add it to the path:
export PATH=${PATH}:/usr/src/tensorrt/bin/
```
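
To confirm that the binary is now discoverable (a generic check; the exact verification used later in this guide may differ):

```bash
which trtexec
```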

## 4.5 Build the Model Repo
## 5.5 Build the Model Repo

We will use the models shipped with the DeepStream SDK. However, first make sure that `trtexec` is found:

@@ -353,7 +361,7 @@ cd /opt/nvidia/deepstream/deepstream/samples
```


## 4.6 Testing Triton Installation
## 5.6 Testing Triton Installation

Test that the `nvinferserver` plugin can be found

26 changes: 8 additions & 18 deletions deepstream-examples/deepstream-retinaface/retinaface.py
@@ -7,22 +7,10 @@

import sys
import gi
import numpy as np
import argparse
import contextlib
import time
from functools import partial

gi.require_version('Gst', '1.0')
from gi.repository import Gst

#@contextlib.contextmanager
#def nvtx_range(msg):
# depth = torch.cuda.nvtx.range_push(msg)
# try:
# yield depth
# finally:
# torch.cuda.nvtx.range_pop()
gi.require_version("Gst", "1.0")
from gi.repository import Gst # noqa: E402


if __name__ == "__main__":
@@ -35,7 +23,7 @@
    if args.input_file == "":
        sys.exit("No input file has been given!")

    pipeline_definition = f'''
    pipeline_definition = f"""
        filesrc location={args.input_file} !
        qtdemux !
        queue !
@@ -46,7 +34,7 @@
        nvvideoconvert !
        nvdsosd !
        queue !
        nveglglessink'''
        nveglglessink"""

    print("--- PIPELINE DEFINITION ---")
    print(pipeline_definition)
@@ -57,9 +45,11 @@

    try:
        while True:
            msg = pipeline.get_bus().timed_pop_filtered(Gst.SECOND, Gst.MessageType.EOS | Gst.MessageType.ERROR)
            msg = pipeline.get_bus().timed_pop_filtered(
                Gst.SECOND, Gst.MessageType.EOS | Gst.MessageType.ERROR
            )
            if msg:
                text = msg.get_structure().to_string() if msg.get_structure() else ''
                text = msg.get_structure().to_string() if msg.get_structure() else ""
                msg_type = Gst.message_type_get_name(msg.type)
                print(f"{msg.src.name}: [{msg.type}] {text}")
                break