Install dependencies with uv
We've been installing Python dependencies with pip, without tracking
their versions. Since we've started using uv in some other
infrastructure team Python projects, it makes sense to add it here so
that speech-to-text can be tracked by the infra team's weekly dependency
update process.
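
With uv, dependencies move out of `requirements.txt` into a `pyproject.toml` and get pinned in a committed `uv.lock`. A minimal sketch of what that declaration looks like (the name, version, and dependency list here are illustrative, not this project's actual metadata):

```toml
[project]
name = "speech-to-text"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "openai-whisper",
]
```

`uv lock` resolves these into `uv.lock`, and `uv lock --upgrade` refreshes the pins, which is what the weekly update job runs.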

Unlike pip, uv always installs system-specific Python wheels. There
wasn't a wheel available for triton (a dependency of openai-whisper)
under Python 3.8, so the installation of dependencies failed.

So, in addition to adding uv, this PR also upgrades our base Docker
image from `nvidia/cuda:12.1.0-devel-ubuntu20.04` to
`nvidia/cuda:12.8.0-cudnn-devel-ubuntu22.04`, which allows us to install
Python 3.10 when we `apt install python3`.

Since this significantly changes Whisper's behavior, I wanted to be able
to compare the VTT transcript output before and after the Docker image
change. I added the start of a benchmarking system that will allow us to
compare the output for a set of 22 SDR items with a previous benchmark.
Ideally this benchmark would be human vetted, and would actually
represent a ground truth for what we believe the transcript should be.
But for the time being it is simply a snapshot of what the transcripts
looked like today. See the docs/README.md file for details.
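
As a rough illustration of the kind of before/after comparison the benchmark enables, the caption text of two VTT files can be stripped of timing cues and scored with a similarity ratio. This is a sketch only, not the actual `report.py` code from this commit:

```python
import difflib

def vtt_text(vtt: str) -> str:
    """Strip the WEBVTT header and cue timestamps, keeping only caption text."""
    kept = []
    for line in vtt.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or "-->" in line:
            continue
        kept.append(line)
    return " ".join(kept)

def similarity(baseline_vtt: str, new_vtt: str) -> float:
    """Ratio in [0, 1]; 1.0 means the caption text is identical."""
    return difflib.SequenceMatcher(
        None, vtt_text(baseline_vtt), vtt_text(new_vtt)
    ).ratio()

baseline = """WEBVTT

00:00:00.000 --> 00:00:02.000
hello world
"""

new = """WEBVTT

00:00:00.000 --> 00:00:02.000
hello whole world
"""
```

A ratio near 1.0 suggests the dependency change left the transcript essentially unchanged, while a low ratio flags an item worth human review.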

Closes #80
Refs #65
edsu committed Jan 31, 2025
1 parent 26fc976 commit 13e72ae
Showing 182 changed files with 190,241 additions and 26 deletions.
17 changes: 17 additions & 0 deletions .autoupdate/preupdate
@@ -0,0 +1,17 @@

#!/bin/bash

# This script is called by our weekly dependency update job in Jenkins

pip3 install uv > speech-to-text.txt &&
~/.local/bin/uv lock --upgrade >> speech-to-text.txt

retVal=$?

if [ $retVal -ne 0 ]; then
    echo "ERROR UPDATING PYTHON (speech-to-text)"
    cat speech-to-text.txt
else
    git add uv.lock &&
    git commit -m "Update Python dependencies"
fi
12 changes: 6 additions & 6 deletions .github/workflows/test.yml
Expand Up @@ -2,11 +2,11 @@ name: Test
on:
- push
jobs:
build:
test:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8]
python-version: [3.11]
steps:

- name: checkout
Expand All @@ -31,15 +31,15 @@ jobs:
run: |
wget -O - https://raw.githubusercontent.com/jontybrook/ffmpeg-install-script/main/install-ffmpeg-static.sh | bash -s -- --stable --force
- name: Install Python dependencies
- name: Install uv
run: |
pip install -r requirements.txt
pip install uv
- name: Run type checking
run: mypy .
run: uv run mypy .

- name: Run tests
run: pytest --cov-branch --cov-report=xml
run: uv run pytest --cov-branch --cov-report=xml

- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
10 changes: 5 additions & 5 deletions Dockerfile
@@ -1,4 +1,4 @@
-FROM nvidia/cuda:12.1.0-devel-ubuntu20.04
+FROM nvidia/cuda:12.8.0-cudnn-devel-ubuntu22.04

 RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
     python3 \
@@ -10,12 +10,12 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
 WORKDIR /app

 ADD ./whisper_models whisper_models
-ADD ./requirements.txt requirements.txt
+ADD ./pyproject.toml pyproject.toml

 RUN python3 -m pip install --upgrade pip
-RUN python3 -m pip install -r requirements.txt
+RUN python3 -m pip install uv

 ADD ./speech_to_text.py speech_to_text.py
-RUN python3 -m py_compile speech_to_text.py
+RUN uv run python3 -m py_compile speech_to_text.py

-ENTRYPOINT ["python3", "speech_to_text.py"]
+ENTRYPOINT ["uv", "run", "speech_to_text.py"]
4 changes: 4 additions & 0 deletions README.md
Expand Up @@ -303,6 +303,10 @@ When updating the base Docker image, in order to prevent random segmentation fau
1. You are using an [nvidia/cuda](https://hub.docker.com/r/nvidia/cuda) base Docker image.
2. The version of CUDA you are using in the Docker container aligns with the version of CUDA that is installed in the host operating system that is running Docker.

## Benchmarking

It may be useful to compare the output of speech-to-text with previous runs. See docs/README.md for more about that.

## Linting and Type Checking

You may notice your changes fail in CI if they require reformatting or fail type checking. We use [ruff](https://docs.astral.sh/ruff/) for formatting Python code, and [mypy](https://mypy-lang.org/) for type checking. Both of those should be present in your virtual environment.
16 changes: 16 additions & 0 deletions docs/README.md
@@ -0,0 +1,16 @@
# Benchmark speech-to-text

If you make changes to speech-to-text's software dependencies, especially the base nvidia/cuda Docker image or the Whisper software, it's a good idea to see what divergence there might be in the transcripts that are generated. To do this properly we should send the media through the complete speech-to-text workflow in the SDR, since it has certain options configured for Whisper as well as pre- and post-processing.

22 items have been deposited into the SDR development environment, and tagged so that they can be easily queued up for processing: https://github.com/sul-dlss/speech-to-text/wiki/Load-testing-the-speech%E2%80%90to%E2%80%90text-workflow

The VTT file output for these transcripts as of 2025-01-31 has been stored in the `baseline-transcripts` directory. If you are making major updates and would like to examine the differences in output you can:

1. Ensure your changes to speech-to-text pass tests locally or in Github.
2. Deploy your changes to the SDR QA environment by tagging a new version, e.g. `git tag rel-2025-01-29`, and pushing the tag to GitHub (`git push --tags`) so that the release GitHub Action runs. If this succeeds your changes will be live in the QA and Stage environments.
3. Use Argo's [bulk action](https://github.com/sul-dlss/speech-to-text/wiki/Load-testing-the-speech%E2%80%90to%E2%80%90text-workflow#running-text-extraction-as-a-bulk-action) to generate transcripts.
4. Run report.py: `python report.py`

You should find an `index.md` Markdown file in a date-stamped directory inside the `reports` directory.

The `baseline` directory contains Cocina JSON and VTT files to use as baseline data. You may want to update these over time as understanding of what to use as a baseline changes.
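
The date-stamped report layout described above can be sketched as follows. The `report.py` internals are not part of this commit's docs, so the helper name and layout here are illustrative only:

```python
from datetime import date
from pathlib import Path

def report_dir(base: str = "reports") -> Path:
    """Create (if needed) and return a date-stamped directory for one report run."""
    d = Path(base) / date.today().isoformat()
    d.mkdir(parents=True, exist_ok=True)
    return d

# The summary file the README says to look for would then live at:
# report_dir() / "index.md"
```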