Commit
We've been installing Python dependencies with pip, without tracking their versions. Since we've started using uv in some other infrastructure team Python projects, it makes sense to add it here as well, so that speech-to-text can be tracked by the infra team's weekly dependency update process.

Unlike pip, uv always installs system-specific Python wheels. There wasn't a wheel available for triton (a dependency of openai-whisper) under Python 3.8, so dependency installation failed. In addition to adding uv, this PR therefore also upgrades our base Docker image from `nvidia/cuda:12.1.0-devel-ubuntu20.04` to `nvidia/cuda:12.8.0-cudnn-devel-ubuntu22.04`, which gives us Python 3.10 when we `apt install python3`.

Since this could significantly change Whisper's behavior, I wanted to be able to compare the VTT transcript output before and after the Docker image change. I added the start of a benchmarking system that lets us compare the output for a set of 22 SDR items against a previous benchmark. Ideally this benchmark would be human-vetted and would represent a ground truth for what we believe the transcripts should be, but for the time being it is simply a snapshot of what the transcripts looked like today. See the benchmark/README.md file for details.

Closes #80
Refs #65
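As a rough sketch of the workflow change this describes (the repository's actual Dockerfile steps are not part of this excerpt, so the commands below are illustrative rather than a copy of them): uv is bootstrapped with pip, dependency versions are pinned in `uv.lock`, and installs then come from that lock file.

```bash
# Illustrative only: installing locked dependencies with uv instead of
# `pip3 install -r requirements.txt`. These are standard uv commands,
# not lines taken from this repository's Dockerfile.
pip3 install uv        # bootstrap uv itself with pip
uv lock                # resolve dependencies and write uv.lock
uv sync --frozen       # install exactly what uv.lock specifies
```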
Showing 181 changed files with 190,076 additions and 26 deletions.
New file (17 lines): the shell script run by the weekly dependency update job in Jenkins.

```bash
#!/bin/bash

# This script is called by our weekly dependency update job in Jenkins

pip3 install uv > speech-to-text.txt &&
~/.local/bin/uv lock --upgrade >> speech-to-text.txt

retVal=$?

git add uv.lock &&
git commit -m "Update Python dependencies"

if [ $retVal -ne 0 ]; then
  echo "ERROR UPDATING PYTHON (speech-to-text)"
  cat speech-to-text.txt
fi
```
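As written, `retVal` captures the exit status of the `pip3 install` / `uv lock --upgrade` chain; the `git add` and `git commit` run either way, and the uv output captured in `speech-to-text.txt` is only echoed when the lock update failed.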
New file: `benchmark/README.md` (16 lines).
```markdown
# Benchmark speech-to-text

If you make changes to speech-to-text's software dependencies, especially the base nvidia/cuda Docker image or the Whisper software, it's good to see what divergence there might be in the transcripts that are generated. To do this properly we should send the media through the complete speech-to-text workflow in the SDR, since it has certain options configured for Whisper as well as pre- and post-processing.

22 items have been deposited into the SDR development environment, and tagged so that they can be easily queued up for processing: https://github.com/sul-dlss/speech-to-text/wiki/Load-testing-the-speech%E2%80%90to%E2%80%90text-workflow

The VTT file output for these transcripts as of 2025-01-31 has been stored in the `baseline-transcripts` directory. If you are making major updates and would like to examine the differences in output you can:

1. Ensure your changes to speech-to-text pass tests locally or in GitHub.
2. Deploy your changes to the SDR QA environment by tagging a new version, e.g. `git tag rel-2025-01-29`, and pushing it to GitHub with `git push --tags` so that the release GitHub Action runs. If this succeeds your changes will be live in the QA and Stage environments.
3. Use Argo's [bulk action](https://github.com/sul-dlss/speech-to-text/wiki/Load-testing-the-speech%E2%80%90to%E2%80%90text-workflow#running-text-extraction-as-a-bulk-action) to generate transcripts.
4. Run `python report.py`.

You should find an `index.md` Markdown file in a date-stamped directory inside the `reports` directory.

The `baseline` directory contains Cocina JSON and VTT files to use as baseline data. You may want to update these over time as our understanding of what to use as a baseline changes.
```
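Beyond the generated report, a quick way to spot-check a single item is a plain diff against the baseline. This is not part of the committed README; the `new-transcripts/` directory and the druid-style filename below are hypothetical stand-ins for wherever you have placed the freshly generated VTT files, not paths created by `report.py`.

```bash
# Illustrative only: compare one baseline transcript against a newly
# generated one that you have fetched locally.
diff -u baseline-transcripts/bb123cd4567.vtt new-transcripts/bb123cd4567.vtt

# Flag every transcript that differs from its baseline at all.
for f in baseline-transcripts/*.vtt; do
  cmp -s "$f" "new-transcripts/$(basename "$f")" || echo "changed: $(basename "$f")"
done
```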