Merge pull request #1 from chloelavrat/clavrat/proto-crepe (clavrat/proto-crepe into main)
Showing 22 changed files with 2,357 additions and 203 deletions.
`@@ -0,0 +1,62 @@` (new CD workflow file):

```yaml
name: CD

on:
  push:
    branches:
      - main
      - clavrat/proto-crepe
    tags:
      - '*.*.*' # Adjust this pattern to match your tag format

jobs:
  build_and_release:
    name: Build and Upload Release
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.12'

      - name: Install Poetry
        run: |
          pip install poetry

      - name: Build the package
        run: |
          poetry build

      - name: Create Release
        id: create_release
        uses: actions/create-release@v1.0.0
        env:
          GITHUB_TOKEN: ${{ secrets.PAT_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          draft: false
          prerelease: false

      - name: Get Name of Artifact
        run: |
          ARTIFACT_PATHNAME=$(ls dist/*.whl | head -n 1)
          ARTIFACT_NAME=$(basename "$ARTIFACT_PATHNAME")
          echo "ARTIFACT_PATHNAME=${ARTIFACT_PATHNAME}" >> "$GITHUB_ENV"
          echo "ARTIFACT_NAME=${ARTIFACT_NAME}" >> "$GITHUB_ENV"

      - name: Upload Whl to Release Assets
        id: upload-release-asset
        uses: actions/upload-release-asset@v1.0.2
        env:
          GITHUB_TOKEN: ${{ secrets.PAT_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ${{ env.ARTIFACT_PATHNAME }}
          asset_name: ${{ env.ARTIFACT_NAME }}
          asset_content_type: application/x-wheel+zip
```
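The "Get Name of Artifact" step above can be exercised locally to confirm the wheel path and file name it exports. The sketch below simulates it with a hypothetical wheel file; the name `crepe-0.1.0-py3-none-any.whl` is made up for illustration:

```shell
# Simulate the artifact-discovery step outside of GitHub Actions.
mkdir -p dist
touch dist/crepe-0.1.0-py3-none-any.whl  # hypothetical wheel built by `poetry build`

ARTIFACT_PATHNAME=$(ls dist/*.whl | head -n 1)
ARTIFACT_NAME=$(basename "$ARTIFACT_PATHNAME")

echo "path: $ARTIFACT_PATHNAME"
echo "name: $ARTIFACT_NAME"
```

In the workflow these two values are appended to `$GITHUB_ENV`, which is how the later upload step can read them as `env.ARTIFACT_PATHNAME` and `env.ARTIFACT_NAME`.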
`@@ -0,0 +1,28 @@` (new CI workflow file):

```yaml
name: CI

on: [push, pull_request]

jobs:
  linter:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Lint with flake8
        run: |
          pip install flake8
          flake8 ./crepe --count --select=E9,F63,F7,F82 --show-source --statistics
          # flake8 ./crepe --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics

  doc_coverage:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Check documentation coverage with pydocstyle
        run: |
          pip install pydocstyle
          pydocstyle ./crepe
```
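The active flake8 command gates the build only on serious defects: E9 syntax/indentation errors and the F63/F7/F82 families (comparison misuse, syntax-level mistakes, undefined names), while the commented-out second command is an advisory style pass. A small sketch of the class of error the gate catches, using Python's own `compile()` (E9 codes correspond to source that fails to parse; the snippet is illustrative, not from the repository):

```python
# E9xx flake8 errors flag code that does not even parse;
# Python's compile() rejects the same input with a SyntaxError.
bad_source = "def f(:\n    pass\n"  # malformed parameter list

try:
    compile(bad_source, "<example>", "exec")
    compiles = True
except SyntaxError:
    compiles = False

print(compiles)  # False: a file like this would fail the CI gate
```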
`@@ -1,2 +1,76 @@` (rewritten; the previous two lines were):

    # TorchCrepe
    Implementation of the crepe pitch extractor in PyTorch
<div align="center">
  <img src="TorchCREPE_banner.png" alt="Banner" style="border-radius: 17px; width: 100%; max-width: 800px; height: auto;">
</div>

<h3 align="center">
  <b><a href="https://torch-crepe-demo.chloelavrat.com">Interactive Demo</a></b>
  •
  <b><a href="https://www.youtube.com">Video</a></b>
  •
  <b><a href="">Python API</a></b>
</h3>

<div align="center">
  <a href="https://opensource.org/licenses/MIT">
    <img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License">
  </a>
  <img src="https://img.shields.io/badge/python-3.12.4-blue.svg" alt="Python Versions">
  <a href="https://github.com/chloelavrat/TorchCrepe/actions/workflows/CI.yml">
    <img src="https://github.com/chloelavrat/TorchCrepe/actions/workflows/CI.yml/badge.svg" alt="CI">
  </a>
</div>
<p align="center">The <b>Torch-CREPE</b> project re-implements the CREPE pitch estimation model in PyTorch, enabling its optimization and adaptation for real-time voice pitch detection tasks. Re-implementing this deep learning-based system opens up new research possibilities for music signal processing and audio analysis applications.</p>

## How it Works

The **PyTorch CREPE** implementation uses the **Torch** and **Torchaudio** libraries to process and analyze audio signals. The project's core functionality is based on the CREPE model, which estimates fundamental frequencies from audio data.

The model achieves this by classifying 20 ms audio chunks into 350 classes, each representing a pitch value in cents across the range of observable fundamental frequencies.
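Because the classes are spaced in cents, the mapping between class value and frequency is logarithmic. A minimal sketch of the usual cents/Hz conversion, assuming the original CREPE convention of measuring cents relative to a 10 Hz reference (this repository's exact reference may differ):

```python
import math

F_REF = 10.0  # reference frequency (Hz); assumption taken from the original CREPE paper

def hz_to_cents(hz: float) -> float:
    """Frequency in Hz to cents above the reference."""
    return 1200.0 * math.log2(hz / F_REF)

def cents_to_hz(cents: float) -> float:
    """Inverse mapping: cents above the reference back to Hz."""
    return F_REF * 2.0 ** (cents / 1200.0)

# A4 (440 Hz) sits about 6551.3 cents above the 10 Hz reference,
# and the conversion round-trips.
print(round(hz_to_cents(440.0), 1))               # 6551.3
print(round(cents_to_hz(hz_to_cents(440.0)), 6))  # 440.0
```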
## Features

- **Real-time pitch detection:** processing runs in real time using the provided script.
- **Optimized for instruments and voices:** trained on both instruments and voices to cover the main use cases.
- **Deep learning-based:** a full PyTorch implementation.
- **Fast integration** with the Torchaudio library.
- **Trainable on a consumer GPU** (the complete training run was done on an RTX 3080).
## Run app locally

To run the PyTorch CREPE demo locally, you can use the following Python code:

```py
import torchaudio
from crepe.model import crepe
from crepe.utils import load_test_file

# Use a distinct variable name so the imported `crepe` class is not shadowed.
model = crepe(model_capacity="tiny", device='cpu')

audio, sr = load_test_file()

time, frequency, confidence, activation = model.predict(
    audio=audio,
    sr=sr
)
```
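`predict` slices the signal into short overlapping chunks internally (the README describes 20 ms windows). A stdlib-only sketch of the framing arithmetic, where the 10 ms hop is an assumption for illustration rather than a value taken from this repository:

```python
def frame_count(n_samples: int, sr: int, win_ms: float = 20.0, hop_ms: float = 10.0) -> int:
    """Number of complete analysis windows obtainable from a signal."""
    win = int(sr * win_ms / 1000.0)  # samples per window
    hop = int(sr * hop_ms / 1000.0)  # samples per hop
    if n_samples < win:
        return 0
    return 1 + (n_samples - win) // hop

# One second of 16 kHz audio: 320-sample windows every 160 samples.
print(frame_count(16000, 16000))  # 99
```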
## Python API

For detailed documentation of the PyTorch CREPE implementation, including the API and usage guidelines, please refer to [this link].

## Train

The model is still in my training queue, so only the 'tiny' version of **Crepe** has been trained so far.

## Datasets

[MIR-1K](http://mirlab.org/dataset/public/MIR-1K.zip)

## Contributing

This is an open-source project, and contributions are always welcome. If you would like to contribute, you can do so by submitting a pull request or by creating an issue on the project's GitHub page.

## License

This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.