Continuous Integration Tests #129
Changes from 20 commits
@@ -0,0 +1,32 @@
name: continuous-integration

on: [push]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest]
        #os: [ubuntu-latest, windows-latest, macos-latest] # remove mac tests
        # Requirements file generated with python=3.11
        python-version: ["3.11"]
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt # test with requirements file so can easily bump with dependabot
          pip install .

      - name: Compile cython module
        run: python setup.py build_ext --inplace

      - name: Test
        run: |
          python -m pytest tests/
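A review comment on this PR asks whether CI could also run on Windows and macOS. The workflow already carries the cross-platform matrix as a commented-out line; re-enabling it would be a sketch along these lines (assuming, untested here, that the Cython build and the test suite pass on those runners):

```yaml
# Sketch only: restore the multi-OS matrix from the commented-out line above.
# Assumes the Cython extension compiles on windows-latest/macos-latest runners.
strategy:
  matrix:
    os: [ubuntu-latest, windows-latest, macos-latest]
    python-version: ["3.11"]
```

GitHub Actions runs one job per matrix combination, so this would produce three test jobs per push.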
@@ -0,0 +1,9 @@
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/" # Location of your pyproject.toml or requirements.txt
    schedule:
      interval: "weekly" # Checks for updates every week
    commit-message:
      prefix: "deps" # Prefix for pull request titles
    open-pull-requests-limit: 5 # Limit the number of open PRs at a time
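Dependabot can also keep the pinned actions in the workflow (`actions/checkout@v4`, `actions/setup-python@v5`) up to date. This is a hypothetical addition, not part of this PR, using Dependabot's standard `github-actions` ecosystem:

```yaml
  # Hypothetical second update block: track action versions used in .github/workflows.
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```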
@@ -0,0 +1,51 @@
[build-system]
requires = ["setuptools", "wheel", "numpy", "cython"] # Dependencies needed to build the package
build-backend = "setuptools.build_meta"

[project]
name = "pyprophet"
version = "2.2.8"
description = "PyProphet: Semi-supervised learning and scoring of OpenSWATH results."
readme = { file = "README.md", content-type = "text/markdown" }
license = { text = "BSD" }
authors = [{ name = "The PyProphet Developers", email = "rocksportrocker@gmail.com" }]
classifiers = [
    "Development Status :: 3 - Alpha",
    "Environment :: Console",
    "Intended Audience :: Science/Research",
    "License :: OSI Approved :: BSD License",
    "Operating System :: OS Independent",
    "Topic :: Scientific/Engineering :: Bio-Informatics",
    "Topic :: Scientific/Engineering :: Chemistry"
]
keywords = ["bioinformatics", "openSWATH", "mass spectrometry"]

# Dependencies required for runtime
dependencies = [
    "Click",
    "duckdb",
    "duckdb-extensions",
    "duckdb-extension-sqlite-scanner",
Comment on lines +26 to +28:

Reviewer: duckdb is currently only used for OSW-to-parquet exporting, right? I'm thinking we could create a separate optional dependency, so that someone who wants to export to parquet can install it separately.

Author: From my initial tests, duckdb tends to speed up SQLite statements with many table joins, so I was thinking of extending its usage to scoring and TSV exporting, as only minimal changes are required to do this.
    "numpy >= 1.9.0",
    "scipy",
    "pandas >= 0.17",
    "cython",
    "numexpr >= 2.10.1",
    "scikit-learn >= 0.17",
    "xgboost",
    "hyperopt",
    "statsmodels >= 0.8.0",
    "matplotlib",
    "tabulate",
    "pyarrow",
    "pypdf"
]

# Define console entry points
[project.scripts]
pyprophet = "pyprophet.main:cli"

[tool.setuptools]
packages = { find = { exclude = ["ez_setup", "examples", "tests"] } }
include-package-data = true
zip-safe = false
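The review thread above suggests splitting duckdb out of the core install. In `pyproject.toml` that would be an `[project.optional-dependencies]` table; the sketch below uses a hypothetical extra name (`parquet`) that is not part of this PR:

```toml
# Hypothetical sketch: move the duckdb packages out of the core dependencies
# into an extra, installable as `pip install pyprophet[parquet]`.
[project.optional-dependencies]
parquet = [
    "duckdb",
    "duckdb-extensions",
    "duckdb-extension-sqlite-scanner",
]
```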
@@ -33,7 +33,12 @@ def statistics_report(data, outfile, context, analyte, parametric, pfdr, pi0_lam
         outfile = outfile + "_" + str(data['run_id'].unique()[0])

     # export PDF report
-    save_report(outfile + "_" + context + "_" + analyte + ".pdf", outfile + ": " + context + " " + analyte + "-level error-rate control", data[data.decoy==1]["score"], data[data.decoy==0]["score"], stat_table["cutoff"], stat_table["svalue"], stat_table["qvalue"], data[data.decoy==0]["p_value"], pi0, color_palette)
+    save_report(outfile + "_" + context + "_" + analyte + ".pdf",
+                outfile + ": " + context + " " + analyte + "-level error-rate control",
+                data[data.decoy==1]["score"].values, data[data.decoy==0]["score"].values, stat_table["cutoff"].values,
+                stat_table["svalue"].values, stat_table["qvalue"].values, data[data.decoy==0]["p_value"].values,
+                pi0,
+                color_palette)

     return(data)
@@ -184,7 +189,7 @@ def infer_proteins(infile, outfile, context, parametric, pfdr, pi0_lambda, pi0_m
     con.close()

     if context == 'run-specific':
-        data = data.groupby('run_id').apply(statistics_report, outfile, context, "protein", parametric, pfdr, pi0_lambda, pi0_method, pi0_smooth_df, pi0_smooth_log_pi0, lfdr_truncate, lfdr_monotone, lfdr_transformation, lfdr_adj, lfdr_eps, color_palette).reset_index()
+        data = data.groupby('run_id').apply(statistics_report, outfile, context, "protein", parametric, pfdr, pi0_lambda, pi0_method, pi0_smooth_df, pi0_smooth_log_pi0, lfdr_truncate, lfdr_monotone, lfdr_transformation, lfdr_adj, lfdr_eps, color_palette)
Reviewer: Is `reset_index` no longer needed here?

Author: Removing `reset_index` is required to prevent an error (same as below); it must be due to a change in pandas' groupby functionality.

Reviewer: Yeah, seems like it; some groupby deprecations occurred in pandas v2.2.0. If you don't mind, would you be able to test with a version prior to pandas v2.2.0, to see whether the old code still works with the `reset_index` call?
     elif context in ['global', 'experiment-wide']:
         data = statistics_report(data, outfile, context, "protein", parametric, pfdr, pi0_lambda, pi0_method, pi0_smooth_df, pi0_smooth_log_pi0, lfdr_truncate, lfdr_monotone, lfdr_transformation, lfdr_adj, lfdr_eps, color_palette)

@@ -257,7 +262,7 @@ def infer_peptides(infile, outfile, context, parametric, pfdr, pi0_lambda, pi0_m
     con.close()

     if context == 'run-specific':
-        data = data.groupby('run_id').apply(statistics_report, outfile, context, "peptide", parametric, pfdr, pi0_lambda, pi0_method, pi0_smooth_df, pi0_smooth_log_pi0, lfdr_truncate, lfdr_monotone, lfdr_transformation, lfdr_adj, lfdr_eps, color_palette).reset_index()
+        data = data.groupby('run_id').apply(statistics_report, outfile, context, "peptide", parametric, pfdr, pi0_lambda, pi0_method, pi0_smooth_df, pi0_smooth_log_pi0, lfdr_truncate, lfdr_monotone, lfdr_transformation, lfdr_adj, lfdr_eps, color_palette)
     elif context in ['global', 'experiment-wide']:
         data = statistics_report(data, outfile, context, "peptide", parametric, pfdr, pi0_lambda, pi0_method, pi0_smooth_df, pi0_smooth_log_pi0, lfdr_truncate, lfdr_monotone, lfdr_transformation, lfdr_adj, lfdr_eps, color_palette)
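The reviewers attribute the dropped `reset_index()` to a pandas groupby change. The failure mode itself is easy to reproduce in isolation: after `groupby('run_id').apply(...)`, the group key ends up in the result's index while (in pandas 2.2's default behavior) `run_id` also remains a column, so `reset_index()` cannot re-insert it. A minimal sketch with hypothetical data, not PyProphet's:

```python
import pandas as pd

# Hypothetical data illustrating the collision: the index carries a name
# that already exists as a column, as groupby.apply can produce.
df = pd.DataFrame({"run_id": [1, 1, 2], "score": [0.1, 0.2, 0.3]})
df.index.name = "run_id"  # simulate the group key landing in the index

try:
    df.reset_index()  # tries to insert a "run_id" column that already exists
except ValueError as err:
    print(err)  # e.g. "cannot insert run_id, already exists"
```

Dropping `reset_index()` sidesteps the collision by simply leaving the group key in the index, which is what this diff does.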
@@ -0,0 +1,118 @@
#
# This file is autogenerated by pip-compile with Python 3.11
# by the following command:
#
#    pip-compile --all-extras --output-file=requirements.txt
#
click==8.1.7
    # via pyprophet (setup.py)
cloudpickle==3.1.0
    # via hyperopt
contourpy==1.3.0
    # via matplotlib
cycler==0.12.1
    # via matplotlib
cython==3.0.11
    # via pyprophet (setup.py)
duckdb==1.1.3
    # via
    #   duckdb-extension-sqlite-scanner
    #   duckdb-extensions
    #   pyprophet (setup.py)
duckdb-extension-sqlite-scanner==1.1.3
    # via pyprophet (setup.py)
duckdb-extensions==1.1.3
    # via pyprophet (setup.py)
fonttools==4.55.0
    # via matplotlib
future==1.0.0
    # via hyperopt
hyperopt==0.2.7
    # via pyprophet (setup.py)
iniconfig==2.0.0
    # via pytest
joblib==1.4.2
    # via scikit-learn
kiwisolver==1.4.7
    # via matplotlib
matplotlib==3.9.2
    # via pyprophet (setup.py)
networkx==3.2.1
    # via hyperopt
numexpr==2.10.1
    # via pyprophet (setup.py)
numpy==2.0.2
    # via
    #   contourpy
    #   hyperopt
    #   matplotlib
    #   numexpr
    #   pandas
    #   patsy
    #   pyprophet (setup.py)
    #   scikit-learn
    #   scipy
    #   statsmodels
    #   xgboost
nvidia-nccl-cu12==2.23.4
    # via xgboost
packaging==24.2
    # via
    #   matplotlib
    #   pytest
    #   statsmodels
pandas==2.2.3
    # via
    #   pyprophet (setup.py)
    #   statsmodels
patsy==1.0.1
    # via statsmodels
pillow==11.0.0
    # via matplotlib
pluggy==1.5.0
    # via pytest
py4j==0.10.9.7
    # via hyperopt
pyarrow==18.0.0
    # via pyprophet (setup.py)
pyparsing==3.2.0
    # via matplotlib
pypdf==5.1.0
    # via pyprophet (setup.py)
pytest==8.3.3
    # via
    #   pyprophet (setup.py)
    #   pytest-regtest
pytest-regtest==2.3.3
    # via pyprophet (setup.py)
python-dateutil==2.9.0.post0
    # via
    #   matplotlib
    #   pandas
pytz==2024.2
    # via pandas
scikit-learn==1.5.2
    # via pyprophet (setup.py)
scipy==1.13.1
    # via
    #   hyperopt
    #   pyprophet (setup.py)
    #   scikit-learn
    #   statsmodels
    #   xgboost
six==1.16.0
    # via
    #   hyperopt
    #   python-dateutil
statsmodels==0.14.4
    # via pyprophet (setup.py)
tabulate==0.9.0
    # via pyprophet (setup.py)
threadpoolctl==3.5.0
    # via scikit-learn
tqdm==4.67.0
    # via hyperopt
tzdata==2024.2
    # via pandas
xgboost==2.1.2
    # via pyprophet (setup.py)
Reviewer: Is it possible to run the CI on Windows and macOS as well, or does it not work?