Merge pull request #2 from neuralabc/enh-simplified_list_fns
adds simplified model_comp_simplified (and compute_average_simplified) code for working with lists of lists instead of directory structure.
steelec authored Mar 14, 2024
2 parents 41b28ff + 67c6ee4 commit 515d65b
Showing 5 changed files with 465 additions and 1 deletion.
12 changes: 12 additions & 0 deletions README.md
@@ -6,6 +6,8 @@ A set of functions to compute Mahalanobis D (D**2) on multimodal MRI images in a

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10713027.svg)](https://doi.org/10.5281/zenodo.10713027)

[Biorxiv link](https://www.biorxiv.org/content/10.1101/2024.02.27.582381v1)

Reference:
```
Tremblay, SA, Alasmar, Z, Pirhadi, A, Carbonell, F, Iturria-Medina, Y, Gauthier, C, Steele, CJ, (under review) MVComp toolbox: MultiVariate Comparisons of brain MRI features accounting for common information across metrics (submitted)
@@ -50,8 +52,18 @@ Depending on the application, a different set of functions should be used. See c

- `voxel2voxel_dist` : To compute D2 between each voxel and all other voxels in a mask. Yields a symmetric 2-D matrix of size n voxels x n voxels containing D2 values between each pair of voxels.
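The pairwise Mahalanobis D2 computation underlying this matrix can be sketched in a few lines of NumPy. This is an illustrative re-derivation on random data, not mvcomp's own implementation; `voxel2voxel_dist` additionally handles masking and image I/O:

```python
import numpy as np

# Each row is one voxel described by a vector of features (here: random data).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))               # 5 voxels x 3 features

# Inverse of the feature covariance, pooled across voxels.
VI = np.linalg.inv(np.cov(X, rowvar=False))   # 3 x 3

# Pairwise feature differences and the quadratic form diff' * VI * diff,
# yielding a symmetric n_voxels x n_voxels matrix of D2 values.
diff = X[:, None, :] - X[None, :, :]          # (5, 5, 3)
D2 = np.einsum("ijk,kl,ijl->ij", diff, VI, diff)
```

The result is symmetric with a zero diagonal, matching the matrix described above.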

## Dependencies

The `mvcomp` core functionality requires only `numpy`, `matplotlib`, `nibabel`, and `nilearn`.

# Extras

## Simplified calling based only on lists of lists of filenames

While the core functionality is built around a filename and data-storage structure that _should_ help reduce human error, simplified versions of two of the main functions (`compute_average` and `model_comp`) are also provided that operate on lists of lists (subject X feature). These functions perform _little to no error-checking_ and have no way to verify that the correct data were supplied in the correct order; that responsibility lies entirely with the user. They are more flexible, working with data stored in any accessible location under any filename convention (as long as the files are readable by nibabel), and they have fewer required inputs to streamline the workflow for users familiar with the approach. The user must take care that the **ordering of inputs is correct and consistent across calls**.
- `compute_average_simplified` : takes a list (subjects) of lists (features); otherwise works identically to `compute_average`. Feature names are auto-generated as indices if not provided.
- `model_comp_simplified` : takes a list (subjects) of lists (features); otherwise the core functionality works identically to `model_comp`. Leave-one-out D2 computation is supported by not specifying a list of model features. Subject IDs are auto-generated as indices if not provided.
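A minimal sketch of the expected subject X feature input structure (the paths, subject IDs, and feature names below are hypothetical, and the commented-out call stands in for the real invocation; see the function docstrings for the actual arguments):

```python
# Hypothetical subjects and features; substitute your own.
subjects = ["sub-01", "sub-02", "sub-03"]
features = ["FA", "MD", "MTR"]

# One inner list of feature filenames per subject. The feature order must be
# identical for every subject and consistent across all calls.
file_lists = [
    [f"/data/{sub}/{feat}.nii.gz" for feat in features]
    for sub in subjects
]

# from mvcomp import compute_average_simplified
# avg = compute_average_simplified(file_lists)  # hypothetical call signature
```

Because no error-checking is performed, a swapped feature order in one inner list would silently corrupt the result, so it is worth building these lists programmatically as above rather than by hand.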

## Code examples
- jupyter notebooks in `./examples/*.ipynb`

1 change: 1 addition & 0 deletions __init__.py
@@ -1,4 +1,5 @@
from .mvcomp import mysort, feature_gen, norm_covar_inv, mah_dist_feat_mat, mah_dist_mat_2_roi, subject_list, feature_list, compute_average, model_comp, spatial_mvcomp, dist_plot, correlation_fig
from .mvcomp import model_comp_simplified, compute_average_simplified
from .version import __version__

# initial hack to not import optional plotting functions if necessary packages do not exist
