
neuro-galaxy/brainsets


Documentation | Join our Discord community


brainsets is a Python package for processing neural data into a standardized format.

Installation

brainsets supports Python 3.8 through Python 3.11.

To install the package, run the following command:

pip install brainsets

List of available brainsets

| brainset_id | Brainset Card | Raw Data Size | Processed Data Size |
|---|---|---|---|
| churchland_shenoy_neural_2012 | Link | 46 GB | 25 GB |
| flint_slutzky_accurate_2012 | Link | 3.2 GB | 151 MB |
| odoherty_sabes_nonhuman_2017 | Link | 22 GB | 26 GB |
| pei_pandarinath_nlb_2021 | Link | 688 KB | 22 MB |
| perich_miller_population_2018 | Link | 13 GB | 2.9 GB |
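Since preparing a brainset downloads raw data and writes processed files, it can help to budget disk space up front. A quick sketch that totals the figures from the table above (the numbers are copied from the table and converted to GB; the totals are approximate, not exact on-disk measurements):

```python
# Back-of-envelope disk budget from the size table above.
# 151 MB = 0.151 GB, 688 KB ≈ 6.88e-4 GB, 22 MB = 0.022 GB.
sizes_gb = {
    # brainset_id: (raw_gb, processed_gb)
    "churchland_shenoy_neural_2012": (46.0, 25.0),
    "flint_slutzky_accurate_2012": (3.2, 0.151),
    "odoherty_sabes_nonhuman_2017": (22.0, 26.0),
    "pei_pandarinath_nlb_2021": (6.88e-4, 0.022),
    "perich_miller_population_2018": (13.0, 2.9),
}

raw_total = sum(raw for raw, _ in sizes_gb.values())
processed_total = sum(proc for _, proc in sizes_gb.values())
print(f"raw: ~{raw_total:.1f} GB, processed: ~{processed_total:.1f} GB")
# → raw: ~84.2 GB, processed: ~54.1 GB
```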

Acknowledgements

This work is made possible by the original researchers' public release of these valuable datasets. If you use any of the datasets processed by brainsets in your research, please cite the appropriate original papers and follow any usage guidelines specified by the dataset creators. Proper attribution credits the researchers who collected and shared the data, and helps promote open science practices in the neuroscience community. The original papers and usage guidelines for each dataset are listed in the brainsets documentation.

Using the brainsets CLI

Configuring data directories

First, configure the directories where brainsets will store raw and processed data:

brainsets config

You will be prompted to enter the paths to the raw and processed data directories.

$> brainsets config
Enter raw data directory: ./data/raw
Enter processed data directory: ./data/processed

You can update the configuration at any time by running the config command again.
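It can be convenient to create the directories up front, so the paths you enter at the prompt already exist (a sketch using the example paths from the session above; whether brainsets creates missing directories itself is not covered here):

```shell
# Create the raw and processed data directories before running `brainsets config`
mkdir -p ./data/raw ./data/processed
```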

Listing available datasets

You can list the available datasets by running the list command:

brainsets list

Preparing data

You can prepare a dataset by running the prepare command:

brainsets prepare <brainset>

Data preparation downloads the raw data from its source and then processes it, following a set of rules defined in pipelines/<brainset>/.

For example, to prepare the Perich & Miller (2018) dataset, you can run:

brainsets prepare perich_miller_population_2018 --cores 8
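To prepare several datasets in one go, you can loop over brainset ids in the shell. A sketch, using the ids from the table above and the documented --cores option (note the combined download is tens of GB):

```shell
# Prepare each listed brainset in turn, using 8 cores for processing.
for id in churchland_shenoy_neural_2012 \
          flint_slutzky_accurate_2012 \
          odoherty_sabes_nonhuman_2017 \
          pei_pandarinath_nlb_2021 \
          perich_miller_population_2018; do
    brainsets prepare "$id" --cores 8
done
```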

Contributing

If you are planning to contribute to the package, install it in development mode:

pip install -e ".[dev]"

Then install the pre-commit hooks:

pre-commit install

Unit tests are located under test/. Run the entire test suite with

pytest

or run an individual file, e.g., pytest test/test_enum_unique.py
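Standard pytest options work as usual; for example (-x, -q, and -k are stock pytest flags, not brainsets-specific):

```shell
# Stop at the first failure, with quieter output
pytest -x -q

# Run only tests whose names match a keyword expression
pytest -k "enum"
```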

Cite

Please cite our paper if you use this code in your own work:

@inproceedings{azabou2023unified,
    title={A Unified, Scalable Framework for Neural Population Decoding},
    author={Mehdi Azabou and Vinam Arora and Venkataramana Ganesh and Ximeng Mao and Santosh Nachimuthu and Michael Mendelson and Blake Richards and Matthew Perich and Guillaume Lajoie and Eva L. Dyer},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
    year={2023},
}