
This Python package processes Variant Call Format (VCF) files and performs lookup operations on GA4GH Variation Representation Specification (VRS) identifiers. VRS identifiers provide a standardized way to represent genomic variation, making it easier to exchange and share genomic information.

In addition, this project facilitates the retrieval of evidence associated with genomic alleles by leveraging the GA4GH Meta-Knowledgebase (MetaKB) service. GA4GH MetaKB provides a comprehensive knowledge base that links genomic variants to relevant clinical variant interpretations.
## Features

- **VCF File Processing:**
  - Streamlines reading and parsing of VCF files to extract relevant genomic information.
- **GA4GH VRS Identifier Lookup:**
  - Uses the GA4GH VRS API to perform lookups for each genomic variation mentioned in the VCF file.
  - Retrieves standardized identifiers for the alleles, enhancing interoperability with GA4GH-compliant systems.
- **GA4GH MetaKB Service Integration:**
  - Uses GA4GH MetaKB to retrieve evidence associated with specified genomic alleles.
- **Output Generation:**
  - Generates summary metrics about throughput, errors, evidence, and hits.
  - Presents the retrieved evidence in a structured format, providing access to information about studies, publications, and other relevant details.
  - Creates cohort allele frequency objects.
- **Additional Features:**
  - Provides configurable options like threading and caching for processing VCFs.
  - Implements robust error handling to address issues like invalid input files, invalid variants, and more.

## Prerequisites

- Python 3.10 or later
- Internet connectivity for data dependency setup (seqrepo)
## Installation

Get the repository either by...

- Source code:

  ```bash
  git clone https://github.com/gks-anvil/vrs_anvil_toolkit
  cd vrs_anvil_toolkit
  ```

- PyPI:

  ```bash
  pip install vrs_anvil_toolkit
  ```
Install dependencies either...

- for local use:

  ```bash
  # install postgresql@14 (required for vrs-python)
  brew install postgresql@14
  bash scripts/setup.sh
  ```

- for use on Terra:

  ```bash
  bash terra/setup.sh
  ```
## Usage

### General

All usage has the following general steps...

1. Create a manifest to configure your VCF processing run
2. Use the `vrs_bulk` CLI to create a metrics file of related evidence
3. Use the metrics files for downstream analysis

The following steps are explained in detail below, with some additional info on using vrs-python to directly annotate VCFs with VRS IDs.
### Manifest

The configuration of each VCF processing run is controlled by a manifest.yaml file. Most importantly, this file specifies the...

- input VCF file(s) to process
- working directories
- performance and strictness configurations

Use this commented sample manifest as a starting point for the specific variables you can specify per run.
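For orientation, a minimal manifest might look like the sketch below. The key names here are illustrative assumptions, not the authoritative schema; defer to the commented sample manifest for the actual variables.

```yaml
# manifest.yaml - illustrative sketch only; the key names below are
# assumptions, see the commented sample manifest for the real schema
vcf_files:                    # input VCF file(s) to process
  - data/cohort.chr1.vcf.gz
work_directory: work/         # working directory for intermediate files
metrics_directory: metrics/   # where summary metrics are written
num_workers: 4                # performance configuration
strict: false                 # strictness configuration
```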
### CLI

Below is a list of command line utilities that may be useful:

```bash
# activate the environment
source venv/bin/activate

# run the vrs_bulk command in the foreground
vrs_bulk annotate

# run the vrs_bulk command in parallel, one process per VCF file
vrs_bulk annotate --scatter

# run the vrs_bulk command in parallel in the background
nohup vrs_bulk annotate --scatter & # press enter to continue

# get the status of the processes for the most recent scatter run
vrs_bulk ps
```
The command line utility supports Google Cloud URIs and running commands in the background to interoperate with Terra out of the box, as described in the CLI usage above. For an example notebook, see vrs-anvil-demo.ipynb in the vrs-anvil workspace.
## Cohort Allele Frequencies

Given a variant and an optional phenotype of interest, get aggregated allele frequency info for a cohort of interest as a cohort allele frequency (CAF) object. This requires...

- Variant of interest
- Valid VRS-annotated joint VCF
  - genotyping laid out per sample
- Precomputed VRS-VCF index (created using vrsix)
  - this enables efficient retrieval of a VCF row by VRS ID
- [Optional] Phenotype of interest to specify a subcohort
- [Optional] Plugins for project-specific transformations (see here for more info)

The main use cases are as follows (see the usage sketch after this list)...

1. Given a variant ID and VCF path, get the CAF for the entire cohort
   - Get the VCF row corresponding to the variant ID using a variant -> VCF row index
   - Get the phenotypes corresponding to each participant using the phenotypes-by-patient table
   - Aggregate counts for participants using their genotypes
   - Create a CAF object using the counts
2. Given a variant ID, VCF path, and participant list, get the CAF for a subset of participants (subcohort)
   - Same as 1, with the subcohort defined by the participant list
3. Given a variant ID, VCF path, and phenotype, get the CAF for a subcohort with a specified phenotype
   - Same as 1, but for a subcohort of samples with the phenotype
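In code, all three use cases funnel through `get_cohort_allele_frequency`. The sketch below is a rough illustration only: the import path and parameter names are assumptions, so consult this package's source and tests for the actual signature.

```python
# Hypothetical usage sketch; import path and parameter names are assumptions.
from vrs_anvil.evidence import get_cohort_allele_frequency  # assumed import path

caf = get_cohort_allele_frequency(
    variant_id="ga4gh:VA.example",  # hypothetical VRS ID of the variant of interest
    vcf_path="cohort.vcf.gz",       # VRS-annotated joint VCF
    phenotype="HP:0001250",         # optional: restrict the cohort to a phenotype
)
print(caf)  # cohort allele frequency (CAF) object
```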
### Plugins

Given the broad variety of data representations used by data generators, we want to be able to generate CAFs for any data consortium. One way this is possible is through the use of a plugin architecture.

A plugin architecture allows users to customize the aggregation of cohort data specific to their data model. This addresses the problems where...

- a user has a project-specific phenotype data model
  - Example: sample-level rare disease data is stored in an unaggregated Terra data table
- a user is interested in a subset of the cohort based on particular filters
  - Example: samples must have phenotype A and a minimum read depth to be included in the subcohort
- each sample's genotype must be calculated uniquely depending on particular traits
  - Example: sex is not represented within the VCF, so a user needs to integrate sample-level phenotype data to get accurate counts for chrX variants

These three problems map to the three methods necessary in implementing a `Plugin` (sketched after this list):

- `__init__`: given any set of parameters, create a phenotype index that maps each sample to its list of phenotypes.
- `include_sample`: given a sample's variant-level or phenotype data, determine whether to include the sample in the allele count.
  - This takes a `pysam.VariantRecord` as input to represent a particular variant record in a VCF.
  - For more details, consult the pysam VariantRecord docs.
- `process_sample_genotype`: determine how to sum the alleles of a sample's genotype using variant-level or phenotype data.
  - This also takes a `pysam.VariantRecord` as input.
  - An `alt_index` is also passed in as input; this is the index representing the allele of interest within the VCF row. The alt index matches the genotype according to the VCF specification. For instance, a sample with the 2nd alt might have a genotype containing a 2, i.e. `(2,1)`, `(2,0)`, `(2,2)`, etc.
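To make the interface concrete, here is a minimal sketch of a custom plugin. The method signatures and the `BasePlugin` import path are assumptions based on the description above; treat `simple_plugin.py` and the `BasePlugin` source as authoritative.

```python
# my_project_plugin.py - hypothetical sketch; signatures and import path are
# assumptions, see BasePlugin and simple_plugin.py for the real interface.
import pysam

from plugin_system.plugins.base_plugin import BasePlugin  # assumed import path


class MyProjectPlugin(BasePlugin):
    def __init__(self, phenotype_table: dict[str, list[str]]):
        # Build a phenotype index mapping each sample ID to its phenotypes;
        # how this is sourced (e.g. a Terra data table) is project-specific.
        self.phenotype_index = phenotype_table

    def include_sample(self, sample_id: str, record: pysam.VariantRecord) -> bool:
        # Example filter: only count samples annotated with phenotype "A".
        return "A" in self.phenotype_index.get(sample_id, [])

    def process_sample_genotype(
        self, sample_id: str, record: pysam.VariantRecord, alt_index: int
    ) -> tuple[int, int]:
        # Count the alleles in this sample's genotype that match the alt of
        # interest, e.g. genotype (2, 1) with alt_index 2 contributes 1 focus
        # allele out of 2 total alleles.
        genotype = record.samples[sample_id]["GT"]
        focus_count = sum(1 for allele in genotype if allele == alt_index)
        return focus_count, len(genotype)
```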
There are two types of user stories for plugins: one user implements the project-specific plugin, while the other uses an already-built plugin. The former will be called a plugin author, the latter a plugin user. The steps for each are described below.
#### Plugin Author

1. Read through the default implementations defined in the `BasePlugin`.
2. Copy `simple_plugin.py` to your working directory. This has to be in the top-level directory where you will do your CAF generation.
3. Rename the plugin class and file name (e.g. `MyProjectPlugin` and `my_project_plugin.py`). The file name must end in `_plugin.py` to be a valid module.
4. Customize any of the methods described by the `BasePlugin` by implementing them in your own plugin class. For reference material...
   - `BasePlugin` is the parent class by default, so any methods that aren't defined in your plugin will be inherited from it. You can also invoke the `BasePlugin`'s implementations by calling `super().<method_to_invoke>` if you want to do additional work after using the default methods.
   - `GregorPlugin` is a worked example of a specific real-world implementation; refer to it for alternative ways to customize allele frequency generation.
   - Plugin utilities are a set of data transformation util methods that might be useful. For example, if your phenotypical data lives in a Terra data table, you could use `terra_data_table_to_dataframe` to rapidly retrieve the columns most relevant to creating a phenotype index.
#### Plugin User

1. Confirm that you have the variant of interest, [optional] phenotype of interest, VCF path of interest, and name of the implemented plugin class at your disposal.
2. See the `test_plugin_worked_example` function in test_plugin.py for a worked example of how to use a plugin. Generally, the two main components of using a plugin are as follows (see the sketch after this list)...
   - For your `MyProjectPlugin` plugin, instantiate it with the `PluginManager` and any input parameters specified.
   - Call `get_cohort_allele_frequency`, passing in the instantiated `MyProjectPlugin` object with the "plugin" parameter.
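Put together, plugin usage might look roughly like the following sketch. The `PluginManager` method name, import paths, and constructor parameters are assumptions; `test_plugin_worked_example` remains the authoritative reference.

```python
# Hypothetical sketch; method names, import paths, and parameters are
# assumptions - see test_plugin_worked_example in test_plugin.py instead.
from plugin_system.plugin_manager import PluginManager  # assumed import path
from vrs_anvil.evidence import get_cohort_allele_frequency  # assumed import path

# Instantiate your plugin via the PluginManager with its input parameters.
plugin_class = PluginManager().load_plugin("MyProjectPlugin")  # assumed method
my_plugin = plugin_class(phenotype_table={"sample_1": ["A"]})  # hypothetical parameter

# Pass the instantiated plugin in with the "plugin" parameter.
caf = get_cohort_allele_frequency(
    variant_id="ga4gh:VA.example",  # hypothetical VRS ID
    vcf_path="cohort.vcf.gz",
    plugin=my_plugin,
)
```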
## Processing VCF Files (vrs-python)

vrs-python is a GA4GH GKS package centered around creating Variation Representation Specification (VRS) IDs: consistent, globally unique identifiers for variation. Its functionality includes variant ID translation and VCF annotation. It is used as a dependency in `vrs_bulk`, but it can also be used as a standalone package.
For Python usage, see vrs_vcf_annotator.py for an example.
For CLI usage:

```bash
python3 -m ga4gh.vrs.extras.vcf_annotation \
  --vcf_in tests/fixtures/1kGP.chr1.1000.vcf \
  --vcf_out annotated_output.vcf.gz \
  --vrs_pickle_out allele_dicts.pkl \
  --seqrepo_root_dir ~/seqrepo/latest
```
The above annotates an example VCF. Replace `--vcf_out` and `--vrs_pickle_out` with your desired output file paths, where the output can be a compressed VCF (`vcf.gz`) or an uncompressed VCF (`vcf`).
Also, see the VRS Annotator workflow on Dockstore for a way to do this on Terra.
## Additional Notes

- For chromosomes with a ploidy of 1 (mitochondrial calling or sex chromosomes), focus allele counts (AC) and locus allele counts (AN) can have a maximum value of 1. The focus allele count is 1 when the genotype has at least a single allele match (0/1, 1/1, or 1); otherwise it is 0.
## Contributing

This project is open to contributions from the research community. If you are interested in contributing, please contact the project team and see the contributing guide for more information.
## License

This project is licensed under the MIT License - see the LICENSE file for details.