LivecellX is a comprehensive deep learning framework for Python, designed specifically for segmenting, tracking, and analyzing single-cell trajectories in long-term live-cell imaging datasets. We address incorrect segmentation cases, particularly over-segmentation and under-segmentation, with a correction module that operates on the whole-image segmentations produced by deep learning segmentation methods. The framework simplifies data collection through active learning and a human-in-the-loop approach. Furthermore, we showcase how easily cell events can be detected and analyzed, using a mitosis detection task as an example. To the best of our knowledge, we are providing the community with the first microscopy segmentation-correction dataset and the first mitosis trajectory dataset for deep learning training. Our framework achieves near-perfect detection accuracy, exceeding 99%. You can follow our Jupyter notebooks in ./notebooks to reproduce our results.
For more information, installation instructions, and tutorials, please visit our official documentation.
Note: This repository is in a pre-alpha stage. While it currently showcases basic use-cases like image segmentation and cell tracking, our complete version is slated for release in October 2023 alongside our manuscript. In the meantime, you may explore our previous pipeline repository maintained by Xing Lab.
If you encounter issues related to `lap` and `numpy`, or `numba` and `numpy`, please install `numpy` first, then `lap`. Follow the error messages to resolve any version conflicts between `numba` and `numpy`.
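In practice, the install order described above is simply:

```bash
pip install numpy
pip install lap
```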
```bash
pip install -r requirements.txt
pip install -r napari_requirements.txt
pip install -e .  # -e allows an editable installation, useful for development
```
Please refer to the PyTorch official website for the most recent installation instructions. Here we provide two examples used in our own setups.
Install via conda:

```bash
conda install pytorch torchvision -c pytorch
```
On our 2080Ti/3090/4090 workstations with CUDA 11.7:

```bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
```
Check whether you are using CUDA (refer to the PyTorch docs for TPU or other devices):

```python
import torch
torch.cuda.is_available(), torch.cuda.current_device(), torch.cuda.device_count()
```
Please refer to the latest detectron2 documentation to install detectron2 for segmentation if you cannot build it from source with the commands below.
Prebuilt (Easier and preferred by us):
https://detectron2.readthedocs.io/en/latest/tutorials/install.html#install-pre-built-detectron2-linux-only
Build from source:
https://detectron2.readthedocs.io/en/latest/tutorials/install.html#build-detectron2-from-source
```bash
git clone https://github.com/facebookresearch/detectron2.git
python -m pip install -e detectron2
```
For {avi, mp4} movie generation, ffmpeg is required. The conda installation command we used is shown below. For other installation methods, please refer to the ffmpeg official website.

```bash
conda install -c conda-forge ffmpeg
```
For development, install the pre-commit hooks:

```bash
pip install pre-commit
pre-commit install
```
Note: If you already have satisfactory segmentation models or segmentation results, you may skip the Annotation and Segmentation parts below.
input: raw image files

After annotating your imaging datasets, you should have JSON files in COCO format ready for segmentation training.
Apply labelme to your datasets following our annotation protocol.
A fixed version of the labelme2coco implementation is included in our package. Please refer to our tutorial on how to convert your labelme JSON files to COCO format.
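For orientation, the upstream labelme2coco package exposes a one-call conversion. The fixed version bundled with LivecellX may use a different import path or signature, so treat the snippet below as an assumption-laden sketch and follow our tutorial for the exact call:

```python
# Sketch based on the upstream labelme2coco package; the entry point of the
# fixed version bundled with LivecellX may differ (see the tutorial).
import labelme2coco

labelme_folder = "path/to/labelme_jsons"  # placeholder: folder of labelme annotations
export_dir = "path/to/coco_output"        # placeholder: where the COCO json is written
labelme2coco.convert(labelme_folder, export_dir)
```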
For CVAT, please export the annotation results in COCO format, as shown in our annotation protocol.
Segmentation has two phases: training and prediction. If you already have PyTorch or TensorFlow models trained on your dataset, you may skip the training phase.
input: COCO json files
output: pytorch model (.pth file)
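As a minimal sketch of the training phase, fine-tuning a detectron2 Mask R-CNN on the COCO JSON files might look like the following. The dataset name, file paths, iteration count, and the single "cell" class are placeholders, not LivecellX defaults:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the COCO-format annotations produced in the annotation step.
register_coco_instances("cells_train", {}, "train_coco.json", "train_images/")  # placeholder paths

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("cells_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.MAX_ITER = 1000               # placeholder; tune to your dataset
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1      # assumed single "cell" class

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()  # weights are written to cfg.OUTPUT_DIR as model_final.pth
```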
input: raw images, a trained machine-learning-based model
output: SingleCellStatic json outputs
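A corresponding sketch of the prediction phase using detectron2's DefaultPredictor (paths and the score threshold are placeholders; wrapping the predicted masks into SingleCellStatic objects is covered in the notebooks):

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1            # must match the training config
cfg.MODEL.WEIGHTS = "output/model_final.pth"   # placeholder: trained weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5    # assumed confidence threshold
predictor = DefaultPredictor(cfg)

img = cv2.imread("raw_images/t000.png")        # placeholder raw image
outputs = predictor(img)
instances = outputs["instances"].to("cpu")
masks = instances.pred_masks.numpy()           # one boolean mask per detected cell
```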
input: SingleCellStatic
- contour
- bounding box
output: SingleCellTrajectoryCollection (a minimal linking sketch follows this list)
- holds a collection of SingleCellTrajectory objects, each containing single-cell time-lapse data
- trajectory-wise features can be calculated after the track stage or at the trajectory stage
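The core of the track stage is linking detections across consecutive frames. The snippet below is a generic illustration of that idea using nearest-centroid assignment; it is not LivecellX's actual tracker:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(centroids_t, centroids_t1, max_dist=50.0):
    """Match cells in frame t to frame t+1 by centroid distance.

    centroids_t: (N, 2) array of (y, x) centroids in frame t.
    centroids_t1: (M, 2) array of (y, x) centroids in frame t+1.
    Returns a list of (index_in_t, index_in_t1) matches.
    """
    # Pairwise Euclidean distances between the two frames' centroids.
    cost = np.linalg.norm(centroids_t[:, None, :] - centroids_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Discard matches beyond max_dist: likely a cell appearing or disappearing.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```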
input: SingleCellTrajectoryCollection
output:
- track.movie: generate_single_trajectory_movie()
- visualizer: viz_traj, viz_traj_collection
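A hypothetical usage sketch based only on the names listed above; the import path and arguments are assumptions, and the notebooks in ./notebooks show the real signatures:

```python
# Hypothetical: the import path and arguments below are assumptions inferred
# from the API names listed above, not verified LivecellX signatures.
from livecellx.track.movie import generate_single_trajectory_movie  # path assumed

generate_single_trajectory_movie(trajectory, save_path="trajectory.mp4")  # args assumed
```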
{Documentation placeholder} [Move to docs/ and auto-generate via readthedocs]
A class designed to hold all information about a single cell at a given time point (a conceptual sketch follows the attribute list below).
attributes:
- time point
- id (optional)
- contour coordinates
- cell bounding box
- img crop (lazy)
- feature map
- original img (reference/pointer)
- timeframe_set
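For orientation, here is a conceptual sketch of these attributes as a Python dataclass. This is not the actual LivecellX class definition; field names, types, and the crop logic are assumptions:

```python
# Conceptual sketch only -- not the real LivecellX implementation.
from dataclasses import dataclass, field
from typing import Optional, Set
import numpy as np

@dataclass
class SingleCellStaticSketch:
    timeframe: int                         # time point
    contour: np.ndarray                    # (N, 2) contour coordinates
    bbox: tuple                            # cell bounding box (min_y, min_x, max_y, max_x)
    id: Optional[int] = None               # optional id
    feature_map: Optional[dict] = None     # computed per-cell features
    img_dataset: Optional[object] = None   # reference/pointer to the original images
    timeframe_set: Set[int] = field(default_factory=set)

    def get_img_crop(self) -> np.ndarray:
        """Lazily crop the cell image from the original image (sketch)."""
        min_y, min_x, max_y, max_x = self.bbox
        img = self.img_dataset[self.timeframe]  # assumed frame indexing
        return img[min_y:max_y, min_x:max_x]
```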