diff --git a/.napari/DESCRIPTION.md b/.napari/DESCRIPTION.md
index f516dce..760ea5b 100644
--- a/.napari/DESCRIPTION.md
+++ b/.napari/DESCRIPTION.md
@@ -1,13 +1,13 @@
 ## Description
-A napari plugin to automatically count lung organoids from microscopy imaging data. A Faster R-CNN model was trained on patches of microscopy data. Model inference is run using a sliding window approach, with a 50% overlap and the option for predictiing on multiple window sizes and scales, the results of which are then merged using NMS.
+A napari plugin to automatically count lung organoids from microscopy imaging data. A Faster R-CNN model was trained on patches of microscopy data. Model inference is run using a sliding window approach, with a 50% overlap and the option for predicting on multiple window sizes and scales, the results of which are then merged using NMS.
 
-![Alt Text](https://github.com/HelmholtzAI-Consultants-Munich/napari-organoid-counter/blob/dev-v.0.2/readme-content/demo-plugin-v2)
+![Alt Text](https://github.com/HelmholtzAI-Consultants-Munich/napari-organoid-counter/blob/main/readme-content/demo-plugin-v2.gif)
 
 ## What's new in v2?
 Here is a list of the main changes v2 of napari-organoid-counter offers:
 * Use of Faster R-CNN model for object detection
-* Pyramid model inference with a sliding window approach and tunable parameters for window size and window downsamplign rate
+* Pyramid model inference with a sliding window approach and tunable parameters for window size and window downsampling rate
 * Model confidence added as tunable parameter
 * Allow to load and correct existing annotations (note: these must have been saved previously from v2 of this plugin)
 * Object ID along with model confidence displayed in the viewer - this can now be related to box id in csv file of extracted features
 
@@ -18,6 +18,17 @@ Technical Extensions:
 * Allows for Python 3.10
 * Extensive testing
 
+## Installation
+
+You can install `napari-organoid-counter` via [pip](https://pypi.org/project/napari-organoid-counter/):
+
+    pip install napari-organoid-counter
+
+
+To install the latest development version:
+
+    pip install git+https://github.com/HelmholtzAI-Consultants-Munich/napari-organoid-counter.git
+
 ## Quickstart
 
@@ -26,7 +37,12 @@ The use of the napari-organoid-counter plugin is straightforward. Here is a step
 2. You can then select the layer you wish to work on by the drop-down box at the top of the input configurations
 3. To improve the way the image is visualised you can pre-process them by clicking the _Preprocess_ button and the image layer will automatically be updated with the result
 4. If you have a Faster R-CNN model you wish to use for the prediction, you can browser and select this by clicking on the _Choose_ button. Otherwise, the default model will be automatically downloaded from [here](https://zenodo.org/record/7708763#.ZDe6pS8Rpqs). Note that your own model must follow the specifications described here _(TODO)_.
-5. You can adjust the _Window sizes_ and _Downsampling_ parameters to define the window size in the sliding window inference and the downsampling that is performed on the window. If you have multiple objects with different sizes, it might be good to set multiple window sizes, with corresponding downsampling rates. You can seperate these with a comma in the text box (e.g. ```2048, 512```). After you have set _Window sizes_ and _Downsampling_ hit **Enter** for the chanegs to be accepted.
+5. You can adjust the _Window sizes_ and _Downsampling_ parameters to define the window size in the sliding window inference and the downsampling that is performed on the image. If you have multiple objects with different sizes, it might be good to set multiple window sizes, with corresponding downsampling rates. You can separate these with a comma in the text box (e.g. ```2048, 512```). After you have set _Window sizes_ and _Downsampling_, hit **Enter** in each field for the changes to be accepted.
+
+**_Downsampling parameter:_** To detect large organoids (and ignore smaller structures) you may need to increase the downsampling rate, whereas if your organoids are small and are being missed by the algorithm, consider reducing the downsampling rate.
+
+**_Window size parameter:_** The window size can also impact the number of objects detected: typically a ratio of 512 to 1 between window size and downsampling rate would give optimal results, while larger window sizes would lead to a drop in performance. However, please note that small window sizes will significantly impact the runtime of the algorithm.
+
 6. By clicking the _Run Organoid Counter_ button the detection algorithm will run and a new shapes layer will be added to the viewer, with bounding boxes are placed around the detected organoid. You can add, edit or remove boxes using the _layer controls_ window (top left). The _Number of detected organoids_ will show you the number of organoids in the layer in real time. You can switch between viewing the box ids and model confidence for each box by toggling the _display text_ box in the _layer controls_ window. Boxes added by the user will by default have a confidence of 1.
 7. If you feel that your model is over- or under-predicting you can use the _Model confidence_ scroll bar and select the value which best suits your problem. Default confidence is set to 0.8.
 8. If you objects are typically bigger or smaller than those displayed you can use the _Minimum Diameter_ slider to set the minimum diameter of your objects. Default value is 30 um.
@@ -36,34 +52,39 @@ The use of the napari-organoid-counter plugin is straightforward. Here is a step
 
 ## Getting Help
 
-If you encounter any problems, please [file an issue] along with a detailed description.
+If you encounter any problems, please [file an issue](https://github.com/HelmholtzAI-Consultants-Munich/napari-organoid-counter/issues) along with a detailed description.
 
 ## Intended Audience & Supported Data
 
-This plugin has been developed and tested with 2D CZI microscopy images of lunch organoids. The images had been previously converted from a 3D stack to 2D using an extended focus algorithm. This plugin may be used as a baseline for developers who wish to extend the plugin to work with other types of input images and/or improve the detection algorithm.
+This plugin has been developed and tested with 2D CZI microscopy images of lung organoids. The images have been previously converted from a 3D stack to 2D using an extended focus algorithm. This plugin only supports single channel grayscale images. This plugin may be used as a baseline for developers who wish to extend the plugin to work with other types of input images and/or improve the detection algorithm.
 
 ## Dependencies
 
-```napari-organoid-counter``` uses the ```napari-aicsimageio```[1] plugin for reading and processing CZI images.
+```napari-organoid-counter``` uses the ```napari-aicsimageio```[1] [2] plugin for reading and processing CZI images.
+
+[1] Eva Maxfield Brown, Dan Toloudis, Jamie Sherman, Madison Swain-Bowden, Talley Lambert, AICSImageIO Contributors (2021). AICSImageIO: Image Reading, Metadata Conversion, and Image Writing for Microscopy Images in Pure Python [Computer software]. GitHub. https://github.com/AllenCellModeling/aicsimageio
 
-[1] AICSImageIO Contributors (2021). AICSImageIO: Image Reading, Metadata Conversion, and Image Writing for Microscopy Images in Pure Python [Computer software]. GitHub. https://github.com/AllenCellModeling/aicsimageio
+[2] Eva Maxfield Brown, Talley Lambert, Peter Sobolewski, Napari-AICSImageIO Contributors (2021). Napari-AICSImageIO: Image Reading in Napari using AICSImageIO [Computer software]. GitHub. https://github.com/AllenCellModeling/napari-aicsimageio
 
 ## How to Cite
 
 If you use this plugin for your work, please cite it using the following:
+
+> Christina Bukas, Harshavardhan Subramanian, & Marie Piraud. (2023). HelmholtzAI-Consultants-Munich/napari-organoid-counter: v0.2.0 (v0.2.0). Zenodo. https://doi.org/10.5281/zenodo.7859571
+>
+bibtex:
 ```
 @software{christina_bukas_2022_6457904,
-  author       = {Christina Bukas},
+  author       = {Christina Bukas and Harshavardhan Subramanian and Marie Piraud},
   title        = {{HelmholtzAI-Consultants-Munich/napari-organoid-
-                   counter: first release of napari plugin for lung
+                   counter: second release of the napari plugin for lung
                    organoid counting}},
   month        = apr,
-  year         = 2022,
+  year         = 2023,
   publisher    = {Zenodo},
-  version      = {v0.1.0-beta},
-  doi          = {10.5281/zenodo.6457904},
-  url          = {https://doi.org/10.5281/zenodo.6457904}
+  version      = {v0.2.0},
+  doi          = {10.5281/zenodo.7859571},
+  url          = {https://doi.org/10.5281/zenodo.7859571}
 }
 ```
-
diff --git a/README.md b/README.md
index c823863..4f06459 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@
 [![codecov](https://codecov.io/gh/HelmholtzAI-Consultants-Munich/napari-organoid-counter/branch/main/graph/badge.svg)](https://codecov.io/gh/HelmholtzAI-Consultants-Munich/napari-organoid-counter)
 [![napari hub](https://img.shields.io/endpoint?url=https://api.napari-hub.org/shields/napari-organoid-counter)](https://napari-hub.org/plugins/napari-organoid-counter)
 
-A napari plugin to automatically count lung organoids from microscopy imaging data. *Note that this only works for one channel grayscale images.*
+A napari plugin to automatically count lung organoids from microscopy imaging data. *Note that this plugin only supports single channel grayscale images.*
 
 ![Alt Text](https://github.com/HelmholtzAI-Consultants-Munich/napari-organoid-counter/blob/main/readme-content/demo-plugin-v2.gif)
 
@@ -84,17 +84,22 @@ If you encounter any problems, please [file an issue] along with a detailed desc
 ## Citing
 
 If you use this plugin for your work, please cite it using the following:
+
+> Christina Bukas, Harshavardhan Subramanian, & Marie Piraud. (2023). HelmholtzAI-Consultants-Munich/napari-organoid-counter: v0.2.0 (v0.2.0). Zenodo. https://doi.org/10.5281/zenodo.7859571
+>
+bibtex:
 ```
 @software{christina_bukas_2022_6457904,
-  author       = {Christina Bukas},
+  author       = {Christina Bukas and Harshavardhan Subramanian and Marie Piraud},
   title        = {{HelmholtzAI-Consultants-Munich/napari-organoid-
-                   counter: first release of napari plugin for lung
+                   counter: second release of the napari plugin for lung
                    organoid counting}},
   month        = apr,
-  year         = 2022,
+  year         = 2023,
   publisher    = {Zenodo},
-  version      = {v0.1.0-beta},
-  doi          = {10.5281/zenodo.6457904},
-  url          = {https://doi.org/10.5281/zenodo.6457904}
+  version      = {v0.2.0},
+  doi          = {10.5281/zenodo.7859571},
+  url          = {https://doi.org/10.5281/zenodo.7859571}
 }
 ```
+
diff --git a/napari_organoid_counter/_orgacount.py b/napari_organoid_counter/_orgacount.py
index b710fea..3f22a94 100644
--- a/napari_organoid_counter/_orgacount.py
+++ b/napari_organoid_counter/_orgacount.py
@@ -36,12 +36,13 @@ def load_model_checkpoint(self, model_path):
         ckpt = torch.load(model_path, map_location=self.device)
         self.model.load_state_dict(ckpt) #.state_dict())
 
-    def sliding_window(self, test_img, step, window_size, rescale_factor, pred_bboxes=[], scores_list=[]):
+    def sliding_window(self, test_img, step, window_size, rescale_factor, prepadded_height, prepadded_width, pred_bboxes=[], scores_list=[]):
 
-        img_height, img_width = test_img.size(2), test_img.size(3)
+        #img_height, img_width = test_img.size(2), test_img.size(3)
 
-        for i in range(0, img_height, step):
-            for j in range(0, img_width, step):
+        for i in range(0, prepadded_height, step):
+            for j in range(0, prepadded_width, step):
+                # crop
                 img_crop = test_img[:, :, i:(i+window_size), j:(j+window_size)]
                 # get predictions
@@ -77,8 +78,8 @@ def run(self,
             window_size = round(window_size * rescale_factor)
             step = round(window_size * window_overlap)
             # prepare image for model - norm, tensor, etc.
-            ready_img = prepare_img(img, step, window_size, rescale_factor, self.transfroms , self.device)
-            bboxes, scores = self.sliding_window(ready_img, step, window_size, rescale_factor, bboxes, scores)
+            ready_img, prepadded_height, prepadded_width = prepare_img(img, step, window_size, rescale_factor, self.transfroms , self.device)
+            bboxes, scores = self.sliding_window(ready_img, step, window_size, rescale_factor, prepadded_height, prepadded_width, bboxes, scores)
 
         bboxes = torch.stack(bboxes)
         scores = torch.stack(scores)
diff --git a/napari_organoid_counter/_utils.py b/napari_organoid_counter/_utils.py
index 8053e93..8b10a08 100644
--- a/napari_organoid_counter/_utils.py
+++ b/napari_organoid_counter/_utils.py
@@ -81,7 +81,6 @@ def prepare_img(test_img, step, window_size, rescale_factor, trans, device):
     pad_x = (img_height//step)*step + window_size - img_height
     pad_y = (img_width//step)*step + window_size - img_width
     test_img = np.pad(test_img, ((0, int(pad_x)), (0, int(pad_y))), mode='edge')
-
     # normalise and convert to RGB - model input has size 3
     test_img = (test_img-np.min(test_img))/(np.max(test_img)-np.min(test_img))
     test_img = (255*test_img).astype(np.uint8)
@@ -92,7 +91,7 @@
     test_img = torch.unsqueeze(test_img, axis=0) #[B, C, H, W]
     test_img = test_img.to(device)
 
-    return test_img
+    return test_img, img_height, img_width
 
 def apply_nms(bbox_preds, scores_preds, iou_thresh=0.5):
     """ Function applies non max suppression to iteratively remove lower scoring boxes which have an IoU greater than iou_threshold
diff --git a/napari_organoid_counter/_widget.py b/napari_organoid_counter/_widget.py
index 211a84c..d10f596 100644
--- a/napari_organoid_counter/_widget.py
+++ b/napari_organoid_counter/_widget.py
@@ -25,7 +25,7 @@ class OrganoidCounterWidget(QWidget):
         The current napari viewer
     model_path: string, default 'model-weights/model_v1.ckpt'
         The relative path to the detection model used for organoid counting - will append current working dir to this path
-    window_sizes: list of ints, default [2048]
+    window_sizes: list of ints, default [1024]
         A list with the sizes of the windows on which the model will be run.
         If more than one window_size is given then the model will run on several window sizes and then combne the results
     downsampling:list of ints, default [2]
@@ -59,12 +59,12 @@ class OrganoidCounterWidget(QWidget):
     def __init__(self, 
                  napari_viewer,
                  model_path: str = 'model/model_v1.ckpt',
-                 window_sizes: List = [2048],
+                 window_sizes: List = [1024],
                  downsampling: List = [2],
                  min_diameter: int = 30,
                  confidence: float = 0.8):
         super().__init__()
-
+        # assign class variables
         self.viewer = napari_viewer
         self.model_path = os.path.join(os.getcwd(), model_path)
@@ -100,7 +100,9 @@ def __init__(self,
         self.viewer.layers.events.inserted.connect(self._added_layer)
         self.viewer.layers.events.removed.connect(self._removed_layer)
         self.viewer.layers.selection.events.changed.connect(self._sel_layer_changed)
-
+
+        #self.slider_changed = False # used for changing slider and text of min diameter
+
     def _sel_layer_changed(self, event):
         cur_layer_list = list(self.viewer.layers.selection)
         if len(cur_layer_list)==0: return
@@ -115,6 +117,7 @@ def _sel_layer_changed(self, event):
             # update min diameter text and slider with previous value of that layer
             self.min_diameter = self.stored_diameters[self.cur_shapes_name]
             self.min_diameter_slider.setValue(self.min_diameter)
+            #self.min_diameter_label.setText('Minimum Diameter [um]: ')
             self.min_diameter_label.setText('Minimum Diameter [um]: '+str(self.min_diameter))
             # update confidence text and slider with previous value of that layer
             self.confidence = self.stored_confidences[self.cur_shapes_name]
@@ -232,12 +235,15 @@ def _on_run_click(self):
             return
         # update the viewer with the new bboxes
         labels_layer_name = 'Labels-'+self.image_layer_name
+        if labels_layer_name in self.shape_layer_names:
+            show_info('Found existing labels layer. Please remove or rename it and try again!')
+            return
         # run inference
         self.organoiDL.run(img_data, 
                            labels_layer_name,
                            self.window_sizes,
                            self.downsampling,
-                           window_overlap = 1)# 0.5)
+                           window_overlap = 0.5)
         # set the confidence threshold, remove small organoids and get bboxes in format o visualise
         bboxes, scores, box_ids = self.organoiDL.apply_params(labels_layer_name, self.confidence, self.min_diameter)
@@ -299,13 +305,31 @@ def _rerun(self):
         # and get new boxes, scores and box ids based on new confidence and min_diameter values
         bboxes, scores, box_ids = self.organoiDL.apply_params(self.cur_shapes_name, self.confidence, self.min_diameter)
         self._update_vis_bboxes(bboxes, scores, box_ids, self.cur_shapes_name)
-
+
     def _on_diameter_changed(self):
         """ Is called whenever user changes the Minimum Diameter slider """
         self.min_diameter = self.min_diameter_slider.value()
         self.min_diameter_label.setText('Minimum Diameter [um]: '+str(self.min_diameter))
         self._rerun()
+    '''
+    def _on_diameter_slider_changed(self):
+        """ Is called whenever user changes the Minimum Diameter slider """
+        self.min_diameter = self.min_diameter_slider.value()
+        self.slider_changed = True
+        if int(self.min_diameter_textbox.text())!= self.min_diameter:
+            self.min_diameter_textbox.setText(str(self.min_diameter))
+        self._rerun()
+        self.slider_changed = False
+
+    def _on_diameter_textbox_changed(self):
+        if self.slider_changed: return
+        self.min_diameter = int(self.min_diameter_textbox.text())
+        if self.min_diameter_slider.value() != self.min_diameter:
+            self.min_diameter_slider.setValue(self.min_diameter)
+        self._rerun()
+    '''
+
     def _on_confidence_changed(self):
         """ Is called whenever user changes the confidence slider """
         self.confidence = self.confidence_slider.value()/100
@@ -435,7 +459,7 @@ def shapes_event_handler(self, event):
         new_ids = self.viewer.layers[self.cur_shapes_name].properties['box_id']
         self._update_num_organoids(len(new_ids))
 
-        # check if duplicate ids - this happens when user adds a box, currently only available fix current_properties doens't work
+        # check if duplicate ids - this happens when user adds a box, currently only available fix current_properties doesn't work
         if len(new_ids) > len(set(new_ids)):
             num_sim = len(new_ids) - len(set(new_ids))
             if num_sim > 1: print('this should not happen!!!!!!!!!!!!!!!!!')
@@ -617,12 +641,21 @@ def _setup_min_diameter_box(self):
         self.min_diameter_slider.setMaximum(100)
         self.min_diameter_slider.setSingleStep(10)
         self.min_diameter_slider.setValue(self.min_diameter)
+        #self.min_diameter_slider.valueChanged.connect(self._on_diameter_slider_changed)
        self.min_diameter_slider.valueChanged.connect(self._on_diameter_changed)
         # set up the label
+        #self.min_diameter_label = QLabel('Minimum Diameter [um]: ', self)
         self.min_diameter_label = QLabel('Minimum Diameter [um]: '+str(self.min_diameter), self)
         self.min_diameter_label.setAlignment(Qt.AlignCenter | Qt.AlignVCenter)
+        '''
+        # set up text box
+        self.min_diameter_textbox = QLineEdit(self)
+        self.min_diameter_textbox.setText(str(self.min_diameter))
+        self.min_diameter_textbox.returnPressed.connect(self._on_diameter_textbox_changed)
+        '''
         # and add all these to the layout
         hbox.addWidget(self.min_diameter_label)
+        #hbox.addWidget(self.min_diameter_textbox)
         hbox.addSpacing(15)
         hbox.addWidget(self.min_diameter_slider)
         #self.min_diameter_box.setLayout(hbox)
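
For reference, the inference flow that the changes above implement (pad the image so whole windows fit, slide a window with 50% overlap across the pre-padding extent, shift each window's detections back to image coordinates, and merge everything with NMS) can be sketched as follows. This is a simplified, self-contained illustration: `prepare_img`, `sliding_window` and the `predict` callback below are stand-ins that skip the rescaling, normalisation and Faster R-CNN model used by the actual plugin code.

```python
# Simplified sketch of the padded sliding-window inference with NMS merging.
# `predict` is a placeholder for per-window model inference; the real plugin
# additionally rescales windows and normalises/converts the image for Faster R-CNN.
import numpy as np
import torch
from torchvision.ops import nms


def prepare_img(img: np.ndarray, step: int, window_size: int):
    """Pad the image so every window is full-sized; return the pre-padding size."""
    h, w = img.shape
    pad_x = (h // step) * step + window_size - h
    pad_y = (w // step) * step + window_size - w
    padded = np.pad(img, ((0, int(pad_x)), (0, int(pad_y))), mode='edge')
    tensor = torch.from_numpy(padded).float()[None, None]  # [B, C, H, W]
    return tensor, h, w


def sliding_window(tensor, step, window_size, prepadded_height, prepadded_width, predict):
    """Run `predict` on each window and merge overlapping boxes with NMS."""
    all_boxes, all_scores = [], []
    for i in range(0, prepadded_height, step):        # rows of the original extent
        for j in range(0, prepadded_width, step):     # cols of the original extent
            crop = tensor[:, :, i:i + window_size, j:j + window_size]
            boxes, scores = predict(crop)             # boxes in window coords, xyxy
            boxes = boxes.clone()
            boxes[:, [0, 2]] += j                     # shift x back to image coords
            boxes[:, [1, 3]] += i                     # shift y back to image coords
            all_boxes.append(boxes)
            all_scores.append(scores)
    all_boxes, all_scores = torch.cat(all_boxes), torch.cat(all_scores)
    keep = nms(all_boxes, all_scores, iou_threshold=0.5)
    return all_boxes[keep], all_scores[keep]


# Toy usage: a 50% overlap corresponds to step = window_size // 2.
dummy_predict = lambda crop: (torch.empty((0, 4)), torch.empty((0,)))
img = np.random.rand(600, 800).astype(np.float32)
tensor, h, w = prepare_img(img, step=256, window_size=512)
print(sliding_window(tensor, 256, 512, h, w, dummy_predict))
```

With `window_overlap = 0.5`, as now set in `_on_run_click`, the step equals half the window size, so an interior region is covered by up to four overlapping windows before NMS deduplicates the boxes.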