|
--- |
|
|
license: apache-2.0 |
|
language: |
|
- en |
|
tags: |
|
- cell segmentation |
|
- stardist |
|
- hover-net |
|
metrics: |
|
- f1-score |
|
pipeline_tag: image-segmentation |
|
library_name: transformers |
|
--- |
|
|
|
# Model Card for cell-seg-sribd |
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
This repository provides the solution of team Sribd-med for the NeurIPS-CellSeg Challenge. The details of our method are described in our paper [Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images]. Some parts of the code are adapted from the baseline code of the NeurIPS-CellSeg-Baseline repository.
|
|
|
You can reproduce our method step by step as follows:
|
|
|
|
|
### How to Get Started with the Model |
|
|
|
Install the requirements with `python -m pip install -r requirements.txt`.
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
The competition training and tuning data can be downloaded from https://neurips22-cellseg.grand-challenge.org/dataset/. In addition, three public datasets can be downloaded from the following links: Cellpose (https://www.cellpose.org/dataset), Omnipose (http://www.cellpose.org/dataset_omnipose), and Sartorius (https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/overview).
|
|
|
## Environments and Requirements

Install the requirements with:
|
|
|
```shell |
|
python -m pip install -r requirements.txt |
|
``` |
|
|
|
## Dataset |
|
The competition training and tuning data can be downloaded from https://neurips22-cellseg.grand-challenge.org/dataset/ |
|
In addition, three public datasets can be downloaded from the following links:
|
Cellpose: https://www.cellpose.org/dataset |
|
Omnipose: http://www.cellpose.org/dataset_omnipose |
|
Sartorius: https://www.kaggle.com/competitions/sartorius-cell-instance-segmentation/overview |
|
|
|
## Automatic cell classification |
|
In this step, the images are classified into four classes.

Put all the images (competition + Cellpose + Omnipose + Sartorius) in one folder (data/allimages).

Run the classification code:
|
|
|
```shell |
|
python classification/unsup_classification.py |
|
``` |
|
The results will be stored in data/classification_results/.
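
The actual grouping logic lives in classification/unsup_classification.py; the sketch below only illustrates the general idea of clustering the images into four groups by low-level cues, and both the per-channel statistics and the k-means step are assumptions rather than the script's real features:

```python
# Illustration only -- the real grouping logic lives in classification/unsup_classification.py.
# Assumption: images are grouped into four classes by clustering simple low-level
# statistics (per-channel mean and standard deviation) with k-means.
import glob
import os
import shutil

import numpy as np
from skimage import io
from sklearn.cluster import KMeans

image_paths = sorted(glob.glob("data/allimages/*"))

def low_level_features(path):
    img = io.imread(path).astype(np.float32)
    if img.ndim == 2:                                   # grayscale -> 3 channels
        img = np.stack([img] * 3, axis=-1)
    img = img[..., :3]
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

features = np.stack([low_level_features(p) for p in image_paths])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

for path, label in zip(image_paths, labels):
    out_dir = f"data/classification_results/class{label}"
    os.makedirs(out_dir, exist_ok=True)
    shutil.copy(path, out_dir)
```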
|
|
|
## CNN-based classification model training

Using the classified images in data/classification_results/, a ResNet-18 classifier is trained:
|
```shell |
|
python classification/train_classification.py |
|
``` |
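
For orientation, a minimal training sketch for such a ResNet-18 classifier is shown below; the folder layout (class0 ... class3 sub-folders), the hyper-parameters, and the checkpoint name are assumptions rather than the script's actual settings:

```python
# Minimal sketch, not the actual classification/train_classification.py.
# Assumes data/classification_results/ contains class0 ... class3 sub-folders;
# hyper-parameters and the checkpoint name are placeholders.
import os

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/classification_results", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 4)           # four image classes
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                                 # epoch count is arbitrary here
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()

os.makedirs("models", exist_ok=True)
torch.save(model.state_dict(), "models/classifier_resnet18.pth")
```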
|
## Segmentation Training |
|
Pre-train convnext-stardist using all the images (data/allimages):
|
```shell |
|
python train_convnext_stardist.py |
|
``` |
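
As a rough picture of what a ConvNeXt-StarDist model looks like (the real architecture is defined in train_convnext_stardist.py; the backbone choice, number of rays, and single-scale decoder here are simplifying assumptions):

```python
# Architecture sketch only -- the actual model is defined in train_convnext_stardist.py.
# Assumptions: a ConvNeXt encoder from timm, 32 rays, and a single-scale decoder
# that simply upsamples the deepest feature map.
import timm
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNeXtStarDist(nn.Module):
    def __init__(self, n_rays=32):
        super().__init__()
        self.encoder = timm.create_model(
            "convnext_small", pretrained=False, features_only=True
        )
        channels = self.encoder.feature_info.channels()[-1]
        self.prob_head = nn.Conv2d(channels, 1, kernel_size=1)       # object probability
        self.dist_head = nn.Conv2d(channels, n_rays, kernel_size=1)  # radial distances

    def forward(self, x):
        feat = self.encoder(x)[-1]
        prob = torch.sigmoid(self.prob_head(feat))
        dist = F.relu(self.dist_head(feat))
        # Upsample predictions back to the input resolution.
        size = x.shape[-2:]
        prob = F.interpolate(prob, size=size, mode="bilinear", align_corners=False)
        dist = F.interpolate(dist, size=size, mode="bilinear", align_corners=False)
        return prob, dist

model = ConvNeXtStarDist(n_rays=32)
prob, dist = model(torch.randn(1, 3, 256, 256))
```

The convnext-hover model trained further below can be pictured analogously, with the ray-distance head replaced by HoVer-Net-style horizontal and vertical distance maps.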
|
For classes 0, 2, and 3, fine-tune on the classified data (taking class 1 as an example):
|
```shell |
|
python finetune_convnext_stardist.py model_dir=(The pretrained convnext-stardist model) data_dir='data/classification_results/class1' |
|
``` |
|
For class 1, train convnext-hover from scratch using the classified class 3 data:
|
```shell |
|
python train_convnext_hover.py data_dir='data/classification_results/class3' |
|
``` |
|
|
|
Finally, four segmentation models will be trained. |
|
|
|
## Trained models |
|
The models are in models/. |
|
|
|
## Inference |
|
The inference process includes classification and segmentation. |
|
```shell |
|
python predict.py -i input_path -o output_path --model_path './models' |
|
``` |
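
predict.py performs both stages internally; the sketch below only illustrates the classify-then-dispatch idea. The checkpoint file names and the example image path are assumptions, not the actual contents of models/ or the predict.py API:

```python
# Sketch of the classify-then-dispatch idea behind predict.py; the checkpoint
# file names and the example image path below are assumptions, not the actual
# contents of models/ or the predict.py API.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

CHECKPOINTS = {                       # hypothetical per-class segmentation checkpoints
    0: "models/convnext_stardist_class0.pth",
    1: "models/convnext_hover_class1.pth",
    2: "models/convnext_stardist_class2.pth",
    3: "models/convnext_stardist_class3.pth",
}

# Load the ResNet-18 image classifier trained earlier.
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 4)
classifier.load_state_dict(torch.load("models/classifier_resnet18.pth", map_location="cpu"))
classifier.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
image = Image.open("input_path/example.png").convert("RGB")

with torch.no_grad():
    cls = classifier(preprocess(image).unsqueeze(0)).argmax(dim=1).item()

# The matching class-specific segmentation model is then applied to the image.
print(f"Image routed to class {cls}, segmenting with {CHECKPOINTS[cls]}")
```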
|
|
|
## Evaluation |
|
Calculate the F1 score for evaluation:
|
```shell |
|
python compute_metric.py --gt_path path_to_labels --seg_path output_path |
|
``` |
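
compute_metric.py implements the official metric; as a rough illustration only, an instance-level F1 score can be computed as below, where the IoU threshold of 0.5 and the greedy matching are assumptions:

```python
# Illustrative instance-level F1 computation, not the actual compute_metric.py.
# Assumptions: greedy matching of instances at an IoU threshold of 0.5.
import numpy as np

def instance_f1(gt, pred, iou_threshold=0.5):
    """gt and pred are integer instance label maps; 0 is background."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]

    matched_pred = set()
    tp = 0
    for g in gt_ids:
        g_mask = gt == g
        best_iou, best_p = 0.0, None
        for p in pred_ids:
            if p in matched_pred:
                continue
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou >= iou_threshold:
            tp += 1
            matched_pred.add(best_p)

    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

# Toy example: one ground-truth cell, correctly predicted -> F1 = 1.0
gt = np.zeros((4, 4), dtype=int); gt[:2, :2] = 1
pred = np.zeros((4, 4), dtype=int); pred[:2, :2] = 1
print(instance_f1(gt, pred))
```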
|
|
|
## Results |
|
The tuning set F1 score of our method is 0.8795. On our local workstation, the running time of our method exceeded the time tolerance on zero of the 101 cases in the tuning set.