---
license: apache-2.0
tags:
- image-segmentation
- vision
datasets:
- coco
---

# DETRs with Collaborative Hybrid Assignments Training

## Introduction

In this work, we present Co-DETR, a novel collaborative hybrid assignments training scheme that learns more efficient and effective DETR-based detectors from versatile label assignments.
1. **Encoder optimization**: The proposed training scheme easily enhances the encoder's learning ability in end-to-end detectors by training multiple parallel auxiliary heads supervised by one-to-many label assignments.
2. **Decoder optimization**: We construct extra customized positive queries by extracting the positive coordinates from these auxiliary heads to improve attention learning in the decoder.
3. **State-of-the-art performance**: Co-DETR with ViT-Large (304M parameters) is **the first model to achieve 66.0 box AP on COCO test-dev.**

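To make the contrast between the decoder's one-to-one matching and the auxiliary heads' one-to-many supervision concrete, here is a toy sketch. This is not the repository's code: the helper names, the cost matrix, and the "top-k cheapest queries per ground-truth box" rule are simplified illustrations of the two assignment styles.

```python
from itertools import permutations

def one_to_one(cost):
    """One-to-one (Hungarian-style) matching by brute force:
    each ground-truth box is supervised by exactly one query."""
    n_q, n_gt = len(cost), len(cost[0])
    best = min(permutations(range(n_q), n_gt),
               key=lambda qs: sum(cost[q][g] for g, q in enumerate(qs)))
    return sorted((q, g) for g, q in enumerate(best))

def one_to_many(cost, k=2):
    """One-to-many matching: each ground-truth box supervises its
    k cheapest queries, yielding denser positive supervision."""
    n_q, n_gt = len(cost), len(cost[0])
    pairs = []
    for g in range(n_gt):
        for q in sorted(range(n_q), key=lambda q: cost[q][g])[:k]:
            pairs.append((q, g))
    return sorted(pairs)

# Toy matching cost: 4 queries x 2 ground-truth boxes.
cost = [[0.1, 0.9],
        [0.8, 0.2],
        [0.3, 0.4],
        [0.7, 0.6]]
print(one_to_one(cost))   # 2 positive (query, gt) pairs
print(one_to_many(cost))  # 4 positive pairs: denser supervision signal
```

The point of the scheme is that the one-to-many assignments (used only at training time, in the auxiliary heads) produce more positive pairs to supervise the encoder, while the deployed decoder keeps its one-to-one matching.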
## Model Zoo

The Co-DETR model trained on the COCO dataset will be released in the near future.

| Model | Backbone | Aug | Dataset | box AP (val) | box AP (test-dev) |
| ------ | -------- | --- | ------- | ------------ | ----------------- |
| Co-DETR | ViT-L | DETR | COCO | 65.4 | - |
| Co-DETR (TTA) | ViT-L | DETR | COCO | 65.9 | 66.0 |

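The TTA row refers to test-time augmentation: the image is evaluated under several transforms and the predictions are merged. As a generic illustration only (the function names are made up, and the authors' actual pipeline may merge predictions differently), here is the box bookkeeping for horizontal-flip TTA:

```python
def hflip_boxes(boxes, img_w):
    """Map [x1, y1, x2, y2] boxes predicted on a horizontally
    flipped image back to original-image coordinates."""
    return [[img_w - x2, y1, img_w - x1, y2] for x1, y1, x2, y2 in boxes]

def merge_tta(orig_boxes, flip_boxes, img_w):
    """Average the original-pass and flipped-pass predictions.
    Assumes the i-th boxes correspond; real pipelines match the
    two sets (e.g. via NMS or score-weighted fusion) instead."""
    unflipped = hflip_boxes(flip_boxes, img_w)
    return [[(a + b) / 2 for a, b in zip(bo, bf)]
            for bo, bf in zip(orig_boxes, unflipped)]
```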
## How to use

We implement Co-DETR with [MMDetection V2.25.3](https://github.com/open-mmlab/mmdetection/releases/tag/v2.25.3) and [MMCV V1.5.0](https://github.com/open-mmlab/mmcv/releases/tag/v1.5.0). Please refer to our [GitHub repo](https://github.com/Sense-X/Co-DETR/tree/main) for more details.

### Training

Train Co-Deformable-DETR with a ResNet-50 backbone on 8 GPUs:
```shell
sh tools/dist_train.sh projects/configs/co_deformable_detr/co_deformable_detr_r50_1x_coco.py 8 path_to_exp
```
Train with Slurm:
```shell
sh tools/slurm_train.sh partition job_name projects/configs/co_deformable_detr/co_deformable_detr_r50_1x_coco.py path_to_exp
```

### Testing

Test Co-Deformable-DETR with a ResNet-50 backbone on 8 GPUs and evaluate the box AP:
```shell
sh tools/dist_test.sh projects/configs/co_deformable_detr/co_deformable_detr_r50_1x_coco.py path_to_checkpoint 8 --eval bbox
```
Test with Slurm:
```shell
sh tools/slurm_test.sh partition job_name projects/configs/co_deformable_detr/co_deformable_detr_r50_1x_coco.py path_to_checkpoint --eval bbox
```
51
+ ## Cite Co-DETR
52
+
53
+ If you find this repository useful, please use the following BibTeX entry for citation.
54
+
55
+ ```latex
56
+ @inproceedings{zong2023detrs,
57
+ title={Detrs with collaborative hybrid assignments training},
58
+ author={Zong, Zhuofan and Song, Guanglu and Liu, Yu},
59
+ booktitle={Proceedings of the IEEE/CVF international conference on computer vision},
60
+ pages={6748--6758},
61
+ year={2023}
62
+ }
63
+ ```