---
language: en
tags:
- bridgetower
license: mit
datasets:
- conceptual_captions
- sbu_captions
- visual_genome
- mscoco_captions
---

# BridgeTower base-itm model

The BridgeTower model was proposed in [BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning](https://arxiv.org/abs/2206.08657) by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, and Nan Duan.
The model was pretrained on English-language data using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in
[this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in
[this repository](https://github.com/microsoft/BridgeTower).

## Model description

The abstract from the paper is the following:

Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets.

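As a rough, purely illustrative sketch of the bridge-layer idea described in the abstract (the paper defines the exact formulation; this toy code is not the authors' implementation), a bridge layer can be thought of as fusing a top-layer uni-modal representation into the input of a cross-modal layer, for example by addition followed by layer normalization:

```python
import torch
import torch.nn as nn


class ToyBridgeLayer(nn.Module):
    """Illustrative only: fuse a uni-modal representation into a cross-modal
    layer input by addition + LayerNorm. The real BridgeTower bridge-layer
    design is specified in the paper and may differ from this sketch."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, cross_modal_input: torch.Tensor, uni_modal_repr: torch.Tensor) -> torch.Tensor:
        return self.norm(cross_modal_input + uni_modal_repr)


# Toy usage: one bridge per cross-modal layer, fed by a top uni-modal layer.
bridge = ToyBridgeLayer(hidden_size=768)
fused = bridge(torch.randn(1, 197, 768), torch.randn(1, 197, 768))
print(fused.shape)  # torch.Size([1, 197, 768])
```
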
## Intended uses & limitations

You can use the raw model for image and text retrieval.

### How to use

Here is how to use this model in PyTorch to score how well a text query matches each image in a directory and retrieve the best-matching image:
```python
import os
from glob import glob

import torch
from PIL import Image
from tqdm import tqdm
from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval

image_dir = "/datasets/COCO2017/val2017"
search_text = "a woman holding an umbrella"

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm")
model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm").to(device)
model.eval()

max_score = float("-inf")
best_match_image = None
image_paths = glob(os.path.join(image_dir, "*.jpg"))[:1000]

for image_path in tqdm(image_paths, smoothing=1):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(image, search_text, return_tensors="pt")

    # Move all tensor inputs to the same device as the model.
    inputs = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}

    with torch.no_grad():
        outputs = model(**inputs)

    # logits[0, 1] is the image-text matching ("match") score for this pair.
    score = outputs.logits[0, 1].item()

    if score > max_score:
        max_score = score
        best_match_image = image_path

print(max_score)
print(best_match_image)
```
### Limitations and bias

TODO

## Training data

The BridgeTower model was pretrained on four public image-caption datasets:
- [Conceptual Captions (CC)](https://ai.google.com/research/ConceptualCaptions/)
- [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/)
- [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf)
- [Visual Genome](https://visualgenome.org/)

The total number of unique images in the combined data is 4M.
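The list above matches the dataset IDs declared in this card's metadata (`conceptual_captions`, `sbu_captions`, `visual_genome`, `mscoco_captions`). As a convenience sketch (assuming these IDs are still hosted on the Hugging Face Hub in their usual form), you can inspect one of them with the `datasets` library:

```python
from datasets import load_dataset

# Assumes the Hub dataset ID and its columns are unchanged;
# sbu_captions typically provides an image URL plus one caption per image.
sbu = load_dataset("sbu_captions", split="train")
print(sbu[0]["caption"])
print(sbu[0]["image_url"])
```
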
## Training procedure

### Preprocessing

TODO

### Pretraining

The model was pretrained for 100k steps on 8 NVIDIA A100 GPUs with a batch size of 4096.
The optimizer used was AdamW with a learning rate of 1e-5. No data augmentation was applied apart from center-cropping. The image resolution during pretraining was set to 288 x 288.
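For readers who want to see what those hyperparameters look like in code, here is a minimal, illustrative PyTorch sketch; the numeric values mirror the description above, but the placeholder model and everything else are assumptions, not the authors' training script:

```python
import torch
from torchvision import transforms

# Values taken from the pretraining description above; the rest is illustrative.
IMAGE_SIZE = 288
GLOBAL_BATCH_SIZE = 4096
TOTAL_STEPS = 100_000
LEARNING_RATE = 1e-5

# Only center-cropping is used, per the card (no other augmentation).
preprocess = transforms.Compose([
    transforms.Resize(IMAGE_SIZE),
    transforms.CenterCrop(IMAGE_SIZE),
    transforms.ToTensor(),
])

# A placeholder module stands in for BridgeTower here.
model = torch.nn.Linear(IMAGE_SIZE * IMAGE_SIZE * 3, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)
```
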
## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

| Task | | | | | | | | |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | | | | | | | | |

### BibTeX entry and citation info

```bibtex
@article{xu2022bridge,
  title={Bridge-Tower: Building Bridges Between Encoders in Vision-Language Representation Learning},
  author={Xu, Xiao and
          Wu, Chenfei and
          Rosenman, Shachar and
          Lal, Vasudev and
          Che, Wanxiang and
          Duan, Nan},
  journal={arXiv preprint arXiv:2206.08657},
  year={2022}
}
```

<a href="https://huggingface.co/exbert/?model=BridgeTower/bridgetower-base">
  <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>