# ColQwen2-2B: Visual Retriever based on Qwen2-VL-2B-Instruct with ColBERT strategy

### This is the base version trained with batch_size 8x128 for 5 epochs and with the updated pad token

ColQwen is a model based on a novel architecture and training strategy that leverages Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).

This version was initialized from the untrained base model to guarantee deterministic projection layer initialization.

<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
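
To make the ColBERT-style late-interaction scoring concrete, here is a minimal sketch of the MaxSim operation between one query and one document (an illustration of the idea, not the library's implementation):

```python
import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_query_tokens, dim) multi-vector query embedding
    # doc_emb:   (num_doc_patches, dim)  multi-vector document embedding
    sim = query_emb @ doc_emb.T  # all pairwise token/patch similarities
    # For each query token, keep its best-matching patch, then sum over tokens.
    return sim.max(dim=1).values.sum()
```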

## Version specificity

This model takes dynamic image resolutions as input and does not resize them, so their aspect ratio is preserved (unlike in ColPali).
The maximal resolution is set so that at most 1024 image patches are created. Experiments show clear improvements with a larger number of image patches, at the cost of higher memory requirements.
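
As a rough sketch of that budget (the 28x28 pixels-per-patch figure below is an assumption based on Qwen2-VL's 14x14 pixel patches with 2x2 spatial merging, not something stated in this card):

```python
# Assumption: one merged visual token covers a 28x28 pixel area
# (14x14 pixel patches merged 2x2), as in Qwen2-VL's default processor.
MERGED_PATCH_AREA = 28 * 28   # pixels per image patch / visual token
MAX_PATCHES = 1024            # per-image patch budget from this card

max_pixels = MAX_PATCHES * MERGED_PATCH_AREA
print(max_pixels)             # 802816 pixels, i.e. at most a 896x896 square image
```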

This version is trained with `colpali-engine==0.3.4`.
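
A minimal usage sketch with `colpali-engine` (the checkpoint id below is a placeholder; substitute this repository's model id):

```python
import torch
from PIL import Image
from colpali_engine.models import ColQwen2, ColQwen2Processor

model_name = "vidore/colqwen2-v0.1"  # placeholder: use this repository's id

model = ColQwen2.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "cpu"
).eval()
processor = ColQwen2Processor.from_pretrained(model_name)

# Inputs: document page images and text queries.
images = [Image.new("RGB", (448, 448), color="white")]
queries = ["What is the total revenue reported in 2023?"]

batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)

with torch.no_grad():
    image_embeddings = model(**batch_images)
    query_embeddings = model(**batch_queries)

# Late-interaction (MaxSim) scores between every query and every page.
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```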

The data is the same as the ColPali data described in the paper (see the Dataset section below for the extension used here).

## Model Training

### Dataset

The dataset extends the original ColPali training set with QA pairs generated by Gemini 1.5 Flash on 35k images scraped from the internet.

*Note: Multilingual data is present in the pretraining corpus of the language model and, most probably, in the multimodal training data.*

### Parameters

We train models using low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=128` and `r=128` on the transformer layers of the language model,
as well as on the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8xH100 GPU setup with distributed data parallelism (via `accelerate`), a learning rate of 2e-4 with linear decay and 1% warmup steps, and a per-device batch size of 128, in `bfloat16` format.
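
For illustration, these hyperparameters roughly map to the following `peft`/`transformers` configuration (a hedged sketch, not the exact training script; `target_modules` and `output_dir` are assumptions not stated in this card):

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Assumption: LoRA targets the LM's attention and MLP projections; the card
# only says "the transformer layers of the language model".
lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
)

training_args = TrainingArguments(
    output_dir="./colqwen2-2b",       # placeholder
    per_device_train_batch_size=128,  # 8 GPUs x 128 = the 8x128 batch above
    num_train_epochs=5,
    learning_rate=2e-4,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,                # 1% warmup steps
    optim="paged_adamw_8bit",
    bf16=True,
)
```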