---
license: apache-2.0
base_model:
- lmsys/vicuna-7b-v1.5
- openai/clip-vit-large-patch14-336
---

# p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay

This is the official checkpoint repository for [p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay](https://arxiv.org/abs/2412.04449).
Please refer to [this repository](https://github.com/MCG-NJU/p-MoD) for our code.

## Model Description

This model is pretrained on [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) image caption data and instruction-tuned on [llava-v1_5-mix-665k](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json).

## Citation

TBD

## License

Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.