ZhangYuanhan committed on
Commit ef2fe63 · verified · 1 Parent(s): 4988776

Update README.md

Files changed (1): README.md (+9 -2)
README.md CHANGED
```diff
@@ -1,4 +1,5 @@
 ---
+inference: false
 license: llama2
 ---
 
@@ -9,27 +10,33 @@ license: llama2
 ## Model details
 
 **Model type:**
+<br>
 LLaVA-Next-Video is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data.
-Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5)
+<br>
+Base LLM: lmsys/vicuna-7b-v1.5
 
 **Model date:**
+<br>
 LLaVA-Next-Video-7B was trained in April 2024.
 
 **Paper or resources for more information:**
+<br>
 https://github.com/LLaVA-VL/LLaVA-NeXT
 
 ## License
 Llama 2 is licensed under the LLAMA 2 Community License,
 Copyright (c) Meta Platforms, Inc. All Rights Reserved.
 
-**Where to send questions or comments about the model:**
+## Where to send questions or comments about the model
 https://github.com/LLaVA-VL/LLaVA-NeXT/issues
 
 ## Intended use
 **Primary intended uses:**
+<br>
 The primary use of LLaVA is research on large multimodal models and chatbots.
 
 **Primary intended users:**
+<br>
 The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
 
 ## Training dataset
```