Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


vicuna-7b-v1.3-attention-sparsity-20 - GGUF
- Model creator: https://huggingface.co/wang7776/
- Original model: https://huggingface.co/wang7776/vicuna-7b-v1.3-attention-sparsity-20/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [vicuna-7b-v1.3-attention-sparsity-20.Q2_K.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q2_K.gguf) | Q2_K | 2.36GB |
| [vicuna-7b-v1.3-attention-sparsity-20.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [vicuna-7b-v1.3-attention-sparsity-20.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [vicuna-7b-v1.3-attention-sparsity-20.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q3_K.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q3_K.gguf) | Q3_K | 3.07GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [vicuna-7b-v1.3-attention-sparsity-20.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q4_0.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q4_0.gguf) | Q4_0 | 3.56GB |
| [vicuna-7b-v1.3-attention-sparsity-20.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q4_K.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q4_K.gguf) | Q4_K | 3.8GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q4_1.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q4_1.gguf) | Q4_1 | 3.95GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q5_0.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q5_0.gguf) | Q5_0 | 4.33GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q5_K.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q5_K.gguf) | Q5_K | 4.45GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q5_1.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q5_1.gguf) | Q5_1 | 4.72GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q6_K.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q6_K.gguf) | Q6_K | 5.15GB |
| [vicuna-7b-v1.3-attention-sparsity-20.Q8_0.gguf](https://huggingface.co/RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf/blob/main/vicuna-7b-v1.3-attention-sparsity-20.Q8_0.gguf) | Q8_0 | 6.67GB |
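
To grab one of the files above, here is a minimal sketch using the `huggingface_hub` Python client. The repo id and filename come straight from the table; any other quant from the list works the same way.

```python
from huggingface_hub import hf_hub_download

# Download a single quant from this repo; pick any filename listed in the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/wang7776_-_vicuna-7b-v1.3-attention-sparsity-20-gguf",
    filename="vicuna-7b-v1.3-attention-sparsity-20.Q4_K_M.gguf",
)
print(gguf_path)  # local path to the .gguf file
```

The downloaded `.gguf` file can then be loaded by any GGUF-compatible runtime such as llama.cpp.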


Original model description:
---
inference: false
license: apache-2.0
---
# Overview
This model has been pruned to 20% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/lmsys/vicuna-7b-v1.3).
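
For readers who want a feel for the pruning step, below is a minimal, illustrative sketch of the Wanda scoring rule (weight magnitude scaled by input-activation norm, compared per output row) as described in the paper. This is not the code used to produce this checkpoint; the function name, shapes, and calling convention are hypothetical, and the paper's reference implementation additionally handles calibration data, per-layer hooks, and structured sparsity.

```python
import torch

def wanda_prune_linear(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.2) -> torch.Tensor:
    """Illustrative Wanda-style pruning of one linear layer's weight matrix.

    weight:   (out_features, in_features) weight, e.g. an attention q/k/v/o projection
    act_norm: (in_features,) L2 norm of each input feature over a small calibration set
    sparsity: fraction of weights to zero out per output row (0.2 == 20% sparsity)
    """
    # Wanda score: |W_ij| * ||X_j||_2  (weight magnitude scaled by input activation norm)
    score = weight.abs() * act_norm.unsqueeze(0)

    # Within each output row, drop the weights with the smallest scores.
    n_prune = int(weight.shape[1] * sparsity)
    prune_idx = torch.topk(score, n_prune, dim=1, largest=False).indices

    mask = torch.ones_like(weight)
    mask.scatter_(1, prune_idx, 0.0)
    return weight * mask
```

Per the overview above, this rule was applied at 20% sparsity to the attention projection layers only, with no retraining or weight updates afterwards.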


# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
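
The links above cover the FastChat CLI and API servers. As an alternative, here is a minimal sketch of loading the original (non-GGUF) checkpoint directly with Hugging Face `transformers`. The prompt follows the common Vicuna v1.1-style template; confirm the canonical template and generation settings in the FastChat docs, and note that `device_map="auto"` assumes `accelerate` is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wang7776/vicuna-7b-v1.3-attention-sparsity-20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt template (check the FastChat docs for the canonical format).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What is the capital of France? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```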

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and the [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).

## Differences between Vicuna versions

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).