Commit b3960f9 (0 parents)

Duplicate from localmodels/LLM
Changed files:
- .gitattributes +34 -0
- README.md +36 -0
- llama-30b.ggmlv3.q2_K.bin +3 -0
- llama-30b.ggmlv3.q3_K_L.bin +3 -0
- llama-30b.ggmlv3.q3_K_M.bin +3 -0
- llama-30b.ggmlv3.q3_K_S.bin +3 -0
- llama-30b.ggmlv3.q4_0.bin +3 -0
- llama-30b.ggmlv3.q4_1.bin +3 -0
- llama-30b.ggmlv3.q4_K_M.bin +3 -0
- llama-30b.ggmlv3.q4_K_S.bin +3 -0
- llama-30b.ggmlv3.q5_0.bin +3 -0
- llama-30b.ggmlv3.q5_1.bin +3 -0
- llama-30b.ggmlv3.q5_K_M.bin +3 -0
- llama-30b.ggmlv3.q5_K_S.bin +3 -0
- llama-30b.ggmlv3.q6_K.bin +3 -0
- llama-30b.ggmlv3.q8_0.bin +3 -0
.gitattributes
ADDED
@@ -0,0 +1,34 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
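The patterns above route matching files through the Git LFS filter. A minimal sketch of which filenames they catch, using Python's `fnmatch` as a stand-in for Git's attribute matching (Git's real rules differ for path globs like `saved_model/**/*`, so only the simple `*.ext` patterns are modeled here):

```python
# Approximate the simple "*.ext"-style .gitattributes patterns with fnmatch.
# This is an illustration, not Git's actual attribute-matching algorithm.
from fnmatch import fnmatch

lfs_patterns = ["*.7z", "*.bin", "*.gz", "*.safetensors", "*.zip", "*tfevents*"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if any LFS pattern matches the filename."""
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(is_lfs_tracked("llama-30b.ggmlv3.q4_0.bin"))  # True
print(is_lfs_tracked("README.md"))                  # False
```

This is why every `.bin` model file in this commit is stored as a small LFS pointer rather than as raw blob data.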
README.md
ADDED
@@ -0,0 +1,36 @@
+---
+duplicated_from: localmodels/LLM
+---
+# LLaMA 30B ggml
+
+From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai
+
+---
+
+### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+Quantized using an older version of llama.cpp; compatible with llama.cpp as of the May 19 commit 2d5db48.
+
+### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+Quantization methods compatible with the latest llama.cpp, as of the June 6 commit 2d43387.
+
+---
+
+## Provided files
+| Name | Quant method | Bits | Size | Max RAM required | Use case |
+| ---- | ---- | ---- | ---- | ---- | ----- |
+| llama-30b.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, and GGML_TYPE_Q2_K for the other tensors. |
+| llama-30b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-30b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+| llama-30b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+| llama-30b.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original quant method, 4-bit. |
+| llama-30b.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+| llama-30b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+| llama-30b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+| llama-30b.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
+| llama-30b.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| llama-30b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+| llama-30b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+| llama-30b.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors (6-bit quantization). |
+| llama-30b.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
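The table's "Size" and "Max RAM required" columns can be cross-checked against the byte counts recorded in the LFS pointer files below. A small sketch, assuming (from the numbers, not from anything the card states) that sizes are decimal GB (bytes / 1e9) and that "Max RAM required" is the file size plus a flat ~2.5 GB of overhead:

```python
# Cross-check "Size" and "Max RAM required" against the LFS pointer byte
# counts. The decimal-GB convention and the flat 2.5 GB overhead are
# inferred assumptions, not documented by the model card.
pointer_sizes = {
    "q2_K": 13600299392,   # from llama-30b.ggmlv3.q2_K.bin's pointer
    "q4_0": 18300766592,   # from llama-30b.ggmlv3.q4_0.bin's pointer
    "q8_0": 34564835712,   # from llama-30b.ggmlv3.q8_0.bin's pointer
}

for name, nbytes in pointer_sizes.items():
    size_gb = nbytes / 1e9
    max_ram_gb = size_gb + 2.5
    print(f"{name}: {size_gb:.2f} GB file, ~{max_ram_gb:.2f} GB max RAM")
```

Under those assumptions the computed values match the table rows (e.g. q2_K: 13.60 GB file, 16.10 GB RAM).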
llama-30b.ggmlv3.q2_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f416b9174ae3d4f4c6f615069989a9757003cdbb67565aefe922ec46474a3445
+size 13600299392
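Each model file in this commit is stored as a three-line Git LFS pointer like the one above, following the `version` / `oid` / `size` layout of the Git LFS pointer spec. A minimal reader for that format (not a full validator):

```python
# Parse a git-lfs pointer file into its version, sha256, and size fields.
# Minimal sketch: assumes a well-formed three-line pointer, no validation.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f416b9174ae3d4f4c6f615069989a9757003cdbb67565aefe922ec46474a3445
size 13600299392
"""
info = parse_lfs_pointer(pointer)
print(info["size_bytes"])  # 13600299392
```

The `oid` is the SHA-256 of the real blob, which is what the LFS server uses to locate the 13.6 GB file this pointer stands in for.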
llama-30b.ggmlv3.q3_K_L.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b88a80b16e4f133e66172c2e3f3b457cab007c89b2402b26357d254c7522dcf7
+size 17196269952
llama-30b.ggmlv3.q3_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c15b085d39f8e517072e1b791b1445261784b3a1c97e45f96dfa9efbb513619
+size 15637168512
llama-30b.ggmlv3.q3_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1884627dcecb6f4535dce9f29dc1008e8faa75e4a95c509982c4cb47b7e0c38
+size 13980623232
llama-30b.ggmlv3.q4_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2a441403944819492ec8c2002cc36fa38468149bfb4b7b4c52afc7bd9a7166d
+size 18300766592
llama-30b.ggmlv3.q4_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b56558f9c8a3aac9b1604985896ccf5524d4760ad0ca7dbf698e8427453f934
+size 20333775232
llama-30b.ggmlv3.q4_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3319050a65461bc266031aaca37d5bee15d6465883877f678db0fb4f20b34d12
+size 19565939072
llama-30b.ggmlv3.q4_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28e0173dc34e95a0c18d4665ae58a377bed0219c9d6968627e0d8f87fc812f4f
+size 18300766592
llama-30b.ggmlv3.q5_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fbdf69839c85867c7244bc391e7a3a814183dc852338720ea7180140e6db424
+size 22366783872
llama-30b.ggmlv3.q5_1.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29a834cc7efa4b1feff64b39a75c9aaae6184d2b558ac1ec8ef9010c5b0ff9d1
+size 24399792512
llama-30b.ggmlv3.q5_K_M.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a147aeaa78424e000a6cac8e4989fe80e6af56da32d23a32b4163e484ffb4bb5
+size 23018539392
llama-30b.ggmlv3.q5_K_S.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62aea9b2e50f21eeb846f98592d63f49b76e3786f1b2d364f7eef36f0489dede
+size 22366783872
llama-30b.ggmlv3.q6_K.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b26827284d0da9f2b800f33f82879b0d67fb711f2e433df221cf57ed2d7c65c6
+size 26686927232
llama-30b.ggmlv3.q8_0.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94e3685c5e9fbc1534ed0e80184ac81f3524de1f6800d26aea0662e194eab12b
+size 34564835712