Update README.md
README.md
This is not a model I made myself; it is Google's [Gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), quantized with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).

I quantized it to 4-bit; your GPU should have at least 8 GB of VRAM to guarantee it runs smoothly.

From the tests I have run on this AWQ model, it performs remarkably well.
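If you want to try it out, here is a minimal loading sketch using AutoAWQ and transformers. The repo ID below is only a placeholder for this model's path, and layer fusion is left disabled here since fused-layer support varies by architecture.

```python
# Minimal loading sketch for this 4-bit AWQ quant (placeholder repo ID below).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder: replace with this repository's actual model ID or a local path.
model_path = "<this-repo-id>"

# Load the quantized weights; roughly 8 GB of VRAM should be enough for 4-bit.
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=False)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Gemma-2-9b-it is an instruction-tuned chat model, so build the prompt with its chat template.
messages = [{"role": "user", "content": "Explain AWQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Recent versions of transformers can usually also load AWQ checkpoints directly through `AutoModelForCausalLM.from_pretrained`, as long as the `autoawq` package is installed.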
Below is the original model card. Hope you guys have fun with it.