- Why so few 8 bit capable models? · 1 reply · #13 opened over 1 year ago by ibivibiv
- Can Run "gptq_model-4bit--1g" but not "gptq-4bit-32g-actorder_True" · #12 opened over 1 year ago by 0-hero
- comparison with bitsandbytes nf4, hope to increase GPTQ accuracy · 12 replies · #11 opened over 1 year ago by AIReach
- Minimum VRAM? · 7 replies · #9 opened over 1 year ago by hierholzer
- GGML version possible/coming? · 2 replies · #8 opened over 1 year ago by Thireus
- vram requirements · 1 reply · #5 opened over 1 year ago by joujiboi