iproskurina's Collections

Quantized LLMs
All entries are Text Generation models quantized with GPTQ; a loading sketch follows the list.

- iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128
- iproskurina/bloom-7b1-GPTQ-4bit-g128
- iproskurina/bloom-1b7-GPTQ-4bit-g128
- iproskurina/bloom-3b-GPTQ-4bit-g128
- iproskurina/bloom-560m-GPTQ-4bit-g128
- iproskurina/bloom-1b1-GPTQ-4bit-g128
- iproskurina/opt-2.7b-GPTQ-4bit-g128
- iproskurina/opt-13b-GPTQ-4bit-g128
- iproskurina/opt-6.7b-GPTQ-4bit-g128
- iproskurina/opt-125m-GPTQ-4bit-g128
- iproskurina/opt-350m-GPTQ-4bit-g128
- iproskurina/opt-1.3b-GPTQ-4bit-g128
- iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g128
- iproskurina/Mistral-7B-v0.3-GPTQ-8bit-g128
- iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64
- iproskurina/Mistral-7B-v0.1-GPTQ-8bit-g64
- iproskurina/Mistral-7B-v0.1-GPTQ-4bit-g128
- iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g128
- TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
- TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
- TheBloke/bloomz-176B-GPTQ
- TheBloke/BLOOMChat-176B-v1-GPTQ
- TheBloke/Llama-2-13B-chat-GPTQ
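These GPTQ checkpoints can typically be loaded directly with the Hugging Face transformers library, provided a GPTQ backend (for example, optimum with auto-gptq) is installed. A minimal sketch, assuming the 4-bit Mistral-7B-v0.3 repository from the list above and a CUDA-capable GPU:

```python
# Minimal sketch: load a 4-bit GPTQ checkpoint from the list above and generate text.
# Assumes transformers, optimum, and a GPTQ backend (e.g. auto-gptq) are installed
# and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/Mistral-7B-v0.3-GPTQ-4bit-g128"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ settings (bits, group size) are read from the repository's quantization config,
# so no extra quantization arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Quantization reduces memory usage by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the quantized weights and their configuration ship with the repository, the model loads in 4-bit form without re-running any calibration step.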
Paper: When Quantization Affects Confidence of Large Language Models? (arXiv:2405.00632)
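The repository names encode the GPTQ settings, e.g. "4bit-g128" for 4-bit weights with group size 128. Below is a hedged sketch of how such a checkpoint could be produced with transformers' GPTQConfig; the base checkpoint and calibration dataset are illustrative assumptions, not details taken from this page.

```python
# Hedged sketch: produce a GPTQ checkpoint in the "4bit-g128" style of the names above.
# The base checkpoint (mistralai/Mistral-7B-v0.3) and the calibration dataset ("c4")
# are assumptions chosen for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "mistralai/Mistral-7B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# 4-bit weights, quantization scales shared across groups of 128 weights.
gptq_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",
    quantization_config=gptq_config,  # calibrates and quantizes the weights at load time
)

model.save_pretrained("Mistral-7B-v0.3-GPTQ-4bit-g128")
tokenizer.save_pretrained("Mistral-7B-v0.3-GPTQ-4bit-g128")
```

Smaller group sizes store more quantization scales per weight matrix and generally preserve accuracy better at low bit widths, which is the usual motivation for pairing a coarser bit width with a smaller group size (as in the 3-bit g64 repositories listed above).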