Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).

For this quantization, we used 2 codebooks of 8 bits each, i.e. the `2x8` scheme in the model name.
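For reference, here is how the effective bitwidth works out, assuming AQLM's usual group size of 8 weights per code (a detail not stated in this card):

$$
\frac{2 \text{ codebooks} \times 8 \text{ bits per code}}{8 \text{ weights per group}} = 2 \text{ bits per weight}
$$

This matches the "2Bit" in the repository name.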

Results:

| Model                                | Quantization | MMLU (5-shot) | Model size, GB |
|--------------------------------------|--------------|---------------|----------------|
| `mistralai/Mistral-7B-Instruct-v0.2` | None         | 0.5912        | 14.5           |
|                                      | 2x8          | 0.4384        | 2.3            |
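To try the model, something like the following should work. This is a minimal sketch, assuming the `aqlm` package is installed (e.g. `pip install aqlm[gpu]`) and a `transformers` version with AQLM support; the prompt text is a placeholder.

```python
# Minimal usage sketch: load the AQLM-quantized checkpoint with transformers.
# Assumes `aqlm` is installed and a CUDA GPU is available for the AQLM kernels.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Mistral-7B-Instruct-v0.2-AQLM-2Bit-2x8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's mixed FP16/INT8 tensors
    device_map="auto",    # place the model on the available GPU(s)
)

# Mistral-Instruct expects its chat template; apply it before generating.
messages = [{"role": "user", "content": "What is additive quantization?"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```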