BitsAndBytes 4-bit quantization of DeepSeek-R1-Distill-Qwen-14B, built from commit 123265213609ea67934b1790bbb0203d3c50f54f of the base model.
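A checkpoint like this can be loaded with `transformers` plus `bitsandbytes`. The sketch below is an assumption-laden example, not shipped usage instructions: the `bnb_4bit_quant_type` and compute dtype shown are common defaults (`nf4`, fp16) and should be checked against the quantization settings stored in the checkpoint's `config.json`; pre-quantized BnB checkpoints usually carry their own `quantization_config`, in which case passing one explicitly is unnecessary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "MPWARE/DeepSeek-R1-Distill-Qwen-14B-BnB-4bits"

# Assumed 4-bit settings -- verify against the checkpoint's own
# quantization_config before relying on them.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # may be redundant for pre-quantized weights
    device_map="auto",               # requires accelerate; places layers on GPU/CPU
)
```

Loading a 14B model in 4-bit typically fits in roughly 10 GB of GPU memory, which is the main point of this quantization.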

Format: Safetensors · Model size: 8.37B params · Tensor types: FP16, F32, U8
Inference: not currently available via any supported Inference Providers, and not deployable to the HF Inference API (the model has no library tag).

Model tree: MPWARE/DeepSeek-R1-Distill-Qwen-14B-BnB-4bits is one of 102 quantized versions of the base model.