# medmekk/DeepSeek-R1-Distill-Qwen-1.5B.GGUF

GGUF quantized versions of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).

## Available Formats:

- `Q2_K`: DeepSeek-R1-Distill-Qwen-1.5B-Q2_K.gguf
- `Q3_K_S`: DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_S.gguf
- `Q3_K_M`: DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf
- `Q3_K_L`: DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_L.gguf
- `Q4_0`: DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf
- `Q4_K_S`: DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_S.gguf
- `Q4_K_M`: DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf
- `Q5_0`: DeepSeek-R1-Distill-Qwen-1.5B-Q5_0.gguf
- `Q5_K_S`: DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_S.gguf
- `Q5_K_M`: DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M.gguf
- `Q6_K`: DeepSeek-R1-Distill-Qwen-1.5B-Q6_K.gguf
- `Q8_0`: DeepSeek-R1-Distill-Qwen-1.5B-Q8_0.gguf
- `IQ3_M_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-IQ3_M_imat.gguf
- `IQ3_XXS_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-IQ3_XXS_imat.gguf
- `Q4_K_M_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M_imat.gguf
- `Q4_K_S_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_S_imat.gguf
- `IQ4_NL_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-IQ4_NL_imat.gguf
- `IQ4_XS_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-IQ4_XS_imat.gguf
- `Q5_K_M_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_M_imat.gguf
- `Q5_K_S_IMAT`: DeepSeek-R1-Distill-Qwen-1.5B-Q5_K_S_imat.gguf

## Usage with llama.cpp:

Replace `MODEL_FILE` with one of the filenames listed above (e.g. `DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf`); llama.cpp will download it from the Hub on first use.

```bash
# CLI:
llama-cli --hf-repo medmekk/DeepSeek-R1-Distill-Qwen-1.5B.GGUF --hf-file MODEL_FILE -p "Your prompt"

# Server:
llama-server --hf-repo medmekk/DeepSeek-R1-Distill-Qwen-1.5B.GGUF --hf-file MODEL_FILE -c 2048
```
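
If you'd rather fetch a quant manually than rely on the `--hf-repo`/`--hf-file` auto-download, a minimal sketch using `huggingface-cli` (assuming the `huggingface_hub` package is installed; the `Q4_K_M` file below is just one choice, any filename from the list above works):

```bash
# Download a single quantized file into the current directory
huggingface-cli download medmekk/DeepSeek-R1-Distill-Qwen-1.5B.GGUF \
  DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf --local-dir .

# Point llama.cpp at the local file instead of the Hub repo
llama-cli -m ./DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf -p "Your prompt"
```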
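
Once `llama-server` is running, it serves an HTTP API; recent llama.cpp builds expose an OpenAI-compatible chat endpoint. A sketch of a query, assuming the default host and port (`localhost:8080`):

```bash
# Send a chat request to the running llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Your prompt"}],
        "max_tokens": 128
      }'
```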