Create README.md

#1
by pcuenq - opened
Files changed (1)
  1. README.md +23 -0
README.md ADDED
@@ -0,0 +1,23 @@
+ ---
+ license: mit
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - conversational
+ - mlx
+ base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
+ ---
+
+ # DeepSeek-R1-Distill-Qwen-32B-Q2-6
+
+ This model was converted to MLX from [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B), using mixed 2/6-bit quantization. This scheme preserves quality much better than standard 2-bit quantization.
+
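+ The conversion can be reproduced with `mlx_lm.convert`. The sketch below assumes a recent `mlx-lm` release that exposes mixed-precision recipes through a `--quant-predicate` option; the recipe name `mixed_2_6` is an assumption, so check `python -m mlx_lm.convert --help` for what your version supports.
+
+ ```bash
+ # Sketch only: --quant-predicate and the mixed_2_6 recipe name are assumptions;
+ # verify against `python -m mlx_lm.convert --help` for your mlx-lm version.
+ python -m mlx_lm.convert \
+     --hf-path deepseek-ai/DeepSeek-R1-Distill-Qwen-32B \
+     --mlx-path DeepSeek-R1-Distill-Qwen-32B-Q2-6 \
+     -q --quant-predicate mixed_2_6
+ ```
+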
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```bash
+ python -m mlx_lm.chat --model pcuenq/DeepSeek-R1-Distill-Qwen-32B-Q2-6 --max-tokens 10000 --temp 0.6 --top-p 0.7
+ ```
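+
+ For non-interactive, one-shot generation, `mlx_lm.generate` can be used with the same sampling settings (the prompt below is just an example):
+
+ ```bash
+ python -m mlx_lm.generate --model pcuenq/DeepSeek-R1-Distill-Qwen-32B-Q2-6 \
+     --prompt "Write a haiku about quantization." \
+     --max-tokens 10000 --temp 0.6 --top-p 0.7
+ ```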