lemonilia committed (verified)
Commit a7b0983 · 1 Parent(s): 239472f

Update README.md

Files changed (1): README.md (+10 −11)
README.md CHANGED
@@ -9,16 +9,16 @@ base_model:
 ---
 
 # Mistral-Small-3-Reasoner-s1
- A simple [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) finetune on the [s1K reasoning dataset by Muennighoff et al.](https://huggingface.co/datasets/simplescaling/s1K) to give the original model basic reasoning capabilities similar to DeepSeek R1's. Surprisingly, they appear to work even outside math/STEM subjects.
+ A simple [Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) finetune on the [s1K reasoning dataset by Muennighoff et al.](https://huggingface.co/datasets/simplescaling/s1K) to give the original model basic reasoning capabilities within `<think>` tags, like DeepSeek R1. Surprisingly, the model can reason even outside math/STEM subjects.
 
 ## Usage notes
- Prepend the assistant response with `<think>` to make the model engage in a chain-of-thought. This should happen automatically with math questions on an empty context, but it needs to be forced in longer conversations. When done thinking, the model will generate `</think>` and then the final response.
+ Prepend the assistant response with `<think>` to make the model engage in a chain of thought. This should happen automatically with math questions on an empty context, but it needs to be forced in longer conversations. When done thinking, it will generate `</think>` and then the final response.
 
- Make sure that the models' output length is long enough, and be prepared to make the model continue its response if it stops prematurely.
+ Make sure that the model's output length is long enough; be prepared to make it continue its response if it stops prematurely.
 
- Low-depth instructions (perhaps at depth-0, just before the assistant's rsponse) can be beneficial in steering how the model should think. An additional `[SYSTEM_PROMPT]` can be used there.
+ Low-depth instructions (perhaps at depth 0, just before the assistant's response) can be beneficial in steering how the model should think. An additional `[SYSTEM_PROMPT]` could be used there.
 
- I advise to remove old chain-of-thoughts from the chat history before the model generates a new one, but I haven't tested this in depth.
+ I recommend removing old chains of thought from the chat history before the model generates a new one, but I haven't tested this in depth.
 
 ## Prompting format
 ```
@@ -30,10 +30,9 @@ Model response.</s>
 ```
 
 ## Known quirks and issues
- - Very long chain-of-thoughts might confuse the model during multi-turn conversations.
- - The model can override the requested formatting/style with that of the finetuning data.
- - Some of the non-reasoning capabilities of the original `Mistral-Small-24B-Instruct-2501` model might have degraded.
- - Most default guardrails apparently still work, but can be bypassed with a suitable prompt as with the original model.
+ - Long chains of thought can confuse the model during prolonged multi-turn conversations (note that the finetuning data was exclusively single-turn).
+ - Besides multi-turn capabilities, other non-reasoning capabilities of the original `Mistral-Small-24B-Instruct-2501` model might have degraded.
+ - Most default guardrails apparently still work, but can be very easily bypassed with a suitable prompt, as with the original model.
 
 # What's in this repository
 - Checkpoints for epochs 1~5
@@ -41,10 +40,10 @@ Model response.</s>
 - Some static GGUF quantizations
 
 ## Dataset
- Almost the entirety of the [s1K dataset](https://huggingface.co/datasets/simplescaling/s1K) was used with minimal modifications to make it properly work with Mistral-Small-3-Instruct, with the exception of 4 rows that didn't fit within the training sequence length of 8192 tokens and 16 of the shortest ones that have been used as the test set instead. No samples have been clipped and no system prompt was added. All were single-turn.
+ Almost the entirety of the [s1K dataset](https://huggingface.co/datasets/simplescaling/s1K) was used with minimal modifications to make it work properly with Mistral-Small-3-Instruct, except for 4 rows that didn't fit within the training sequence length of 8192 tokens and the 16 shortest ones, which were used as the test set instead. No samples were clipped and no system prompt was added. All were single-turn.
 
 ## Training hyperparameters
- I tried to roughly follow indications from [appendix C in the paper](https://arxiv.org/abs/2501.19393) with the notable exception of using 4-bit LoRA finetuning and tuning the learning rate accordingly. The loss was not computed on questions within `[INST]...[/INST]` tags (including the tags), just on reasoning traces and solutions. The training sequence length was about the maximum I could use on one NVidia RTX3090 24GB GPU. The total training time was about 18 hours.
+ I tried to roughly follow the indications from [appendix C of the paper](https://arxiv.org/abs/2501.19393), with the notable exception of using 4-bit LoRA finetuning and tuning the learning rate accordingly. The loss was not computed on the questions within `[INST]...[/INST]` tags (including the tags), only on reasoning traces and solutions. The training sequence length was close to the maximum I could fit on one NVIDIA RTX 3090 24GB GPU. The total training time was about 18 hours.
 
 Overfitting (increasing eval loss) occurred after 1 epoch, but the training loss behavior was similar to that observed in the paper.
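For reference, the `<think>` prefill described in the usage notes can be exercised with a plain completion-style prompt. Below is a minimal sketch using `transformers`, assuming the `[INST]` template shown in the Prompting format section; the repo id and the generation budget are illustrative, not taken from the commit:

```python
# Minimal sketch: force a chain of thought by prefilling "<think>" at the
# start of the assistant turn. Assumes the [INST] prompting format from
# the README; the repo id below is a placeholder for the actual checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemonilia/Mistral-Small-3-Reasoner-s1"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prefilling "<think>" after [/INST] makes the model reason before answering.
prompt = "[INST]How many positive divisors does 360 have?[/INST]<think>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096)  # generous budget
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```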
 
 
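The note about premature stops suggests resuming generation when the output runs out of token budget before finishing. One naive way to do that, reusing `tokenizer` and `model` from the sketch above (the round/chunk bookkeeping is my assumption of what "continue its response" means in practice):

```python
# Naive continuation loop: keep generating until the model emits its
# end-of-sequence token or the round budget is exhausted.
def generate_full(prompt: str, max_rounds: int = 3, chunk: int = 2048) -> str:
    text = prompt
    for _ in range(max_rounds):
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=chunk)
        new_ids = out[0][inputs["input_ids"].shape[1]:]
        text += tokenizer.decode(new_ids, skip_special_tokens=True)
        if tokenizer.eos_token_id in new_ids.tolist():
            break  # the model finished its response on its own
    return text
```

Re-tokenizing the whole transcript each round is simple but quadratic in context length; a production setup would reuse the KV cache instead.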
 
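For the advice about removing old chains of thought from the chat history before generating a new one, a simple filter over assistant messages could look like this (untested, per the README's own caveat):

```python
import re

# Drop previous <think>...</think> blocks from assistant turns so only
# final answers remain in the context before requesting a new one.
THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_old_cot(messages: list[dict]) -> list[dict]:
    return [
        {**m, "content": THINK_RE.sub("", m["content"])}
        if m["role"] == "assistant"
        else m
        for m in messages
    ]
```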
 
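The Dataset paragraph describes dropping 4 over-length rows and holding out the 16 shortest as a test set. A sketch of that kind of split follows; how a row is rendered into training text is a placeholder here (check the dataset card for the real fields), and `tokenizer` is reused from the first sketch:

```python
from datasets import load_dataset

# Sketch of the described split: drop rows exceeding the 8192-token
# training sequence length, hold out the 16 shortest rows as the test set.
ds = load_dataset("simplescaling/s1K", split="train")

def row_to_text(row) -> str:
    # Placeholder rendering of a row to text; not specified in the commit.
    return str(row)

lengths = [len(tokenizer(row_to_text(r)).input_ids) for r in ds]
keep = [i for i, n in enumerate(lengths) if n <= 8192]  # drops 4 rows
keep.sort(key=lambda i: lengths[i])
test_ds = ds.select(keep[:16])           # the 16 shortest rows
train_ds = ds.select(sorted(keep[16:]))  # everything else, in order
```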
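On the training side, not computing loss on the `[INST]...[/INST]` span corresponds to standard label masking with the `-100` ignore index used by PyTorch's cross-entropy loss (and Hugging Face trainers). A minimal sketch, with function name and prompt-length bookkeeping being mine rather than the author's:

```python
import torch

# Mask the question span ([INST]...[/INST], tags included) out of the
# labels so the loss covers only reasoning traces and solutions.
def build_labels(input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:
    labels = input_ids.clone()
    labels[:prompt_len] = -100  # -100 tokens are ignored by the loss
    return labels
```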