---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- simplescaling/s1K-1.1
base_model:
- Qwen/Qwen2.5-32B-Instruct
library_name: transformers
---
# Model Summary
> s1.1 is our successor to [s1](https://huggingface.co/simplescaling/s1-32B) with better reasoning performance, achieved by leveraging reasoning traces from r1 instead of Gemini.
- **Logs:** https://wandb.ai/hashimoto-group/o1/runs/m1ilia77/overview
- **Repository:** [simplescaling/s1](https://github.com/simplescaling/s1)
- **Paper:** https://arxiv.org/abs/2501.19393
This model is a successor of [s1-32B](https://huggingface.co/simplescaling/s1-32B) with slightly better performance. Thanks to [Ryan Marten](https://huggingface.co/ryanmarten) for helping generate r1 traces for s1K.
# Use
The model usage is documented [here](https://github.com/simplescaling/s1?tab=readme-ov-file#inference).
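For a quick start, here is a minimal inference sketch with the `transformers` library. It assumes the Hub model id `simplescaling/s1.1-32B`; the linked repository remains the authoritative reference for the supported inference setup.

```python
# Minimal inference sketch; assumes the model id "simplescaling/s1.1-32B".
# See the s1 repository linked above for the supported inference setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simplescaling/s1.1-32B"  # assumed Hub id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many r's are in \"raspberry\"?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Give the model room for a long reasoning trace before the final answer.
output_ids = model.generate(input_ids, max_new_tokens=4096, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```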
# Evaluation
| Metric | s1-32B | s1.1-32B | o1-preview | o1 | DeepSeek-R1 | DeepSeek-R1-Distill-Qwen-32B |
|---|---|---|---|---|---|---|
| # examples | 1K | 1K | ? | ? | >800K | 800K |
| AIME2024 | 56.7 | 56.7 | 40.0 | 74.4 | 79.8 | 72.6 |
| AIME2025 I | 26.7 | 60.0 | 37.5 | ? | 65.0 | 46.1 |
| MATH500 | 93.0 | 95.4 | 81.4 | 94.8 | 97.3 | 94.3 |
| GPQA-Diamond | 59.6 | 63.6 | 75.2 | 77.3 | 71.5 | 62.1 |

Note that s1-32B and s1.1-32B use budget forcing in this table; specifically, ignoring the end-of-thinking delimiter and appending "Wait" once or twice to extend the reasoning trace.
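For illustration, a rough sketch of such a budget-forcing loop with vLLM is shown below. The `<|im_start|>`-style delimiters and the model id are assumptions made for the example; the exact prompt format and evaluation code are defined in the s1 repository.

```python
# Illustrative budget-forcing sketch; not the exact s1 evaluation code.
# The delimiters below are assumed Qwen-style markers; see the s1 repository.
from vllm import LLM, SamplingParams

llm = LLM(model="simplescaling/s1.1-32B")  # assumed Hub id
END_OF_THINKING = "<|im_start|>answer"     # assumed end-of-thinking marker

prompt = (
    "<|im_start|>user\nHow many r's are in \"raspberry\"?<|im_end|>\n"
    "<|im_start|>assistant\n<|im_start|>think\n"
)

# Ignore the end-of-thinking delimiter and append "Wait" once or twice,
# forcing the model to keep reasoning instead of answering immediately.
for _ in range(2):
    out = llm.generate([prompt], SamplingParams(max_tokens=8192, stop=[END_OF_THINKING]))[0]
    prompt += out.outputs[0].text + "\nWait"

# Final pass: let the model close its reasoning and produce the answer.
out = llm.generate([prompt], SamplingParams(max_tokens=8192))[0]
print(out.outputs[0].text)
```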