---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
  - simplescaling/s1K-1.1
base_model:
  - Qwen/Qwen2.5-32B-Instruct
library_name: transformers
---

# Model Summary

s1.1 is our successor to s1, with better reasoning performance achieved by leveraging reasoning traces from r1 instead of Gemini.

This model is a successor of s1-32B with slightly better performance. Thanks to Ryan Marten for helping generate r1 traces for s1K.

# Use

The model usage is documented here.
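
As a quick start, the sketch below loads the model with Hugging Face transformers. It is an illustrative example rather than the documented usage; the prompt and generation settings (e.g. `max_new_tokens`) are assumptions, and a 32B model requires a multi-GPU or otherwise high-memory setup.

```python
# Minimal sketch (not the official usage script) for running s1.1-32B
# with transformers. Settings here are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simplescaling/s1.1-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many r's are in 'raspberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Leave generous room for the reasoning trace before the final answer.
outputs = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```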

# Evaluation

| Metric | s1-32B | s1.1-32B | o1-preview | o1 | DeepSeek-R1 | DeepSeek-R1-Distill-Qwen-32B |
|---|---|---|---|---|---|---|
| # examples | 1K | 1K | ? | ? | >800K | 800K |
| AIME2024 | 56.7 | 56.7 | 40.0 | 74.4 | 79.8 | 72.6 |
| AIME2025 I | 26.7 | 60.0 | 37.5 | ? | 65.0 | 46.1 |
| MATH500 | 93.0 | 95.4 | 81.4 | 94.8 | 97.3 | 94.3 |
| GPQA-Diamond | 59.6 | 63.6 | 75.2 | 77.3 | 71.5 | 62.1 |

Note that s1-32B and s1.1-32B use budget forcing in this table; specifically, the end-of-thinking delimiter is ignored and "Wait" is appended once or twice to extend the reasoning trace.
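
For illustration, here is a hedged sketch of what budget forcing can look like at decoding time. It is not the evaluation harness used for the table; the `<|im_start|>think` / `<|im_start|>answer` delimiter strings, the stop-string handling, and the token budgets are all assumptions, so check the s1 repository for the exact setup.

```python
# Hedged sketch of budget forcing (ignore end-of-thinking, append "Wait").
# Delimiters and budgets below are assumptions, not the documented interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "simplescaling/s1.1-32B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

end_of_thinking = "<|im_start|>answer"  # assumed end-of-thinking delimiter

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "What is 15% of 260?"}],
    add_generation_prompt=True,
    tokenize=False,
) + "<|im_start|>think\n"  # assumed start-of-thinking delimiter

text = prompt
for _ in range(2):  # append "Wait" up to twice, as in the table above
    ids = tok(text, return_tensors="pt").to(model.device)
    out = model.generate(
        **ids,
        max_new_tokens=2048,
        stop_strings=[end_of_thinking],
        tokenizer=tok,
    )
    text = tok.decode(out[0], skip_special_tokens=False)
    # Ignore the end-of-thinking delimiter and nudge the model to keep reasoning.
    if text.endswith(end_of_thinking):
        text = text[: -len(end_of_thinking)]
    text += "\nWait"

# After the forced thinking budget, let the model write its final answer.
ids = tok(text + "\n" + end_of_thinking + "\n", return_tensors="pt").to(model.device)
final = model.generate(**ids, max_new_tokens=512)
print(tok.decode(final[0][ids["input_ids"].shape[-1]:], skip_special_tokens=True))
```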