Update README.md
README.md CHANGED
@@ -21,7 +21,7 @@ language:

## Model Summary

-SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
+SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. More details in our paper: https://arxiv.org/abs/2502.02737

SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

@@ -127,9 +127,13 @@ SmolLM2 models primarily understand and generate content in English. They can pr

## Citation
```bibtex
-@misc{allal2024SmolLM2,
-      title={SmolLM2 - with great data, comes great performance},
-      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
-      year={2024},
+@misc{allal2025smollm2smolgoesbig,
+      title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
+      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
+      year={2025},
+      eprint={2502.02737},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2502.02737},
}
```
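
The updated summary stresses that these models are lightweight enough to run on-device. For context, here is a minimal usage sketch with `transformers`; the repo id `HuggingFaceTB/SmolLM2-360M-Instruct`, the prompt, and the sampling settings are assumptions for illustration, not part of this commit:

```python
# Minimal usage sketch (not from the diff above): load the instruct
# checkpoint and generate one reply. Repo id and settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"  # assumed instruct repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Build a chat-style prompt using the tokenizer's chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(
    inputs, max_new_tokens=64, do_sample=True, temperature=0.2, top_p=0.9
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```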
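
The summary also outlines the alignment recipe: SFT on a mix of public and curated datasets, then DPO on UltraFeedback. A sketch of what that DPO step could look like with TRL's `DPOTrainer` follows; it assumes a recent TRL release, and the checkpoint id, dataset split, and hyperparameters are illustrative stand-ins, not the authors' actual training configuration:

```python
# Illustrative DPO step mirroring the recipe described in the summary:
# preference-tune an SFT checkpoint on UltraFeedback binarized pairs.
# Not the authors' code; ids and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"  # stand-in for the SFT model
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# UltraFeedback binarized preference pairs (prompt / chosen / rejected).
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="smollm2-dpo",
    beta=0.1,                       # strength of the preference regularization
    per_device_train_batch_size=2,  # illustrative, not tuned
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```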