Add link to paper (#3)
- Add model card (a27f46b7fbbefc1b58bcee889b881d773d23f42c)
- Update README.md (5947669ee151c02157744a33c06fa905310399c2)
Co-authored-by: Niels Rogge <[email protected]>
README.md
CHANGED
@@ -15,6 +15,8 @@ library_name: transformers
   <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
 </a>
 
+This repository contains the model of the paper [Qwen2.5-1M Technical Report](https://huggingface.co/papers/2501.15383).
+
 ## Introduction
 
 Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens. Compared to the Qwen2.5 128K version, Qwen2.5-1M demonstrates significantly improved performance in handling long-context tasks while maintaining its capability in short tasks.