---
language:
- multilingual
license: apache-2.0
---
# Model Card for Sindibad-7B
# Table of Contents
0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
# TL;DR
# Model Details
## Model Description
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
# Usage
Below are example scripts showing how to use the model with `transformers` (make sure you have the latest version of `transformers`, or one built from source):
## Using the PyTorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model (runs on CPU in full precision by default)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/sindibad-7b")
model = AutoModelForCausalLM.from_pretrained("tiiuae/sindibad-7b")

# Tokenize a prompt and generate a completion
input_text = "Question: How many hours in one day? Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/sindibad-7b")
# device_map="auto" lets accelerate place the weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained("tiiuae/sindibad-7b", device_map="auto")

# Move the input ids to the GPU before generating
input_text = "Question: How many hours in one day? Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/sindibad-7b")
# torch_dtype=torch.float16 loads the weights in half precision,
# roughly halving GPU memory use compared to float32
model = AutoModelForCausalLM.from_pretrained("tiiuae/sindibad-7b", device_map="auto", torch_dtype=torch.float16)

input_text = "Question: How many hours in one day? Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### 4-bit
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("tiiuae/sindibad-7b")
# load_in_4bit quantizes the weights to 4 bits with bitsandbytes,
# cutting memory use at a small cost in accuracy
model = AutoModelForCausalLM.from_pretrained("tiiuae/sindibad-7b", device_map="auto", quantization_config=BitsAndBytesConfig(load_in_4bit=True))

input_text = "Question: How many hours in one day? Answer: "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
# Training Details
## Training Data
## Training Procedure
The model was trained with the AdamW optimizer, a WSD (warmup-stable-decay) learning rate schedule, and a batch size ramp-up from \\(b_{\mathrm{min}}=128\times2048\\) to \\(b_{\mathrm{max}}=2048\times2048\\) tokens during the first 50 GT of training. In the stable phase we used a maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\) and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\eta_{\mathrm{max}} / 256\\) with an exponential schedule over 500 GT. We also applied *BatchScaling* during the ramp-up, rescaling the learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\eta / \sqrt{b}\\) is kept constant.
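As a rough illustration of this rule, the sketch below recomputes the learning rate for a given batch size so that \\(T_{\mathrm{noise}}\\) stays fixed. This is a minimal sketch: the constants mirror the values above, and pairing \\(\eta_{\mathrm{max}}\\) with \\(b_{\mathrm{max}}\\) is our assumption.
```python
import math

# Illustrative BatchScaling rule (assumption: eta_max applies at b_max):
# keep the Adam noise temperature T_noise = eta / sqrt(b) constant
# while the batch size b ramps up from B_MIN to B_MAX.
ETA_MAX = 6.4e-4          # maximal learning rate in the stable phase
B_MIN = 128 * 2048        # batch size at the start of the ramp-up (tokens)
B_MAX = 2048 * 2048       # batch size at the end of the ramp-up (tokens)

T_NOISE = ETA_MAX / math.sqrt(B_MAX)  # fixed noise temperature

def batch_scaled_lr(b: int) -> float:
    """Learning rate that keeps T_noise = eta / sqrt(b) constant."""
    return T_NOISE * math.sqrt(b)

print(batch_scaled_lr(B_MIN))  # 1.6e-4 at the initial batch size (1/4 of eta_max)
print(batch_scaled_lr(B_MAX))  # 6.4e-4 at the full batch size
```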
# Evaluation
## Benchmarks
We evaluate our model on all benchmarks from version 2 of the Open LLM Leaderboard using the `lm-evaluation-harness` package, and on the version 1 benchmarks using `lighteval`.
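For reference, a single task can be scored programmatically with the harness. The snippet below is a minimal sketch: the task name is illustrative and the `simple_evaluate` call assumes a recent `lm-evaluation-harness` release.
```python
import lm_eval

# Minimal sketch: evaluate the model on one leaderboard-v1 task.
# Task names vary across lm-evaluation-harness versions.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/sindibad-7b",
    tasks=["hellaswag"],
    batch_size=8,
)
print(results["results"])
```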
| Model | IFEval | BBH | MATH Lvl5 | GPQA | MUSR | MMLU-PRO | **Average L2** | ARC | HellaSwag | MMLU | Winogrande | TruthfulQA | GSM8K | **Average L1** |
|------------------------------|--------|-------|-----------|-------|-------|----------|----------------|-------|-----------|-------|------------|------------|-------|----------------|
| `meta-llama/Meta-Llama-3-8B` | 14.55 | 24.50 | 3.25 | 7.38 | 6.24 | 24.55 | 13.41 | 60.24 | 82.23 | 66.70 | 78.45 | 42.93 | 45.19 | 62.62 |
| `tiiuae/falcon2-11B` | 32.61 | 21.94 | 2.34 | 2.80 | 7.53 | 15.44 | 13.78 | 59.73 | 82.91 | 58.37 | 78.30 | 52.56 | 53.83 | **64.28** |
| `mistralai/Mistral-7B-v0.1` | 23.86 | 22.02 | 2.49 | 5.59 | 10.68 | 22.36 | 14.50 | 59.98 | 83.31 | 64.16 | 78.37 | 42.15 | 37.83 | 60.97 |
| `Zyphra/Zamba-7B-v1` | - | - | - | - | - | - | - | 46.48 | 80.24 | 57.72 | 76.40 | - | - | - |
| Ours | 32.16 | 21.07 | 4.08 | 10.18 | 6.97 | 13.43 | **14.65** | 61.69 | 80.63 | 61.05 | 74.03 | 53.60 | 51.86 | 63.81 |
## Throughput
This model achieves throughput and performance comparable to transformer-based models that use optimized kernels such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following command:
```bash
pip install "causal-conv1d>=1.4.0" mamba-ssm
```
Refer to our technical report for more details about performance evaluation.
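As a rough sanity check, generation throughput can be timed as below. This is a sketch under our assumptions: the prompt, generation length, and float16 loading are illustrative, and absolute numbers depend on your hardware.
```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Rough tokens/second measurement on a single GPU
tokenizer = AutoTokenizer.from_pretrained("tiiuae/sindibad-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/sindibad-7b", device_map="auto", torch_dtype=torch.float16
)

input_ids = tokenizer("Question: How many hours in one day? Answer: ",
                      return_tensors="pt").input_ids.to("cuda")

torch.cuda.synchronize()
start = time.perf_counter()
outputs = model.generate(input_ids, max_new_tokens=256)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[1] - input_ids.shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```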