Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

OLMoE-1B-7B-0924-Instruct-Base - GGUF
- Model creator: https://huggingface.co/1024m/
- Original model: https://huggingface.co/1024m/OLMoE-1B-7B-0924-Instruct-Base/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OLMoE-1B-7B-0924-Instruct-Base.Q2_K.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q2_K.gguf) | Q2_K | 2.39GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q3_K_S.gguf) | Q3_K_S | 2.82GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q3_K.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q3_K.gguf) | Q3_K | 3.11GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q3_K_M.gguf) | Q3_K_M | 3.11GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q3_K_L.gguf) | Q3_K_L | 3.36GB |
| [OLMoE-1B-7B-0924-Instruct-Base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.IQ4_XS.gguf) | IQ4_XS | 3.5GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q4_0.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q4_0.gguf) | Q4_0 | 3.66GB |
| [OLMoE-1B-7B-0924-Instruct-Base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.IQ4_NL.gguf) | IQ4_NL | 3.69GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q4_K_S.gguf) | Q4_K_S | 3.69GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q4_K.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q4_K.gguf) | Q4_K | 3.92GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q4_K_M.gguf) | Q4_K_M | 3.92GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q4_1.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q4_1.gguf) | Q4_1 | 4.05GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q5_0.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q5_0.gguf) | Q5_0 | 4.45GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q5_K_S.gguf) | Q5_K_S | 4.45GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q5_K.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q5_K.gguf) | Q5_K | 4.59GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q5_K_M.gguf) | Q5_K_M | 4.59GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q5_1.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q5_1.gguf) | Q5_1 | 4.85GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q6_K.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q6_K.gguf) | Q6_K | 5.29GB |
| [OLMoE-1B-7B-0924-Instruct-Base.Q8_0.gguf](https://huggingface.co/RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf/blob/main/OLMoE-1B-7B-0924-Instruct-Base.Q8_0.gguf) | Q8_0 | 6.85GB |
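
Each file above is a standalone GGUF quant of the same model. As a minimal, hedged sketch (not part of this repository's own instructions), the snippet below downloads one quant with `huggingface_hub` and runs it through the third-party `llama-cpp-python` bindings; the choice of the Q4_K_M file, the context size, and the generation settings are illustrative assumptions.

```python
# Minimal sketch: fetch one GGUF quant and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`; the Q4_K_M file
# and all settings below are illustrative choices, not recommendations
# from this repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/1024m_-_OLMoE-1B-7B-0924-Instruct-Base-gguf",
    filename="OLMoE-1B-7B-0924-Instruct-Base.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)  # context window is a tunable assumption
out = llm("Explain to me like I'm five what is Bitcoin.", max_tokens=128)
print(out["choices"][0]["text"])
```

In general, larger quants trade more memory for lower quantization error; the sizes in the table give a rough guide to the RAM each file needs.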

Original model description:
---
license: apache-2.0
language:
- en
tags:
- moe
- olmo
- olmoe
co2_eq_emissions: 1
datasets:
- allenai/ultrafeedback_binarized_cleaned
base_model: allenai/OLMoE-1B-7B-0924-SFT
library_name: transformers
---

<img alt="OLMoE Logo." src="olmoe-logo.png" width="250px">

# Model Summary

> OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters, released in September 2024 (0924) and adapted via SFT and DPO from [OLMoE-1B-7B](https://hf.co/allenai/OLMoE-1B-7B-0924). It yields state-of-the-art performance among models with a similar cost (1B active parameters) and is competitive with much larger models like Llama2-13B-Chat. OLMoE is 100% open-source.

This information and more can also be found on the [**OLMoE GitHub repository**](https://github.com/allenai/OLMoE).
- **Paper**: https://arxiv.org/abs/2409.02060
- **Pretraining**: [Checkpoints](https://hf.co/allenai/OLMoE-1B-7B-0924), [Code](https://github.com/allenai/OLMo/tree/Muennighoff/MoE), [Data](https://huggingface.co/datasets/allenai/OLMoE-mix-0924) and [Logs](https://wandb.ai/ai2-llm/olmoe/reports/OLMoE-1B-7B-0924--Vmlldzo4OTcyMjU3).
- **SFT (Supervised Fine-Tuning)**: [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-SFT), [Code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [Data](https://hf.co/datasets/allenai/tulu-v3.1-mix-preview-4096-OLMoE) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-sft-logs.txt).
- **DPO/KTO (Direct Preference Optimization/Kahneman-Tversky Optimization)**: [Checkpoints](https://huggingface.co/allenai/OLMoE-1B-7B-0924-Instruct), [Preference Data](https://hf.co/datasets/allenai/ultrafeedback_binarized_cleaned), [DPO code](https://github.com/allenai/open-instruct/tree/olmoe-sft), [KTO code](https://github.com/Muennighoff/kto/blob/master/kto.py) and [Logs](https://github.com/allenai/OLMoE/blob/main/logs/olmoe-dpo-logs.txt).

# Use

Install `torch` and `transformers` (**from source**, until a release that includes [this PR](https://github.com/huggingface/transformers/pull/32406) is available), then run:

```python
from transformers import OlmoeForCausalLM, AutoTokenizer
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load different checkpoints by passing e.g. `revision="kto"`
model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924-Instruct").to(DEVICE)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924-Instruct")
messages = [{"role": "user", "content": "Explain to me like I'm five what is Bitcoin."}]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(DEVICE)
out = model.generate(inputs, max_length=100)
print(tokenizer.decode(out[0]))
"""
<|endoftext|><|user|>
Explain to me like I'm five what is Bitcoin.
<|assistant|>
Bitcoin is like a special kind of money that you can use to buy things online. But unlike regular money, like dollars or euros, Bitcoin isn't printed by governments or banks. Instead, it's created by a special computer program that helps people keep track of it.

Here's how it works: imagine you have a bunch of toys, and you want to
"""
```

Branches (see the loading sketch after this list):
- `main`: Preference-tuned via DPO from the `main` branch of https://hf.co/allenai/OLMoE-1B-7B-0924-SFT
- `load-balancing`: Ablation with a load-balancing loss during DPO, starting from the `load-balancing` branch of https://hf.co/allenai/OLMoE-1B-7B-0924-SFT
- `non-annealed`: Ablation starting from the `non-annealed` branch of https://hf.co/allenai/OLMoE-1B-7B-0924-SFT, which is an SFT of the pretraining checkpoint prior to annealing (branch `step1200000-tokens5033B` of https://hf.co/allenai/OLMoE-1B-7B-0924)
- `kto`: Ablation using KTO instead of DPO. This branch is the checkpoint after 5,000 steps with the RMS optimizer; the other `kto*` branches correspond to the other checkpoints mentioned in the paper.
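
Any of these branches can be loaded directly: as the comment in the code above notes, `from_pretrained` accepts a `revision` argument. A short sketch, using the `kto` branch as the example:

```python
from transformers import OlmoeForCausalLM, AutoTokenizer

# Load the KTO ablation instead of the default DPO checkpoint on `main`
model = OlmoeForCausalLM.from_pretrained("allenai/OLMoE-1B-7B-0924-Instruct", revision="kto")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMoE-1B-7B-0924-Instruct", revision="kto")
```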

# Evaluation Snapshot

| Task (→) | MMLU | GSM8k | BBH | Human-Eval | Alpaca-Eval 1.0 | XSTest | IFEval | Avg |
|---------------|------|-------|------|------------|-----------------|--------|--------|------|
| **Setup (→)** | 0-shot | 8-shot CoT | 3-shot | 0-shot | 0-shot | 0-shot | 0-shot | |
| **Metric (→)** | EM | EM | EM | Pass@10 | %win | F1 | Loose Acc | |
| | | | | | | | | |
| OLMo-1B (0724) | 25.0 | 7.0 | 22.5 | 16.0 | - | 67.6 | 20.5 | - |
| +SFT | 36.0 | 12.5 | 27.2 | 21.2 | 41.5 | 81.9 | 26.1 | 35.9 |
| +DPO | 36.7 | 12.5 | 30.6 | 22.0 | 50.9 | 79.8 | 24.2 | 37.4 |
| OLMo-7B (0724) | 50.8 | 32.5 | 36.9 | 32.3 | - | 80.8 | 19.6 | - |
| +SFT | 54.2 | 25.0 | 35.7 | 38.5 | 70.9 | 86.1 | 39.7 | 49.3 |
| +DPO | 52.8 | 9.0 | 16.6 | 35.0 | 83.5 | **87.5** | 37.9 | 49.1 |
| JetMoE-2B-9B | 45.6 | 43.0 | 37.2 | 54.6 | - | 68.2 | 20.0 | - |
| +SFT | 46.1 | 53.5 | 35.6 | 64.8 | 69.3 | 55.6 | 30.5 | 50.4 |
| DeepSeek-3B-16B | 37.7 | 18.5 | 39.4 | 48.3 | - | 65.9 | 13.5 | - |
| +Chat | 48.5 | 46.5 | **40.8** | **70.1** | 74.8 | 85.6 | 32.3 | 57.0 |
| Qwen1.5-3B-14B | **60.4** | 13.5 | 27.2 | 60.2 | - | 73.4 | 20.9 | - |
| +Chat | 58.9 | **55.5** | 21.3 | 59.7 | 83.9 | 85.6 | 36.2 | 57.3 |
| **OLMoE (This Model)** | 49.8 | 3.0 | 33.6 | 22.4 | - | 59.7 | 16.6 | - |
| **+SFT** | 51.4 | 40.5 | 38.0 | 51.6 | 69.2 | 84.1 | 43.3 | 54.0 |
| **+DPO** | 51.9 | 45.5 | 37.0 | 54.8 | **84.0** | 82.6 | **48.1** | **57.7** |

# Citation

```bibtex
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models},
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060},
}
```