---
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- gammacorpus
- zurich
- chat
- conversational
license: apache-2.0
language:
- en
datasets:
- rubenroy/GammaCorpus-v2-100k
pipeline_tag: text-generation
library_name: transformers
---

![Zurich Banner](https://cdn.ruben-roy.com/AI/Zurich/img/banner-1.5B-100k.png)

# Zurich 1.5B GammaCorpus v2-100k
*A Qwen 2.5 model fine-tuned on the GammaCorpus dataset*

## Overview
Zurich 1.5B GammaCorpus v2-100k is a fine-tune of Alibaba's **Qwen 2.5 1.5B Instruct** model. Zurich is designed to outperform other models of a similar size while showcasing the [GammaCorpus v2-100k](https://huggingface.co/datasets/rubenroy/GammaCorpus-v2-100k) dataset.

## Model Details
- **Base Model:** [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
- **Type:** Causal language model
- **Architecture:** Transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias
- **Number of Parameters:** 1.54B
- **Number of Parameters (Non-Embedding):** 1.31B
- **Number of Layers:** 28
- **Number of Attention Heads (GQA):** 12 for Q and 2 for KV
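
As a back-of-the-envelope illustration of what those GQA head counts mean in practice, the snippet below estimates the KV-cache saving compared with standard multi-head attention. The hidden size of 1536 (giving a head dimension of 128) is an assumption based on typical Qwen2.5-1.5B configurations, not a value stated on this card.

```python
# Rough sketch: KV-cache saving from GQA with 12 query heads and 2 KV heads.
# hidden_size = 1536 is an assumed value (typical for Qwen2.5-1.5B), not
# taken from this model card.
hidden_size = 1536
num_q_heads = 12
num_kv_heads = 2
head_dim = hidden_size // num_q_heads  # 128

# Per token and per layer, the cache stores K and V for each KV head.
gqa_cache = 2 * num_kv_heads * head_dim  # 512 values
# Full multi-head attention would keep one KV head per query head.
mha_cache = 2 * num_q_heads * head_dim   # 3072 values

print(mha_cache // gqa_cache)  # 6x smaller KV cache
```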

## Training Details

Zurich-1.5B-GCv2-100k was fine-tuned on a single A100 GPU for roughly 10 minutes using the [Unsloth](https://unsloth.ai/) framework, and was trained for **60 epochs**.

## Usage

### Requirements

We **strongly** recommend you use the latest version of the `transformers` package. You may install it via `pip` as follows:

```bash
pip install transformers
```
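
If you want to fail fast on an outdated environment, a small version check can help. This is a hedged sketch: the `4.37.0` minimum is an assumption (Qwen2-family architectures generally need a reasonably recent `transformers`), not an officially stated requirement of this model.

```python
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(installed: str, required: str) -> bool:
    """Numeric comparison of dotted version strings (pre-release tags ignored)."""
    def key(v: str):
        return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())
    return key(installed) >= key(required)

def transformers_is_recent(required: str = "4.37.0") -> bool:
    # 4.37.0 is an assumed minimum for Qwen2-family models,
    # not a requirement documented on this card.
    try:
        return meets_minimum(version("transformers"), required)
    except PackageNotFoundError:
        return False

print(meets_minimum("4.48.0", "4.37.0"))  # True
```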

### Quickstart

Here is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and generate a response:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "rubenroy/Zurich-1.5B-GCv2-100k"

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How tall is the Eiffel tower?"
messages = [
    {"role": "system", "content": "You are Zurich, an AI assistant built on the Qwen 2.5 1.5B model developed by Alibaba Cloud, and fine-tuned by Ruben Roy. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Apply the chat template and tokenize
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response, then strip the prompt tokens from the output
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## About GammaCorpus

This model, and all Zurich models, are trained with GammaCorpus, a family of datasets on Hugging Face consisting of structured and filtered multi-turn conversations. GammaCorpus has four versions, each available in several sizes:

### GammaCorpus v1
- 10k UNFILTERED
- 50k UNFILTERED
- 70k UNFILTERED

Here is a link to the GCv1 dataset collection:<br>
https://huggingface.co/collections/rubenroy/gammacorpus-v1-67935e4e52a04215f15a7a60

### GammaCorpus v2
- 10k
- 50k
- **100k <-- This is the version of GammaCorpus v2 that the Zurich model you are using was trained on.**
- 500k
- 1m
- 5m

Here is a link to the GCv2 dataset collection:<br>
https://huggingface.co/collections/rubenroy/gammacorpus-v2-67935e895e1259c404a579df

### GammaCorpus CoT
- Math 170k

Here is a link to the GC-CoT dataset collection:<br>
https://huggingface.co/collections/rubenroy/gammacorpus-cot-6795bbc950b62b1ced41d14f

### GammaCorpus QA
- Fact 450k

Here is a link to the GC-QA dataset collection:<br>
https://huggingface.co/collections/rubenroy/gammacorpus-qa-679857017bb3855234c1d8c7

The full GammaCorpus dataset collection can be found [here](https://huggingface.co/collections/rubenroy/gammacorpus-67765abf607615a0eb6d61ac).
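
Since every GammaCorpus release is a multi-turn conversation dataset, records need to be flattened into the `messages` format that `apply_chat_template` expects. A minimal sketch, assuming a hypothetical record schema with `conversation`, `input`, and `output` fields (the actual GammaCorpus field names may differ):

```python
# Hypothetical sketch: flatten one multi-turn GammaCorpus-style record into
# chat messages. The "conversation", "input", and "output" field names are
# assumptions about the schema, not documented GammaCorpus structure.
def to_chat_messages(record, system_prompt=None):
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    for turn in record["conversation"]:
        messages.append({"role": "user", "content": turn["input"]})
        messages.append({"role": "assistant", "content": turn["output"]})
    return messages

record = {"conversation": [
    {"input": "How tall is the Eiffel tower?", "output": "About 330 metres."},
    {"input": "When was it built?", "output": "It was completed in 1889."},
]}
msgs = to_chat_messages(record, system_prompt="You are a helpful assistant.")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user', 'assistant']
```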

## Known Limitations

- **Bias:** We have tried our best to mitigate as much bias as we can, but please be aware that the model may still generate some biased answers.

## Additional Information

### Licensing Information

The model is released under the **[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)**. Please refer to the license for usage rights and restrictions.