---
license: apache-2.0
datasets:
- Locutusque/hercules-v5.0
language:
- en
---

# Orca-2.0-Tau-1.8B

We fine-tuned Qwen2-1.5B on a high-quality data mix for general-purpose assistants. A DPO version of this model will be released soon. The model uses the ChatML prompt format.
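
Because the model expects ChatML, prompts should be wrapped in `<|im_start|>`/`<|im_end|>` turn markers. A minimal sketch of a formatted prompt (the system message is illustrative, not a tuned default):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Explain chain-of-thought prompting in one sentence.<|im_end|>
<|im_start|>assistant
```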

## Model Details

### Model Description

This model has capabilities in math, coding, writing, and more. We fine-tuned it on a high-quality data mix for general-purpose assistants.

- **Developed by:** M4-ai
- **Language(s) (NLP):** English (the Qwen2 base model also covers Chinese, so some Chinese ability may carry over)
- **License:** apache-2.0
- **Finetuned from model:** [Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)

## Uses

General-purpose assistance, question answering, chain-of-thought reasoning, and similar tasks.
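
A minimal usage sketch with Hugging Face Transformers (the repo id is assumed from the model name and may need adjusting; `apply_chat_template` renders the ChatML turns shown above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/Orca-2.0-Tau-1.8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Walk me through 17 * 24 step by step."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a reply and strip the prompt tokens from the output
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```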

### Recommendations

Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed for further recommendations.

## Evaluation

Coming soon.

## Training Details

### Training Data

- [Locutusque/hercules-v5.0](https://huggingface.co/datasets/Locutusque/hercules-v5.0)
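
A minimal sketch of loading the training mix with the `datasets` library (the `train` split name is an assumption):

```python
from datasets import load_dataset

# Hercules v5.0: the instruction mix this model was fine-tuned on
hercules = load_dataset("Locutusque/hercules-v5.0", split="train")
print(hercules[0])
```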

### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision

## Technical Specifications

### Hardware

We trained on 8 Kaggle TPUs with a global batch size of 256 and a sequence length of 1536.
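
For reference, how the reported numbers decompose (the per-core batch size is derived, not stated on this card):

```python
GLOBAL_BATCH_SIZE = 256  # from this card
TPU_CORES = 8            # Kaggle TPUs, per this card
SEQ_LEN = 1536           # from this card

per_core_batch = GLOBAL_BATCH_SIZE // TPU_CORES  # 32 sequences per core
tokens_per_step = GLOBAL_BATCH_SIZE * SEQ_LEN    # 393,216 tokens per optimizer step
print(per_core_batch, tokens_per_step)
```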