Update README.md
README.md CHANGED

```diff
@@ -18,17 +18,15 @@ language:
 
 ## Base model info
 
-
-It is based on the line of progress on [structured state space models](https://github.com/state-spaces/s4),
-with an efficient hardware-aware design and implementation in the spirit of [FlashAttention](https://github.com/Dao-AILab/flash-attention).
+`Stable LM 2 1.6B` is a 1.6 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.
 
 ## Dataset info
 
-The OpenHermes dataset is composed of 242,000 entries of primarily GPT-4 generated data
+The OpenHermes dataset is composed of 242,000 entries of primarily GPT-4 generated data from open datasets across the AI landscape, including:
 
 OpenHermes 13B is the first fine tune of the Hermes dataset that has a fully open source dataset!
 
-OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data
+OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data from open datasets across the AI landscape, including:
 
 - GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
 - WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
@@ -36,7 +34,7 @@ OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data, fro
 - Camel-AI's domain expert datasets, by the Camel-AI Team
 - CodeAlpaca, by Sahil2801
 - GPT4-LLM and Unnatural Instructions, by Microsoft
-Filtering included removal of OpenAI refusals, disclaimers, and "As an AI" type examples and more
+Filtering included the removal of OpenAI refusals, disclaimers, and "As an AI" type examples and more
 The base dataset mix is identical to the original Nous-Hermes', minus the Nous-Instruct and PDACTL datasets which were private datasets.
 
 ## Usage
```
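The dataset section above mentions filtering out OpenAI refusals, disclaimers, and "As an AI" type examples. As a rough illustration of that kind of phrase-based filter, here is a minimal sketch assuming entries are dicts with a `response` field; the marker list and record schema are illustrative, not the actual filter used for OpenHermes.

```python
# Illustrative phrase-based refusal filter; the real marker list and
# record schema used for OpenHermes are not specified in the card.
REFUSAL_MARKERS = [
    "as an ai",
    "as a language model",
    "i cannot assist with",
    "openai",
]

def keep(entry: dict) -> bool:
    """Return True if the entry's response contains none of the refusal markers."""
    text = entry.get("response", "").lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

entries = [
    {"response": "Here is a working quicksort in Python: ..."},
    {"response": "As an AI developed by OpenAI, I cannot help with that."},
]
filtered = [e for e in entries if keep(e)]
print(len(filtered))  # -> 1; the refusal-style entry is dropped
```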
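The updated card names `Stable LM 2 1.6B` as the base model, but the body of the `## Usage` section is not part of this diff. For context, here is a minimal sketch of how such a fine-tune is commonly loaded with Hugging Face `transformers`; the repo id below is a hypothetical placeholder, not the actual model path, and `trust_remote_code=True` may or may not be needed depending on the checkpoint.

```python
# Minimal sketch, not the model card's actual usage snippet.
# Assumption: the fine-tuned checkpoint lives at the placeholder repo id
# below and ships a standard config/tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/openhermes-stablelm-2-1_6b"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

prompt = "Explain what a decoder-only language model is."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```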