codybum committed
Commit 8b37406 · 1 Parent(s): ba90481

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED

@@ -12,7 +12,7 @@ Medical Education Language Transformer (MELT)

 The MELT-Mistral-3x7B-Instruct-v0.1 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data.

-MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across 3 USMLE benchmarks.
+MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across 3 medical examination benchmarks: USMLE, Indian AIIMS, and NEET.

 This is an MoE model; thanks to [Charles Goddard](https://huggingface.co/chargoddard) for code/tools.

@@ -21,7 +21,7 @@ This is an MoE model; thanks to [Charles Goddard](https://huggingface.co/chargoddard) for code/tools.

 The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.

-While the model was evaluated using publicly available [USMLE](https://www.usmle.org/) example questions, its use is intended to be more broadly applicable.
+While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.

 ### Model Description

@@ -93,7 +93,7 @@ The following datasets were used for training:

 <!-- This section describes the evaluation protocols and provides the results. -->

-MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across 3 USMLE benchmarks.
+MELT-Mistral-3x7B-Instruct-v0.1 demonstrated an average 19.7% improvement over Mistral-3x7B-Instruct-v0.1 (an MoE of 3 x Mistral-7B-Instruct-v0.1) across 3 medical examination benchmarks: USMLE, Indian AIIMS, and NEET.

 ### Mistral-3x7B-Instruct-v0.1
 - **medqa:** {'base': {'Average': 42.88, 'STEP-1': 43.51, 'STEP-2&3': 42.16}}
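For context, the "average 19.7% improvement" in the edited text is the kind of figure obtained by averaging per-benchmark relative improvements of the fine-tuned model over the base model. A minimal sketch of that arithmetic (not the card's actual evaluation code): only the 42.88 base medqa average comes from the results above; the fine-tuned score below is a hypothetical placeholder chosen to illustrate the calculation.

```python
# Sketch of how an average relative-improvement figure is computed.
# Base medqa average (42.88) is from the model card's results table;
# the "tuned" score is a hypothetical placeholder, NOT a reported result.

def relative_improvement(base: float, tuned: float) -> float:
    """Percent improvement of `tuned` over `base`."""
    return (tuned - base) / base * 100.0

base_scores = {"medqa": 42.88}   # from the card
tuned_scores = {"medqa": 51.33}  # hypothetical, for illustration only

per_benchmark = {
    name: relative_improvement(base_scores[name], tuned_scores[name])
    for name in base_scores
}
average = sum(per_benchmark.values()) / len(per_benchmark)
print(f"{average:.1f}%")  # ≈ 19.7% with these illustrative numbers
```

With more benchmarks (e.g. medmcqa for AIIMS/NEET questions), each would contribute one entry to the dictionaries and the final figure would be the mean across all of them.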