Update README.md
README.md CHANGED
This model is primarily designed for the **automatic grading of English essays**.

The training dataset used is the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) Corpus. This freely available resource comprises approximately 6,500 writing samples from English language learners, each scored for overall holistic language proficiency as well as analytic scores for cohesion, syntax, vocabulary, phraseology, grammar, and conventions. The scores were assigned by professional English teachers following rigorous procedures, and this training data helps our model acquire high practicality and accuracy, closely emulating professional grading standards.

The model's performance on the test dataset, which includes around 980 English essays, is summarized by the following metrics: accuracy = 0.87 and F1 score = 0.85.

Upon inputting an essay, the model outputs six scores corresponding to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. Each score ranges from 1 to 5, with higher scores indicating greater proficiency. These dimensions collectively assess the quality of the input essay from multiple perspectives. The model serves as a valuable tool for EFL teachers and researchers, and it can also help English L2 learners and their parents evaluate composition skills.
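
To make this output format concrete, the sketch below shows one way the six scores might be read out with the transformers API. It is illustrative only: the model id string is a placeholder for this repository's id on the Hugging Face Hub, and a six-value regression head is assumed from the description above.

```
# Illustrative sketch only: "<this-repository-id>" is a placeholder for the
# model id on the Hugging Face Hub, and a six-output regression head is assumed.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "<this-repository-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

essay = "Paste the essay to be graded here."
inputs = tokenizer(essay, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits.squeeze()

# One score per analytic dimension, in the order used throughout this card.
dimensions = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]
for name, score in zip(dimensions, logits.tolist()):
    print(f"{name}: {score:.1f}")
```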

To test the model, run the following code or paste your essay into the API interface:

```
# import packages
from transformers import AutoModelForSequenceClassification, AutoTokenizer