kevintu committed
Commit 155b873 · verified · 1 Parent(s): 350f243

Update README.md

Files changed (1): README.md +5 -2
README.md CHANGED
@@ -1,8 +1,11 @@
  ---
  license: mit
  ---
- This model is primarily designed for the **automatic grading of English essays**, particularly those written by second language (L2) learners. The training dataset utilized is the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) Corpus. This freely available resource comprises approximately 6,500 writing samples from English language learners, each scored for overall holistic language proficiency as well as analytic scores pertaining to cohesion, syntax, vocabulary, phraseology, grammar, and conventions.
-
+ This model is primarily designed for the **automatic grading of English essays**, particularly those written by second language (L2) learners.
+ The training dataset used is the English Language Learner Insight, Proficiency, and Skills Evaluation (ELLIPSE) Corpus.
+ This freely available resource comprises approximately 6,500 writing samples from English language learners,
+ each scored for overall holistic language proficiency as well as analytic scores pertaining to cohesion, syntax, vocabulary,
+ phraseology, grammar, and conventions. The scores were assigned by professional English teachers following rigorous procedures, so the training data closely emulates professional grading standards.
  The model's performance on the test dataset, which includes around 980 English essays, is summarized by the following metrics: accuracy = 0.87 and F1 score = 0.85.

  Upon inputting an essay, the model outputs six scores corresponding to cohesion, syntax, vocabulary, phraseology, grammar, and conventions. Each score ranges from 1 to 5, with higher scores indicating greater proficiency within the essay. These dimensions collectively assess the quality of the input essay from multiple perspectives. The model serves as a valuable tool for EFL teachers and researchers, and it also helps English L2 learners and parents self-evaluate composition skills.
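
For orientation, here is a minimal usage sketch of the scoring flow the README describes (essay in, six 1-5 analytic scores out). It assumes the checkpoint loads as a standard 🤗 Transformers sequence-classification model with six regression outputs; the repo id `kevintu/engessay-grading`, the label order, and the clamping step are illustrative assumptions, not details confirmed by this model card.

```python
# Sketch only. Assumptions (not stated in the README): the checkpoint works with
# AutoModelForSequenceClassification and returns six regression values, one per
# analytic dimension; "kevintu/engessay-grading" is a placeholder repo id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

REPO_ID = "kevintu/engessay-grading"  # placeholder: substitute the actual model id
DIMENSIONS = ["cohesion", "syntax", "vocabulary", "phraseology", "grammar", "conventions"]

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForSequenceClassification.from_pretrained(REPO_ID)
model.eval()

essay = "Paste the learner essay here."
inputs = tokenizer(essay, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs).logits.squeeze(0)  # expected shape: (6,)

# The README describes scores on a 1-5 scale; depending on how the heads were
# trained, the raw outputs may need clamping or rescaling into that band.
scores = outputs.clamp(1.0, 5.0).tolist()

for dimension, score in zip(DIMENSIONS, scores):
    print(f"{dimension}: {score:.1f}")
```

If the checkpoint instead stores the six dimensions as classification labels or uses a different output scale, the post-processing step above would need to be adapted accordingly.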