Update README.md
README.md
CHANGED
@@ -20,6 +20,9 @@ It achieves the following results on the evaluation set:
- Loss: 0.0585
- F1: 0.9536

+ The best score for Named Entity Recognition (NER) on CoNLL 2003 (English) is 94.6%, according to
+ [the CoNLL 2003 NER benchmark](https://paperswithcode.com/sota/named-entity-recognition-ner-on-conll-2003); this fine-tuned model achieves 95.36%.
+
## Model Usage

```python
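The hunk ends at the opening of the repository's own usage snippet, so that code is not part of this diff. For orientation only, below is a minimal sketch of how a fine-tuned CoNLL 2003 NER checkpoint is typically loaded, assuming it is a standard `transformers` token-classification model; the model ID is a placeholder rather than this repository's actual name, and the snippet is not the README's own example.

```python
from transformers import pipeline

# Placeholder model ID: substitute the actual repository name of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/conll2003-ner-model",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)

# Returns a list of detected entities with their labels, scores, and character offsets.
print(ner("Hugging Face is based in New York City."))
```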