Update README.md
README.md CHANGED
@@ -1,3 +1,15 @@
+---
+license: apache-2.0
+language:
+- en
+metrics:
+- accuracy
+base_model:
+- google-bert/bert-large-cased
+pipeline_tag: token-classification
+tags:
+- code
+---
 This model is used to tag the tokens in an input sequence with information about the different signs of syntactic complexity that they contain. For more details, please see Chapters 2 and 3 of my thesis (https://rj3vans.github.io/Evans2020_SentenceSimplificationForTextProcessing.pdf).
 
 It was derived using code written by Dr. Le An Ha at the University of Wolverhampton.
@@ -43,5 +55,4 @@ predictions = torch.argmax(outputs, dim=2)
 print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
 ~~~
 
-======================================================================
-
+======================================================================
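The new front matter declares `pipeline_tag: token-classification` and `base_model: google-bert/bert-large-cased`, so the model can also be called through the high-level `transformers` pipeline in addition to the manual tokenizer/model loop whose tail is visible in the README snippet above. A minimal sketch of that route, assuming a placeholder repository ID (`rj3vans/SignTagger`) and an example sentence, neither of which appears in this diff:

~~~
# Minimal usage sketch, not taken from the README. The repository ID
# "rj3vans/SignTagger" is a placeholder; the actual Hub ID is not shown
# in this diff.
from transformers import pipeline

# pipeline_tag: token-classification in the new front matter means the
# model can be served by the generic token-classification pipeline.
tagger = pipeline("token-classification", model="rj3vans/SignTagger")

sentence = ("The report, which was published in 2020, argues that long "
            "sentences should be split before further processing.")

# Without an aggregation strategy, each item holds one token's predicted
# label ("entity"), its confidence score, and character offsets.
for item in tagger(sentence):
    print(item["word"], item["entity"], round(item["score"], 3))
~~~

If the model's configuration carries the sign labels in `id2label`, this output mirrors the token/label pairs printed by the README's own snippet via `label_list`.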