javicorvi/pretoxtm-ner
- README.md +18 -18
- config.json +1 -1
- model.safetensors +1 -1
- runs/Apr04_21-52-04_ec306db945fe/events.out.tfevents.1712267525.ec306db945fe.765.0 +3 -0
- training_args.bin +2 -2
README.md
CHANGED
@@ -14,15 +14,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Study Test: {'precision': 0.
-- Manifestation: {'precision': 0.
-- Finding: {'precision': 0.
-- Specimen: {'precision': 0.
-- Dose: {'precision': 0.
-- Dose Qualification: {'precision': 0.
-- Sex: {'precision': 0.
-- Group: {'precision': 0.
+- Loss: 0.1810
+- Study Test: {'precision': 0.8215384615384616, 'recall': 0.8841059602649006, 'f1': 0.8516746411483254, 'number': 302}
+- Manifestation: {'precision': 0.8041958041958042, 'recall': 0.905511811023622, 'f1': 0.8518518518518519, 'number': 127}
+- Finding: {'precision': 0.6886657101865137, 'recall': 0.7570977917981072, 'f1': 0.7212622088655146, 'number': 634}
+- Specimen: {'precision': 0.7944162436548223, 'recall': 0.8236842105263158, 'f1': 0.8087855297157622, 'number': 380}
+- Dose: {'precision': 0.8647540983606558, 'recall': 0.9461883408071748, 'f1': 0.9036402569593148, 'number': 223}
+- Dose Qualification: {'precision': 0.65, 'recall': 0.8125, 'f1': 0.7222222222222223, 'number': 32}
+- Sex: {'precision': 0.9285714285714286, 'recall': 0.9285714285714286, 'f1': 0.9285714285714286, 'number': 84}
+- Group: {'precision': 0.5666666666666667, 'recall': 0.6938775510204082, 'f1': 0.6238532110091742, 'number': 49}
 
 ## Model description
 
@@ -51,16 +51,16 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding
-|
-| No log | 1.0 | 257 | 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding | Specimen | Dose | Dose Qualification | Sex | Group |
+|:-------------:|:-----:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|
+| No log | 1.0 | 257 | 0.2005 | {'precision': 0.6658227848101266, 'recall': 0.8708609271523179, 'f1': 0.7546628407460545, 'number': 302} | {'precision': 0.7647058823529411, 'recall': 0.9212598425196851, 'f1': 0.8357142857142856, 'number': 127} | {'precision': 0.6425339366515838, 'recall': 0.6719242902208202, 'f1': 0.6569005397070162, 'number': 634} | {'precision': 0.7099767981438515, 'recall': 0.8052631578947368, 'f1': 0.75462392108508, 'number': 380} | {'precision': 0.8969957081545065, 'recall': 0.9372197309417041, 'f1': 0.9166666666666667, 'number': 223} | {'precision': 0.6764705882352942, 'recall': 0.71875, 'f1': 0.696969696969697, 'number': 32} | {'precision': 0.7448979591836735, 'recall': 0.8690476190476191, 'f1': 0.8021978021978022, 'number': 84} | {'precision': 0.3880597014925373, 'recall': 0.5306122448979592, 'f1': 0.4482758620689655, 'number': 49} |
+| 0.2932 | 2.0 | 514 | 0.1689 | {'precision': 0.8170347003154574, 'recall': 0.8576158940397351, 'f1': 0.8368336025848143, 'number': 302} | {'precision': 0.8226950354609929, 'recall': 0.9133858267716536, 'f1': 0.8656716417910448, 'number': 127} | {'precision': 0.6904400606980273, 'recall': 0.7176656151419558, 'f1': 0.7037896365042536, 'number': 634} | {'precision': 0.7746478873239436, 'recall': 0.868421052631579, 'f1': 0.8188585607940446, 'number': 380} | {'precision': 0.8870292887029289, 'recall': 0.9506726457399103, 'f1': 0.9177489177489178, 'number': 223} | {'precision': 0.7567567567567568, 'recall': 0.875, 'f1': 0.8115942028985507, 'number': 32} | {'precision': 0.8695652173913043, 'recall': 0.9523809523809523, 'f1': 0.909090909090909, 'number': 84} | {'precision': 0.6, 'recall': 0.673469387755102, 'f1': 0.6346153846153846, 'number': 49} |
+| 0.2932 | 3.0 | 771 | 0.1810 | {'precision': 0.8215384615384616, 'recall': 0.8841059602649006, 'f1': 0.8516746411483254, 'number': 302} | {'precision': 0.8041958041958042, 'recall': 0.905511811023622, 'f1': 0.8518518518518519, 'number': 127} | {'precision': 0.6886657101865137, 'recall': 0.7570977917981072, 'f1': 0.7212622088655146, 'number': 634} | {'precision': 0.7944162436548223, 'recall': 0.8236842105263158, 'f1': 0.8087855297157622, 'number': 380} | {'precision': 0.8647540983606558, 'recall': 0.9461883408071748, 'f1': 0.9036402569593148, 'number': 223} | {'precision': 0.65, 'recall': 0.8125, 'f1': 0.7222222222222223, 'number': 32} | {'precision': 0.9285714285714286, 'recall': 0.9285714285714286, 'f1': 0.9285714285714286, 'number': 84} | {'precision': 0.5666666666666667, 'recall': 0.6938775510204082, 'f1': 0.6238532110091742, 'number': 49} |
 
 ### Framework versions
 
-- Transformers 4.
-- Pytorch 2.1
-- Datasets 2.
-- Tokenizers 0.15.
+- Transformers 4.38.2
+- Pytorch 2.2.1+cu121
+- Datasets 2.18.0
+- Tokenizers 0.15.2
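As a quick sanity check on the per-entity metrics in the updated README, each reported `f1` is the harmonic mean of the entity's `precision` and `recall`. A minimal sketch, using the Study Test values from the final epoch (the `f1_score` helper is illustrative, not part of the repository):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Study Test entity, epoch 3: precision and recall as reported above.
p, r = 0.8215384615384616, 0.8841059602649006
f1 = f1_score(p, r)
print(f1)  # ~0.8517, matching the reported f1 for Study Test
```

The same identity holds for every row of the training-results table, so it is a cheap way to spot copy-paste errors in a model card's metrics.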
config.json
CHANGED
@@ -57,7 +57,7 @@
 "pad_token_id": 0,
 "position_embedding_type": "absolute",
 "torch_dtype": "float32",
-"transformers_version": "4.
+"transformers_version": "4.38.2",
 "type_vocab_size": 2,
 "use_cache": true,
 "vocab_size": 28996
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:3310998b3007e3241d50263db893b2075c1a6b9d2163cd275f90a5c2197a1552
 size 430954348
runs/Apr04_21-52-04_ec306db945fe/events.out.tfevents.1712267525.ec306db945fe.765.0
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fcb01cc00e984bbc73a686dc68fb1c076eef273afa9916000f21edd7766c839
+size 6757
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:80484f580dbe9de0dc0cb2a29c6199bae154c9e7ece1db865994094b9eb0da9c
+size 4856
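The `model.safetensors`, `training_args.bin`, and run event files in this commit are stored as Git LFS pointer files like the ones diffed above: three `key value` lines giving the spec version, the SHA-256 object id, and the size in bytes. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is an illustrative helper, not part of git-lfs tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into a key -> value dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content as committed for training_args.bin above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:80484f580dbe9de0dc0cb2a29c6199bae154c9e7ece1db865994094b9eb0da9c
size 4856
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))
```

This is why the textual diff for these binary files is only two or three lines: only the pointer changes in git history, while the actual weights live in LFS storage keyed by the `oid`.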