javicorvi committed
Commit c39107c · verified · 1 Parent(s): 8f61abf

javicorvi/pretoxtm-ner
README.md CHANGED
@@ -14,15 +14,15 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1823
- - Study Test: {'precision': 0.7779111644657863, 'recall': 0.9012517385257302, 'f1': 0.8350515463917525, 'number': 719}
- - Manifestation: {'precision': 0.8337950138504155, 'recall': 0.9148936170212766, 'f1': 0.872463768115942, 'number': 329}
- - Finding: {'precision': 0.7869535045107564, 'recall': 0.7935619314205739, 'f1': 0.7902439024390244, 'number': 1429}
- - Specimen: {'precision': 0.7981530343007915, 'recall': 0.8344827586206897, 'f1': 0.8159136884693189, 'number': 725}
- - Dose: {'precision': 0.8948247078464107, 'recall': 0.9403508771929825, 'f1': 0.9170230966638152, 'number': 570}
- - Dose Qualification: {'precision': 0.696969696969697, 'recall': 0.8070175438596491, 'f1': 0.7479674796747967, 'number': 57}
- - Sex: {'precision': 0.945, 'recall': 0.9356435643564357, 'f1': 0.9402985074626865, 'number': 202}
- - Group: {'precision': 0.6666666666666666, 'recall': 0.8571428571428571, 'f1': 0.75, 'number': 112}
+ - Loss: 0.1810
+ - Study Test: {'precision': 0.8215384615384616, 'recall': 0.8841059602649006, 'f1': 0.8516746411483254, 'number': 302}
+ - Manifestation: {'precision': 0.8041958041958042, 'recall': 0.905511811023622, 'f1': 0.8518518518518519, 'number': 127}
+ - Finding: {'precision': 0.6886657101865137, 'recall': 0.7570977917981072, 'f1': 0.7212622088655146, 'number': 634}
+ - Specimen: {'precision': 0.7944162436548223, 'recall': 0.8236842105263158, 'f1': 0.8087855297157622, 'number': 380}
+ - Dose: {'precision': 0.8647540983606558, 'recall': 0.9461883408071748, 'f1': 0.9036402569593148, 'number': 223}
+ - Dose Qualification: {'precision': 0.65, 'recall': 0.8125, 'f1': 0.7222222222222223, 'number': 32}
+ - Sex: {'precision': 0.9285714285714286, 'recall': 0.9285714285714286, 'f1': 0.9285714285714286, 'number': 84}
+ - Group: {'precision': 0.5666666666666667, 'recall': 0.6938775510204082, 'f1': 0.6238532110091742, 'number': 49}
 
  ## Model description
 
@@ -51,16 +51,16 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding | Specimen | Dose | Dose Qualification | Sex | Group |
- |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------:|:-------:|:--------:|:----:|:------------------:|:---:|:-----:|
- | No log | 1.0 | 257 | 0.2018 | {'precision': 0.7382075471698113, 'recall': 0.8706536856745479, 'f1': 0.7989789406509252, 'number': 719} | {'precision': 0.8271954674220963, 'recall': 0.8875379939209727, 'f1': 0.8563049853372434, 'number': 329} | {'precision': 0.7308461025982678, 'recall': 0.7676696990902729, 'f1': 0.7488054607508532, 'number': 1429} | {'precision': 0.7324290998766955, 'recall': 0.8193103448275862, 'f1': 0.7734375, 'number': 725} | {'precision': 0.8511705685618729, 'recall': 0.8929824561403509, 'f1': 0.8715753424657534, 'number': 570} | {'precision': 0.5555555555555556, 'recall': 0.43859649122807015, 'f1': 0.4901960784313725, 'number': 57} | {'precision': 0.9285714285714286, 'recall': 0.900990099009901, 'f1': 0.914572864321608, 'number': 202} | {'precision': 0.5214723926380368, 'recall': 0.7589285714285714, 'f1': 0.6181818181818183, 'number': 112} |
- | 0.2702 | 2.0 | 514 | 0.1916 | {'precision': 0.7688622754491018, 'recall': 0.8929068150208623, 'f1': 0.8262548262548264, 'number': 719} | {'precision': 0.7885117493472585, 'recall': 0.9179331306990881, 'f1': 0.848314606741573, 'number': 329} | {'precision': 0.7945707997065297, 'recall': 0.7578726382085375, 'f1': 0.7757879656160458, 'number': 1429} | {'precision': 0.7962466487935657, 'recall': 0.8193103448275862, 'f1': 0.8076138681169274, 'number': 725} | {'precision': 0.867430441898527, 'recall': 0.9298245614035088, 'f1': 0.8975444538526672, 'number': 570} | {'precision': 0.71875, 'recall': 0.8070175438596491, 'f1': 0.7603305785123967, 'number': 57} | {'precision': 0.9633507853403142, 'recall': 0.9108910891089109, 'f1': 0.9363867684478371, 'number': 202} | {'precision': 0.6217948717948718, 'recall': 0.8660714285714286, 'f1': 0.7238805970149254, 'number': 112} |
- | 0.2702 | 3.0 | 771 | 0.1823 | {'precision': 0.7779111644657863, 'recall': 0.9012517385257302, 'f1': 0.8350515463917525, 'number': 719} | {'precision': 0.8337950138504155, 'recall': 0.9148936170212766, 'f1': 0.872463768115942, 'number': 329} | {'precision': 0.7869535045107564, 'recall': 0.7935619314205739, 'f1': 0.7902439024390244, 'number': 1429} | {'precision': 0.7981530343007915, 'recall': 0.8344827586206897, 'f1': 0.8159136884693189, 'number': 725} | {'precision': 0.8948247078464107, 'recall': 0.9403508771929825, 'f1': 0.9170230966638152, 'number': 570} | {'precision': 0.696969696969697, 'recall': 0.8070175438596491, 'f1': 0.7479674796747967, 'number': 57} | {'precision': 0.945, 'recall': 0.9356435643564357, 'f1': 0.9402985074626865, 'number': 202} | {'precision': 0.6666666666666666, 'recall': 0.8571428571428571, 'f1': 0.75, 'number': 112} |
+ | Training Loss | Epoch | Step | Validation Loss | Study Test | Manifestation | Finding | Specimen | Dose | Dose Qualification | Sex | Group |
+ |:-------------:|:-----:|:----:|:---------------:|:----------:|:-------------:|:-------:|:--------:|:----:|:------------------:|:---:|:-----:|
+ | No log | 1.0 | 257 | 0.2005 | {'precision': 0.6658227848101266, 'recall': 0.8708609271523179, 'f1': 0.7546628407460545, 'number': 302} | {'precision': 0.7647058823529411, 'recall': 0.9212598425196851, 'f1': 0.8357142857142856, 'number': 127} | {'precision': 0.6425339366515838, 'recall': 0.6719242902208202, 'f1': 0.6569005397070162, 'number': 634} | {'precision': 0.7099767981438515, 'recall': 0.8052631578947368, 'f1': 0.75462392108508, 'number': 380} | {'precision': 0.8969957081545065, 'recall': 0.9372197309417041, 'f1': 0.9166666666666667, 'number': 223} | {'precision': 0.6764705882352942, 'recall': 0.71875, 'f1': 0.696969696969697, 'number': 32} | {'precision': 0.7448979591836735, 'recall': 0.8690476190476191, 'f1': 0.8021978021978022, 'number': 84} | {'precision': 0.3880597014925373, 'recall': 0.5306122448979592, 'f1': 0.4482758620689655, 'number': 49} |
+ | 0.2932 | 2.0 | 514 | 0.1689 | {'precision': 0.8170347003154574, 'recall': 0.8576158940397351, 'f1': 0.8368336025848143, 'number': 302} | {'precision': 0.8226950354609929, 'recall': 0.9133858267716536, 'f1': 0.8656716417910448, 'number': 127} | {'precision': 0.6904400606980273, 'recall': 0.7176656151419558, 'f1': 0.7037896365042536, 'number': 634} | {'precision': 0.7746478873239436, 'recall': 0.868421052631579, 'f1': 0.8188585607940446, 'number': 380} | {'precision': 0.8870292887029289, 'recall': 0.9506726457399103, 'f1': 0.9177489177489178, 'number': 223} | {'precision': 0.7567567567567568, 'recall': 0.875, 'f1': 0.8115942028985507, 'number': 32} | {'precision': 0.8695652173913043, 'recall': 0.9523809523809523, 'f1': 0.909090909090909, 'number': 84} | {'precision': 0.6, 'recall': 0.673469387755102, 'f1': 0.6346153846153846, 'number': 49} |
+ | 0.2932 | 3.0 | 771 | 0.1810 | {'precision': 0.8215384615384616, 'recall': 0.8841059602649006, 'f1': 0.8516746411483254, 'number': 302} | {'precision': 0.8041958041958042, 'recall': 0.905511811023622, 'f1': 0.8518518518518519, 'number': 127} | {'precision': 0.6886657101865137, 'recall': 0.7570977917981072, 'f1': 0.7212622088655146, 'number': 634} | {'precision': 0.7944162436548223, 'recall': 0.8236842105263158, 'f1': 0.8087855297157622, 'number': 380} | {'precision': 0.8647540983606558, 'recall': 0.9461883408071748, 'f1': 0.9036402569593148, 'number': 223} | {'precision': 0.65, 'recall': 0.8125, 'f1': 0.7222222222222223, 'number': 32} | {'precision': 0.9285714285714286, 'recall': 0.9285714285714286, 'f1': 0.9285714285714286, 'number': 84} | {'precision': 0.5666666666666667, 'recall': 0.6938775510204082, 'f1': 0.6238532110091742, 'number': 49} |
 
 
  ### Framework versions
 
- - Transformers 4.35.2
- - Pytorch 2.1.0+cu118
- - Datasets 2.15.0
- - Tokenizers 0.15.0
+ - Transformers 4.38.2
+ - Pytorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
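The per-entity scores in the updated card follow the usual relationship F1 = 2PR/(P+R). As a quick sanity check (a minimal sketch, using the Dose entity's precision and recall from the new evaluation results above):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Dose entity from the updated evaluation results (support = 223).
dose_precision = 0.8647540983606558
dose_recall = 0.9461883408071748

dose_f1 = f1_score(dose_precision, dose_recall)
print(dose_f1)  # matches the reported f1 of 0.9036402569593148
```

The same identity holds for every row of the training-results table, since seqeval derives all three scores from the same entity-level true/false positive counts.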
config.json CHANGED
@@ -57,7 +57,7 @@
   "pad_token_id": 0,
   "position_embedding_type": "absolute",
   "torch_dtype": "float32",
- "transformers_version": "4.35.2",
+ "transformers_version": "4.38.2",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 28996
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8c2a4b6f530a6f0bc2aa319246c7a437b4e1cbfcd2aa0065c910ca38347aeba2
+ oid sha256:3310998b3007e3241d50263db893b2075c1a6b9d2163cd275f90a5c2197a1552
  size 430954348
runs/Apr04_21-52-04_ec306db945fe/events.out.tfevents.1712267525.ec306db945fe.765.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5fcb01cc00e984bbc73a686dc68fb1c076eef273afa9916000f21edd7766c839
+ size 6757
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9dcf50d238ab57bc197c55488c52792927aa76f64c524479a0e045ce65920c76
- size 4536
+ oid sha256:80484f580dbe9de0dc0cb2a29c6199bae154c9e7ece1db865994094b9eb0da9c
+ size 4856
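The `model.safetensors`, `training_args.bin`, and events-file entries above are Git LFS pointer files, not the binaries themselves: three `key value` lines (`version`, `oid`, `size`). A minimal sketch of parsing such a pointer into its fields, using the new `training_args.bin` pointer from this commit as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new training_args.bin pointer from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:80484f580dbe9de0dc0cb2a29c6199bae154c9e7ece1db865994094b9eb0da9c
size 4856
"""

info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```

This is why the diffs show only a changed `oid` and `size`: Git tracks the small pointer, while the actual object is stored in LFS and fetched by its sha256 hash.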