kaixkhazaki committed on
Commit 9c00368 · verified · 1 Parent(s): 70f0998

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED
 
# turkish-medical-question-answering

## Model description

This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased), optimized for medical-domain question answering in Turkish using the incidelen/MedTurkQuAD dataset.
It uses a BERT-based architecture with additional dropout regularization to prevent overfitting and is specifically trained to extract answers from medical text contexts.

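The "additional dropout regularization" mentioned above is a standard technique; the dropout rate actually used for this model is not shown in this diff. A minimal sketch of inverted dropout with a hypothetical `p=0.2`:

```python
import random

def dropout(values, p=0.2, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p,
    scale survivors by 1/(1-p) so the expected activation is unchanged.
    Hypothetical p=0.2; the rate used for this model is not shown here."""
    if not training or p == 0.0:
        return list(values)  # dropout is disabled at inference time
    rng = rng or random.Random(0)
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in values]
```

At inference (`training=False`) the input passes through unchanged, which is why the trained model needs no rescaling at prediction time.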
It achieves the following results on the test set:
- Loss: 1.2814
- Exact Match: 52.7881
- F1: 76.1437

Validation metrics:
- eval_loss: 1.2329986095428467
- eval_exact_match: 56.52724968314322
- eval_f1: 76.17448254104453

Test metrics:
- eval_loss: 1.2814178466796875
- eval_exact_match: 52.78810408921933
- eval_f1: 76.14367323441282
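The Exact Match and F1 figures above follow the usual SQuAD-style definitions; the repo's exact evaluation script is not shown in this diff, so this is a simplified sketch:

```python
from collections import Counter

def normalize(text: str) -> str:
    # Simplified: lowercase and collapse whitespace (the full SQuAD
    # evaluation also strips punctuation and articles).
    return " ".join(text.lower().split())

def exact_match(prediction: str, reference: str) -> float:
    # 1.0 only when the normalized strings are identical.
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction: str, reference: str) -> float:
    # Token-overlap F1 between predicted and reference answers.
    pred = normalize(prediction).split()
    ref = normalize(reference).split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

F1 rewards partial spans: predicting "kronik böbrek yetmezliği" against the reference "böbrek yetmezliği" scores 0.8 F1 but 0.0 Exact Match, which is why F1 (76.14) sits well above Exact Match (52.79) here.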
 
## Usage
```python
# (snippet setup collapsed in this diff view)
pipe(question=question, context=context)
```
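Since this is an extractive QA model, the usage snippet above ultimately selects the context span whose start and end scores sum highest. A toy sketch of that decoding step (hypothetical tokens and scores, not this model's actual outputs):

```python
def best_span(start_scores, end_scores, max_answer_len=30):
    # Pick (i, j) with i <= j < i + max_answer_len that maximizes
    # start_scores[i] + end_scores[j].
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_answer_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best_score = s + end_scores[j]
                best = (i, j)
    return best

# Hypothetical tokenized context and per-token scores.
tokens = ["Diyabet", "kan", "şekerinin", "yüksek", "olmasıdır"]
start_scores = [0.1, 2.0, 0.3, 0.2, 0.1]
end_scores = [0.0, 0.1, 0.2, 0.3, 2.5]
i, j = best_span(start_scores, end_scores)
answer = " ".join(tokens[i : j + 1])
```

The `max_answer_len` cap mirrors the length constraint that question-answering pipelines typically apply when enumerating candidate spans.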

## Intended uses & limitations