Commit 920af01
Parent(s): 3cb895b
Update README.md

README.md CHANGED
@@ -28,31 +28,18 @@ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co
6. We then trained the same model on the noisy data and applied it to a held-out test set from the original data split.
7. Training with a couple of thousand noisy "positives" and "negatives" yielded a test set accuracy of about 95%.

-Accuracy results for Logistic Regression (LR) and BERT (base-cased) are shown below:
-
-| Accuracy | Kaggle | Enhanced noisy data set |
-| -------- | ------ | ----------------------- |
-| LR       | 79.0%  | 95.1%                   |
-| BERT     | 88.7%  | 95.2%                   |
-
-Here we describe the process in more detail, along with other metrics: https://drive.google.com/file/d/1MI9gRdppactVZ_XvhCwvoaOV1aRfprrd/view?usp=sharing
+Accuracy results for Logistic Regression (LR) and BERT (base-cased) are shown in the attached PDF:
+
+https://drive.google.com/file/d/1MI9gRdppactVZ_XvhCwvoaOV1aRfprrd/view?usp=sharing

## Model description

+BERT model trained on noisy data from search results. See the PDF for more details.

## Intended uses & limitations

-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Intended for sentiment analysis of finance news, with three labels: "Positive", "Neutral" and "Negative".

### Training hyperparameters
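Steps 6-7 above describe holding out a test split from the noisy data and measuring accuracy for both the Logistic Regression baseline and BERT. The sketch below only illustrates that evaluation loop: the placeholder rows, the TF-IDF featurization for the LR baseline, and the split settings are assumptions, not details taken from this card or the linked PDF.

```python
# Hypothetical sketch of the held-out evaluation in steps 6-7; the real noisy
# data, its size, and the LR feature set are not given in the card.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder rows standing in for the couple of thousand noisy examples.
texts = [
    "Company beats quarterly estimates and raises guidance",
    "Stock plunges after profit warning",
    "Regulator fines bank over reporting failures",
    "Shares rally on strong earnings report",
    "Firm announces layoffs amid falling demand",
    "Dividend increased for the fifth straight year",
    "Credit rating downgraded on debt concerns",
    "Record revenue driven by new product line",
]
labels = ["Positive", "Negative", "Negative", "Positive",
          "Negative", "Positive", "Negative", "Positive"]

# Step 6: hold out a test split from the same noisy set.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

# Logistic Regression baseline; TF-IDF features are an assumption here.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Step 7 reports ~95% accuracy on the real data; this toy split only shows the mechanics.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```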
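The new "Intended uses & limitations" text says the model targets finance-news sentiment with the labels "Positive", "Neutral" and "Negative". A hedged usage sketch with the transformers pipeline follows; the repository id is a placeholder and the label mapping is assumed, since neither appears in this diff.

```python
# Hypothetical inference sketch; the repo id below is a placeholder and the
# label names/order are assumed.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/bert-finance-sentiment",  # placeholder, not the real repo id
)

result = classifier("ACME Corp shares jump 8% after a strong earnings report.")
print(result)  # e.g. [{'label': 'Positive', 'score': 0.97}] if the head maps to the three labels above
```

For headlines that carry no clear signal, the model is expected to fall back to the "Neutral" label, per the intended-use note above.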