TanviGupta committed
Commit 1312223 · 1 Parent(s): 171c80c

End of training

Files changed (1): README.md (+14 -20)
README.md CHANGED
@@ -1,18 +1,13 @@
  ---
- base_model: ybelkada/falcon-7b-sharded-bf16
+ library_name: peft
  tags:
+ - trl
+ - sft
  - generated_from_trainer
+ base_model: ybelkada/falcon-7b-sharded-bf16
  model-index:
  - name: falcon-7b-sharded-bf16-finetuned-mental-health-conversational
    results: []
- license: mit
- datasets:
- - heliosbrahma/mental_health_chatbot_dataset
- language:
- - en
- metrics:
- - rouge
- pipeline_tag: conversational
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -20,24 +15,22 @@ should probably proofread and complete it, then remove this comment. -->

  # falcon-7b-sharded-bf16-finetuned-mental-health-conversational

- This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on a custom [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) dataset.
+ This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.

  ## Model description

- This model is fine-tuned on a custom mental health conversational dataset. The rationale is to answer mental-health-related queries with factually verifiable responses rather than gibberish.
+ More information needed

  ## Intended uses & limitations

- The model was trained on a dataset that may contain sensitive information related to mental health. It is important to note that while mental health chatbots built using this model can be helpful, they are not a replacement for professional mental health care.
+ More information needed

  ## Training and evaluation data

- This model was trained on the custom [heliosbrahma/mental_health_chatbot_dataset](https://huggingface.co/datasets/heliosbrahma/mental_health_chatbot_dataset) dataset, which contains 172 conversational question-and-answer pairs.
+ More information needed

  ## Training procedure

- This model was fine-tuned on a custom dataset using the QLoRA technique, on the free-tier GPU available in Google Colab.
-
  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -50,7 +43,7 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.03
- - training_steps: 320
+ - training_steps: 20

  ### Training results

@@ -58,7 +51,8 @@ The following hyperparameters were used during training:

  ### Framework versions

- - Transformers 4.31.0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.2
- - Tokenizers 0.13.3
+ - PEFT 0.7.2.dev0
+ - Transformers 4.37.0.dev0
+ - Pytorch 2.1.1+cu121
+ - Datasets 2.16.1
+ - Tokenizers 0.15.0
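For reference, the QLoRA workflow described by the previous card (a 4-bit quantized base model plus trainable LoRA adapters, run on a free-tier Colab GPU) together with the new `trl`/`sft`/`peft` tags suggests a training setup along the lines of the sketch below. This is a minimal illustrative reconstruction, not the author's actual script: the LoRA rank/alpha/dropout, the batch-size settings, and the dataset's `text` column name are assumptions, while the base model, scheduler type, warmup ratio, and step count come from the card.

```python
# Hedged QLoRA fine-tuning sketch; values marked "assumed" are not from this commit.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_name = "ybelkada/falcon-7b-sharded-bf16"

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config,
    device_map="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Falcon's tokenizer has no pad token

# Train small low-rank adapters on top of the frozen quantized weights.
peft_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,                       # assumed scaling
    lora_dropout=0.05,                   # assumed
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)

dataset = load_dataset("heliosbrahma/mental_health_chatbot_dataset", split="train")

training_args = TrainingArguments(
    output_dir="falcon-7b-sharded-bf16-finetuned-mental-health-conversational",
    lr_scheduler_type="cosine",      # from the card
    warmup_ratio=0.03,               # from the card
    max_steps=20,                    # training_steps in the updated card
    per_device_train_batch_size=1,   # assumed; not shown in the diff
    gradient_accumulation_steps=4,   # assumed
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name for the Q&A pairs
    tokenizer=tokenizer,
)
trainer.train()
```

The optimizer line in the card (`Adam with betas=(0.9,0.999) and epsilon=1e-08`) matches the `Trainer` default, so no explicit optimizer argument is needed.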
 
 
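Because the updated card sets `library_name: peft`, the commit presumably stores a LoRA adapter rather than full model weights. A minimal inference sketch, assuming a hypothetical adapter repo id under the author's namespace and a `<HUMAN>`/`<ASSISTANT>` prompt convention:

```python
# Hedged inference sketch; the adapter repo id and prompt format are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "ybelkada/falcon-7b-sharded-bf16"
adapter_id = "TanviGupta/falcon-7b-sharded-bf16-finetuned-mental-health-conversational"  # assumed

base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "<HUMAN>: How can I cope with work-related stress?\n<ASSISTANT>:"  # assumed format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```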