SowmiyaG committed on
Commit 2e58b2e · verified · 1 Parent(s): ca6ab44

End of training

Files changed (1)
  1. README.md +8 -3
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 base_model: ybelkada/falcon-7b-sharded-bf16
+library_name: peft
 tags:
+- trl
+- sft
 - generated_from_trainer
 model-index:
 - name: falcon-7b-sharded-bf16-finetuned-mental-health-conversational
@@ -40,7 +43,8 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
-- training_steps: 25
+- training_steps: 15
+- mixed_precision_training: Native AMP
 
 ### Training results
 
@@ -48,7 +52,8 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.32.0
+- PEFT 0.12.0
+- Transformers 4.44.0
 - Pytorch 2.3.1+cu121
 - Datasets 2.13.1
-- Tokenizers 0.13.3
+- Tokenizers 0.19.1
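For reference, a minimal sketch of how the hyperparameters listed in the updated card could be expressed with `transformers.TrainingArguments`. The output directory, learning rate, batch size, and dataset wiring are not part of this diff and are assumptions for illustration only.

```python
# Hypothetical sketch only: maps the values listed in the updated card onto
# transformers.TrainingArguments. output_dir, learning rate, batch size, and
# the PEFT/TRL trainer setup are assumptions, not taken from this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-7b-sharded-bf16-finetuned-mental-health-conversational",  # assumed
    max_steps=15,                # training_steps: 15
    lr_scheduler_type="cosine",  # lr_scheduler_type: cosine
    warmup_ratio=0.03,           # lr_scheduler_warmup_ratio: 0.03
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,          # and epsilon=1e-08
    fp16=True,                   # mixed_precision_training: Native AMP (needs a GPU)
)
```

Given the new `trl`/`sft` tags and PEFT 0.12.0 in the framework list, arguments like these would typically be passed to a `trl.SFTTrainer` together with a `peft.LoraConfig`.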