vincentmin committed · Commit 9d26c10 · Parent: 4f94bf4

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED

@@ -22,11 +22,11 @@ It achieves the following results on the evaluation set:
 - Loss: 0.4810
 - Accuracy: 0.7869
 
-See also [vincentmin/llama-2-7b-reward-oasst1](https://huggingface.co/vincentmin/llama-2-13b-reward-oasst1) for a 7b version of this model.
+See also [vincentmin/llama-2-7b-reward-oasst1](https://huggingface.co/vincentmin/llama-2-7b-reward-oasst1) for a 7b version of this model.
 
 ## Model description
 
-This is a reward model trained with QLoRA in 4bit precision. The base model is [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), for which you need to have accepted the license in order to be able to use it. Once you've been given permission, you can load the reward model as follows:
+This is a reward model trained with QLoRA in 4bit precision. The base model is [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf), for which you need to have accepted the license in order to be able to use it. Once you've been given permission, you can load the reward model as follows:
 ```
 import torch
 from peft import PeftModel, PeftConfig
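
The diffed snippet is truncated after its imports. A minimal sketch of how such a loading step typically continues, assuming the standard `peft`/`transformers` pattern for loading a LoRA adapter on top of a sequence-classification base model (the repo ids come from the diff; everything else is an assumption, not the author's exact code):

```python
# Hedged sketch: load the QLoRA reward adapter on top of the gated Llama-2 base model.
# Repo ids are taken from the README diff; the loading pattern is the standard
# peft/transformers one and is an assumption, not the author's exact code.

def load_reward_model(peft_model_id: str = "vincentmin/llama-2-13b-reward-oasst1"):
    """Return (model, tokenizer) for the reward model, loading the adapter via peft."""
    # Third-party imports are kept local so the sketch can be read
    # without these libraries installed.
    import torch
    from peft import PeftConfig, PeftModel
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # The adapter config records the base model it was trained on
    # (meta-llama/Llama-2-13b-chat-hf; its license must be accepted first).
    config = PeftConfig.from_pretrained(peft_model_id)
    base = AutoModelForSequenceClassification.from_pretrained(
        config.base_model_name_or_path,
        num_labels=1,  # a reward model emits a single scalar score
        torch_dtype=torch.float16,
    )
    # Attach the trained LoRA adapter weights on top of the base model.
    model = PeftModel.from_pretrained(base, peft_model_id)
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    return model, tokenizer
```

Scoring a prompt/response pair would then be tokenizing the text and reading the single logit the model returns; note that loading the 13b base model requires substantial memory (or 4-bit quantization via `bitsandbytes`).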