## Model Details

- **Unlearning**:
  - **Task**: [TOFU - Forget05](https://arxiv.org/abs/2401.06121)
  - **Method**: [SimNPO](https://arxiv.org/abs/2410.07163)
  - **Base Model**: [🤗meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
## Evaluation Results

||Forgetting Quality (FQ)|Model Utility (MU)|
|---|---|---|
|Origin|0.00|0.62|
|Retrain|1.00|0.62|
If you use this model in your research, please cite:

```
@article{fan2024simplicity,
  title={Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning},
  author={Fan, Chongyu and Liu, Jiancheng and Lin, Licong and Jia, Jinghan and Zhang, Ruiqi and Mei, Song and Liu, Sijia},
  journal={arXiv preprint arXiv:2410.07163},
  year={2024}
}
```