Given a post P and two comments (A, B), we only included the preference A > B in the dataset if:

3. Neither comment was made by a deleted user, a moderator, or the post creator. The post was not made by a deleted user or moderator.
4. The post has a score >= 10 and each comment has a score >= 2 (upvoted at least once).
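As a sketch, the inclusion criteria above can be expressed as a single filter. The record layout here (dicts with `author`, `moderators`, and `score` keys) is illustrative only, not the dataset's actual schema:

```python
def passes_filters(post, comment_a, comment_b):
    """Return True if the (A, B) comment pair meets inclusion criteria 3-4.

    Field names are hypothetical; adapt them to your scraped records.
    """
    excluded = {post["author"], "[deleted]"} | set(post["moderators"])
    # 3. The post itself must not be by a deleted user or a moderator...
    if post["author"] == "[deleted]" or post["author"] in post["moderators"]:
        return False
    # ...and neither comment may be by a deleted user, a moderator,
    # or the post creator.
    if comment_a["author"] in excluded or comment_b["author"] in excluded:
        return False
    # 4. Post score >= 10 and each comment score >= 2.
    return post["score"] >= 10 and comment_a["score"] >= 2 and comment_b["score"] >= 2
```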
Reddit makes it very difficult to get anything beyond the top 1000 posts for each subreddit.
We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using Reddit's search function.
By doing this recursively, we scraped up to 7500 unique post IDs for each subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post.
We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.
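The recursive expansion can be sketched as a breadth-first crawl over post IDs. Here `find_similar` is a hypothetical stand-in for the similar-post search (issued through Reddit's API in practice), so the sketch stays offline and synchronous:

```python
from collections import deque

def collect_post_ids(top_posts, find_similar, max_ids=7500, fanout=25):
    """Expand a seed list of post IDs by repeatedly querying for similar
    posts, stopping once `max_ids` unique IDs have been collected.

    `find_similar(post_id, limit)` is an assumed callable; with AsyncPRAW
    you would issue these queries asynchronously instead.
    """
    seen = dict.fromkeys(top_posts)   # insertion-ordered set of unique IDs
    queue = deque(top_posts)
    while queue and len(seen) < max_ids:
        post_id = queue.popleft()
        for similar_id in find_similar(post_id, limit=fanout):
            if similar_id not in seen:
                seen[similar_id] = None
                queue.append(similar_id)
                if len(seen) >= max_ids:
                    break
    return list(seen)
```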
### Preprocessing
If you want to finetune a model to predict human preferences (e.g., for NLG evaluation), here are some suggestions:
Although models like FLAN-T5 use positional embeddings, we found that the loss would not converge if we finetuned them on the entire input.
To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however).
If this is still over 512 tokens, simply skip the example.
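One way to implement this truncation, keeping the comments intact and skipping examples whose comments alone exceed the budget. The `n_tokens` callable is an assumption; in practice it should wrap your model's (e.g., FLAN-T5's) tokenizer:

```python
def build_input(history, comment_a, comment_b, n_tokens, max_tokens=512):
    """Truncate the post text (`history`) so the full input fits in
    `max_tokens`; never truncate the comments. Returns None if the
    comments alone already exceed the budget (skip such examples).

    `n_tokens(text) -> int` is passed in so the sketch stays
    tokenizer-agnostic.
    """
    fixed = n_tokens(comment_a) + n_tokens(comment_b)
    budget = max_tokens - fixed
    if budget <= 0:
        return None                      # still over the limit: skip
    words = history.split()
    # crude proxy: drop trailing words until the history fits the budget
    while words and n_tokens(" ".join(words)) > budget:
        words.pop()
    return (" ".join(words), comment_a, comment_b)
```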
4. **Train for 1 epoch only**, as the [InstructGPT paper](https://arxiv.org/abs/2203.02155) suggests.
   Since the same comment appears in multiple preferences, it is easy to overfit to the data.
5. **Train on less data.**
   Preferences with a large score ratio (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
   The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.
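Both parts of this suggestion can be combined into one subsampling pass. `score_ratio` is a field in the dataset; the `post_id` key and the default thresholds here are illustrative assumptions:

```python
from collections import defaultdict

def subsample(preferences, min_ratio=2.0, max_per_post=5):
    """Keep only strong preferences (score_ratio >= min_ratio) and at
    most `max_per_post` preferences per post, to limit overfitting.

    Each preference is assumed to be a dict with `post_id` and
    `score_ratio` keys (illustrative names, not the exact schema).
    """
    per_post = defaultdict(int)
    kept = []
    # take the strongest preferences first, so the per-post cap keeps them
    for pref in sorted(preferences, key=lambda p: -p["score_ratio"]):
        if pref["score_ratio"] < min_ratio:
            break                        # sorted, so the rest are weaker
        if per_post[pref["post_id"]] < max_per_post:
            per_post[pref["post_id"]] += 1
            kept.append(pref)
    return kept
```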