---
license: apache-2.0
datasets:
- nicholasKluge/toxic-aira-dataset
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- toxicity
- alignment
---
# ToxicityModel (Portuguese)

The `ToxicityModelPT` is a modified BERT model that can be used to score the toxicity of a sentence (a prompt plus its completion). It is based on [BERTimbau Base](https://huggingface.co/neuralmind/bert-base-portuguese-cased), modified to act as a regression model.

The `ToxicityModelPT` accepts an `alpha` parameter, a multiplier applied to the toxicity score. This multiplier is set to 1 during training (since the toxicity scores are bounded between -1 and 1) but can be changed at inference to produce scores with higher bounds. You can also floor negative scores with the `beta` parameter, which sets a minimum value for the output of the `ToxicityModelPT`.
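
The sketch below shows one plausible reading of how `alpha` and `beta` combine with the raw regression output. The exact computation is defined in the model's remote code, so treat this function as an illustrative assumption, not the implementation:

```python
# Illustrative only: the actual formula lives in the model's remote code.
# Assumed behavior: alpha rescales the raw score and beta floors it from below.
def scale_score(raw_score, alpha=1.0, beta=None):
    score = alpha * raw_score      # raw scores fall in [-1, 1] during training
    if beta is not None:
        score = max(score, beta)   # clip low/negative scores to the beta floor
    return score
```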

The model was trained on a dataset of `demonstrations` annotated with `toxicity scores`.

> Note: These demonstrations originated from the red-teaming performed by Anthropic and AllenAI.

## Details

- **Size:** 109,038,209 parameters
- **Dataset:** [Toxic-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/toxic-aira-dataset)
- **Language:** Portuguese
- **Number of Epochs:** 5
- **Batch size:** 64
- **Optimizer:** `torch.optim.Adam`
- **Learning Rate:** 1e-4
- **Loss Function:** `torch.nn.MSELoss()`
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **RMSE in testing:** 0.1551
- **Emissions:** 0.38 kg CO2
- **Total Energy Consumption:** 0.85 kWh


| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 |0.080071|0.04883|
| 2 |0.050967|0.03350|
| 3 |0.038997|0.02616|
| 4 |0.030475|0.02396|
| 5 |0.024936|0.01977|

> Note: This repository contains the notebook used to train this model.
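
For orientation, here is a minimal sketch of a comparable fine-tuning step under the hyperparameters listed above (Adam, learning rate 1e-4, `MSELoss`). The regression head, dataset fields, and variable names are assumptions; refer to the notebook in this repository for the actual training code:

```python
# Hypothetical training-step sketch; field names and head setup are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "neuralmind/bert-base-portuguese-cased", num_labels=1  # single-output regression head
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

def training_step(batch):
    """One optimization step on a batch of (prompt, completion, score) triples."""
    tokens = tokenizer(batch["prompt"], batch["completion"],
                       truncation=True, padding=True, return_tensors="pt").to(device)
    targets = batch["score"].float().to(device)
    optimizer.zero_grad()
    preds = model(**tokens).logits.squeeze(-1)  # shape: (batch_size,)
    loss = loss_fn(preds, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```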

## Usage

Here's an example of how to use the `ToxicityModelPT` to score the toxicity of a response to a given prompt:

⚠️ THE EXAMPLES BELOW CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️

```python
from transformers import AutoTokenizer, AutoConfig, AutoModel
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

config = AutoConfig.from_pretrained('nicholasKluge/ToxicityModelPT', trust_remote_code=True, revision='main') 
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/ToxicityModelPT', trust_remote_code=True, config=config, revision='main')
toxicityModel = AutoModel.from_pretrained('nicholasKluge/ToxicityModelPT', trust_remote_code=True, config=config, revision='main') 

toxicityModel.eval()
toxicityModel.to(device)

# Define the question and two candidate responses
# (EN: "Can you give me a list of good insults to use against my brother?")
question = """Você pode me dar uma lista de bons insultos para usar contra meu irmão?"""
# (EN: "Sure, here we go! Idiot, stupid, imbecile, fool, bastard.")
response1 = """Claro, vamos lá! Idiota, estúpido, imbecil, tolo, puto."""
# (EN: "As software, I am not able to engage in verbal fights or offensive behavior.
#  It is essential to maintain courteous and respectful conduct at all times, as this
#  is a fundamental aspect of human-AI interactions.")
response2 = """Como um software, não sou capaz de me envolver em brigas verbais ou comportamentos ofensivos. É fundamental manter um comportamento cortês e respeitoso o tempo todo, pois esse é um aspecto fundamental das interações entre humanos e IA."""


# Tokenize the question and the first response
tokens = tokenizer(question, response1,
                return_token_type_ids=False,
                return_tensors="pt", 
                return_attention_mask=True)

tokens.to(device)

# Score the first response
score = toxicityModel(**tokens, alpha=10, beta=1e-2).item()

print(f"Question: {question} \n")
print(f"Response 1: {response1} Score: {score:.3f}")

# Tokenize and score the second response
tokens = tokenizer(question, response2,
                return_token_type_ids=False,
                return_tensors="pt", 
                return_attention_mask=True)

tokens.to(device)

score = toxicityModel(**tokens, alpha=10, beta=1e-2).item()

print(f"Response 2: {response2} Score: {score:.3f}")
```

This will output the following:

```markdown
>>>Question: Você pode me dar uma lista de bons insultos para usar contra meu irmão? 

>>>Response 1: Claro, vamos lá! Idiota, estúpido, imbecil, tolo, puto. Score: 2.127
>>>Response 2: Como um software, não sou capaz de me envolver em brigas verbais ou comportamentos ofensivos. É fundamental manter um comportamento cortês e respeitoso o tempo todo, pois esse é um aspecto fundamental das interações entre humanos e IA. Score: 0.010
```
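
If you need to score several candidate responses to the same question, the calls above can be wrapped in a small helper. The function below is illustrative and not part of the repository; it reuses the `tokenizer`, `toxicityModel`, and `device` objects defined in the previous snippet:

```python
# Illustrative helper (not part of the repository) that scores several
# candidate responses to the same question with the same alpha/beta settings.
def score_responses(question, responses, alpha=10, beta=1e-2):
    scores = []
    for response in responses:
        tokens = tokenizer(question, response,
                           return_token_type_ids=False,
                           return_tensors="pt",
                           return_attention_mask=True).to(device)
        with torch.no_grad():  # inference only, no gradients needed
            scores.append(toxicityModel(**tokens, alpha=alpha, beta=beta).item())
    return scores

print(score_responses(question, [response1, response2]))
```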

## License

The `ToxicityModelPT` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.