# Multi-Task BERT Model for Fake News, Hate Speech, and Toxicity Detection

## Model Description
This model is a multi-task learning framework based on BERT (bert-base-german-cased), designed to perform binary classification on three tasks simultaneously:
- Fake News Detection
- Hate Speech Detection
- Toxicity Detection
The model uses a shared BERT encoder with a task-specific fully connected head for each classification task. It is fine-tuned on task-specific labeled datasets and supports German input text.
## Model Architecture

- Base Model: bert-base-german-cased
- Classifier Heads: a fully connected layer with one output unit per task (fake news, hate speech, and toxicity)
- Activation Function: sigmoid activation on each head's output for binary classification (a possible training objective is sketched below)
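The card does not state how the three heads are trained jointly. A minimal sketch of one common choice, assuming equally weighted binary cross-entropy losses on the raw logits (the loss weighting and `multi_task_loss` helper are assumptions, not part of the released code):

```python
import torch
import torch.nn as nn

# Assumption: the three heads are trained jointly with equally weighted
# binary cross-entropy losses on their raw logits (BCEWithLogitsLoss
# applies the sigmoid internally).
bce = nn.BCEWithLogitsLoss()

def multi_task_loss(fake_news_logits, hate_speech_logits, toxicity_logits,
                    fake_news_labels, hate_speech_labels, toxicity_labels):
    """Sum of per-task BCE losses; labels are float tensors of 0.0/1.0."""
    loss_fake = bce(fake_news_logits.squeeze(-1), fake_news_labels)
    loss_hate = bce(hate_speech_logits.squeeze(-1), hate_speech_labels)
    loss_tox = bce(toxicity_logits.squeeze(-1), toxicity_labels)
    return loss_fake + loss_hate + loss_tox
```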
## Intended Use
This model is intended for applications in:
- Social media monitoring
- Content moderation
- Research on online discourse in German
### Example Use Case
The model can analyze German text to predict whether it contains:
- Fake news (1: True, 0: False)
- Hate speech (1: True, 0: False)
- Toxicity (1: True, 0: False)
## Usage

### Requirements

```bash
pip install torch transformers
```

### Code Example
```python
import torch
import torch.nn as nn
from transformers import BertTokenizer, BertModel

# Load the tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')

# Define the multi-task model: a shared BERT encoder with one
# single-unit linear head per task
class MultiTaskModel(nn.Module):
    def __init__(self):
        super(MultiTaskModel, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-german-cased')
        self.dropout = nn.Dropout(0.3)
        self.fc_fake_news = nn.Linear(self.bert.config.hidden_size, 1)
        self.fc_hate_speech = nn.Linear(self.bert.config.hidden_size, 1)
        self.fc_toxicity = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled_output = outputs[1]  # pooled [CLS] representation
        pooled_output = self.dropout(pooled_output)
        fake_news_output = self.fc_fake_news(pooled_output)
        hate_speech_output = self.fc_hate_speech(pooled_output)
        toxicity_output = self.fc_toxicity(pooled_output)
        return fake_news_output, hate_speech_output, toxicity_output

# Function to load the fine-tuned weights
def load_model(device):
    model = MultiTaskModel().to(device)
    model.load_state_dict(torch.load('path_to_your_model.pt', map_location=device))
    model.eval()
    return model

# Example usage
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = load_model(device)

text_input = "Mir fallen nur Steuervorteile durch Gender Pay gap ein."
encoding = tokenizer(text_input, return_tensors='pt', padding='max_length',
                     truncation=True, max_length=128)
input_ids = encoding['input_ids'].to(device)
attention_mask = encoding['attention_mask'].to(device)

# Predict: apply the sigmoid to each head's logit and round to 0/1
with torch.no_grad():
    outputs_fake_news, outputs_hate_speech, outputs_toxicity = model(input_ids, attention_mask)

preds_fake_news = torch.sigmoid(outputs_fake_news).squeeze().round().cpu().numpy()
preds_hate_speech = torch.sigmoid(outputs_hate_speech).squeeze().round().cpu().numpy()
preds_toxicity = torch.sigmoid(outputs_toxicity).squeeze().round().cpu().numpy()

print(f"Fake News Prediction: {preds_fake_news}")
print(f"Hate Speech Prediction: {preds_hate_speech}")
print(f"Toxicity Prediction: {preds_toxicity}")
```
## Dataset

The model is intended to be fine-tuned on task-specific German-language datasets. Ensure the training data carries binary labels for fake news, hate speech, and toxicity.
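The card does not specify a dataset format. One plausible layout, assuming each example carries a German text plus one binary label per task, is a simple PyTorch `Dataset` like this hypothetical sketch:

```python
import torch
from torch.utils.data import Dataset

class MultiTaskTextDataset(Dataset):
    """Hypothetical layout: each item holds a German text plus one binary
    label per task (fake news, hate speech, toxicity)."""
    def __init__(self, texts, fake_news_labels, hate_speech_labels,
                 toxicity_labels, tokenizer, max_length=128):
        self.texts = texts
        self.labels = list(zip(fake_news_labels, hate_speech_labels, toxicity_labels))
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        enc = self.tokenizer(self.texts[idx], padding='max_length',
                             truncation=True, max_length=self.max_length,
                             return_tensors='pt')
        fake, hate, tox = self.labels[idx]
        return {
            'input_ids': enc['input_ids'].squeeze(0),
            'attention_mask': enc['attention_mask'].squeeze(0),
            'fake_news_label': torch.tensor(fake, dtype=torch.float),
            'hate_speech_label': torch.tensor(hate, dtype=torch.float),
            'toxicity_label': torch.tensor(tox, dtype=torch.float),
        }
```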
## Evaluation
The model is evaluated using:
- Binary classification metrics: F1-score, precision, recall, and accuracy (see the sketch after this list).
- Task-specific benchmarks using separate test sets for each task.
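One way to compute these metrics per task is shown below. This is a minimal sketch assuming scikit-learn is available (it is not listed in the requirements above), with hypothetical labels and predictions:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_task(y_true, y_pred, task_name):
    """Report accuracy, precision, recall, and F1 for one task's
    binary predictions (lists or arrays of 0/1 values)."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average='binary', zero_division=0)
    accuracy = accuracy_score(y_true, y_pred)
    print(f"{task_name}: acc={accuracy:.3f} precision={precision:.3f} "
          f"recall={recall:.3f} f1={f1:.3f}")

# Example with made-up labels/predictions for the fake news task:
evaluate_task([0, 1, 1, 0], [0, 1, 0, 0], "Fake News")
```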
## Limitations and Bias
- Language Support: The model only supports German text.
- Dataset Bias: Predictions may reflect biases present in the training data.
- Task-Specific Limitations: Performance may degrade on ambiguous inputs or on text that belongs to more than one category.
## Citation
If you use this model, please cite:
```bibtex
@article{example2025,
  title={Multi-Task BERT Model for Fake News, Hate Speech, and Toxicity Detection},
  author={Shivang Sinha},
  year={2025}
}
```