---
license: creativeml-openrail-m
pipeline_tag: image-classification
library_name: transformers
tags:
- deep-fake
- detection
---

![pipeline](dfd.jpg)


# **Image-Deep-Fake-Detector**

```
Classification report:

              precision    recall  f1-score   support

        Real     0.9933    0.9937    0.9935      4761
        Fake     0.9937    0.9933    0.9935      4760

    accuracy                         0.9935      9521
   macro avg     0.9935    0.9935    0.9935      9521
weighted avg     0.9935    0.9935    0.9935      9521
```
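For quick checks, here is a minimal inference sketch using the `transformers` pipeline API. The model id, image path, and `is_fake` helper below are illustrative assumptions, not part of this repository; substitute the actual repository id before running:

```python
def is_fake(predictions, threshold=0.5):
    """Return True when the 'Fake' score exceeds the threshold.

    `predictions` is a list of {"label": ..., "score": ...} dicts,
    the format an image-classification pipeline returns.
    """
    fake_score = next((p["score"] for p in predictions if p["label"] == "Fake"), 0.0)
    return fake_score > threshold


if __name__ == "__main__":
    from transformers import pipeline

    # Model id and image path are placeholders; substitute the real ones.
    clf = pipeline("image-classification", model="<org>/Image-Deep-Fake-Detector")
    preds = clf("example.jpg")
    print(preds, "-> FAKE" if is_fake(preds) else "-> REAL")
```

Raising the threshold trades recall for precision, which may be preferable when a false "Fake" accusation is costlier than a miss.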

The **precision score** is a key metric to evaluate the performance of a deep fake detector. Precision is defined as:

\[
\text{Precision} = \frac{\text{True Positives}}{\text{True Positives + False Positives}}
\]

It indicates how well the model avoids false positives: for a deep fake detector, precision on the "Fake" class measures how often content predicted as fake actually is fake, rather than real content mistakenly flagged.
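As a worked example, the formula can be checked with a small helper. The confusion-matrix counts below are illustrative and not taken from this model's evaluation:

```python
def precision(tp, fp):
    # Precision: share of positive predictions that are correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Recall: share of actual positives the model finds.
    return tp / (tp + fn)

# Illustrative counts for a "Fake" class: 993 true positives,
# 7 false positives (real images flagged as fake), 10 false negatives.
print(precision(993, 7))  # → 0.993
print(recall(993, 10))
```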

From the **classification report**, the precision values are:

- **Real:** 0.9933
- **Fake:** 0.9937
- **Macro average:** 0.9935
- **Weighted average:** 0.9935

### Key Observations:
1. **High precision (0.9933 for Real, 0.9937 for Fake):**  
   The model rarely misclassifies real content as fake and vice versa. This is critical for applications like deep fake detection, where false accusations (false positives) can have significant consequences.

2. **Macro and Weighted Averages (0.9935):**  
   The precision is evenly high across both classes, which shows that the model is well-balanced in its performance for detecting both real and fake content.

3. **Reliability of Predictions:**  
   With precision near 1.0, when the model predicts an image as fake (or real), it's highly likely to be correct. This is essential in reducing unnecessary manual verification in real-world applications like social media content moderation or fraud detection.

### ONNX Exchange

The model is converted to ONNX with the Space below, which writes the exported ONNX files directly to the repository using a Hugging Face write token.

🧪 : https://huggingface.co/spaces/prithivMLmods/convert-to-onnx-dir

![ONNX conversion Space](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/5T979tVYJ4jCKzlE6nOma.png)
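After export, the graph can be served with `onnxruntime`. The sketch below assumes the exported graph emits two logits in (Real, Fake) order and a file named `model.onnx`; both the label order and the file name are assumptions to verify against the actual export:

```python
import math

LABELS = ["Real", "Fake"]  # assumed label order; confirm against the model config

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits):
    """Map raw logits to the top (label, probability) pair."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

if __name__ == "__main__":
    import onnxruntime as ort  # pip install onnxruntime

    session = ort.InferenceSession("model.onnx")
    # Image preprocessing (resize, normalize, NCHW layout) is omitted here;
    # it must match the processor the model was exported with.
```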

### Conclusion:
The deep fake detector model demonstrates **excellent precision** for both the "Real" and "Fake" classes, indicating a highly reliable detection system with minimal false positives. Combined with similarly high recall and F1-score, the overall accuracy (99.35%) reflects that this is a robust and trustworthy model for identifying deep fakes.