---
library_name: transformers
tags:
- medical
- biology
---

# Chest X-ray Image Classifier

This repository contains a fine-tuned **Vision Transformer (ViT)** model for classifying chest X-ray images, trained on the **CheXpert** dataset. The model performs multi-label classification of lung conditions from chest radiographs, reaching 98.46% validation accuracy.

## Model Overview

The model is based on the **Vision Transformer (ViT)** architecture, which splits an image into fixed-size patches and applies self-attention across them to extract features. It was fine-tuned on the **CheXpert** dataset, which consists of labeled chest X-ray images for detecting conditions such as pneumonia and cardiomegaly.

## Performance

- **Final Validation Accuracy**: 98.46%
- **Final Training Loss**: 0.1069
- **Final Validation Loss**: 0.0980

Accuracy improved substantially over the course of training, and the training and validation losses ended close together (0.1069 vs. 0.0980), which suggests the model generalizes well to unseen chest X-ray images rather than overfitting the training set.

## Dataset

The dataset used for fine-tuning is the **CheXpert** dataset, which includes chest X-ray images from many patients with multi-label annotations. The data includes frontal and lateral views of the chest for each patient, annotated with labels for various lung diseases.

For more details on the dataset, visit the [CheXpert official website](https://stanfordmlgroup.github.io/chexpert/).

## Training Details

The model was fine-tuned with the following settings (a sketch of a comparable training loop follows the list):

- **Optimizer**: AdamW
- **Learning Rate**: 3e-5
- **Batch Size**: 32
- **Epochs**: 10
- **Loss Function**: Binary Cross-Entropy with Logits
- **Precision**: Mixed precision (via `torch.amp`)
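
For reference, here is a minimal training-loop sketch that matches the settings above. It is illustrative, not the exact script used for this model: the base checkpoint (`google/vit-base-patch16-224-in21k`), the 14-label head (CheXpert defines 14 labeled observations), and the random stand-in `train_loader` are all assumptions.

```python
import torch
from torch.amp import autocast, GradScaler
from transformers import ViTForImageClassification

device = "cuda"  # mixed precision as described assumes a CUDA device

# Assumed base checkpoint and label count; a new classification head is initialized.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=14,
    problem_type="multi_label_classification",
).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
criterion = torch.nn.BCEWithLogitsLoss()
scaler = GradScaler()  # scales the loss so fp16 gradients do not underflow

# Hypothetical stand-in for a real CheXpert DataLoader: random tensors, batch_size=32.
train_loader = [
    (torch.randn(32, 3, 224, 224), torch.randint(0, 2, (32, 14)).float())
    for _ in range(4)
]

for epoch in range(10):
    model.train()
    for pixel_values, labels in train_loader:
        pixel_values = pixel_values.to(device)
        labels = labels.to(device)  # multi-hot float vectors, shape (batch, 14)
        optimizer.zero_grad()
        with autocast(device_type="cuda"):  # mixed precision via torch.amp
            logits = model(pixel_values).logits
            loss = criterion(logits, labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
```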

## Usage

### Inference

To use the fine-tuned model for inference, load it from the Hugging Face Model Hub and pass in a chest X-ray image:

```python
from transformers import ViTForImageClassification, ViTFeatureExtractor
import torch
from PIL import Image

# Load model and feature extractor
# (ViTFeatureExtractor is deprecated in recent transformers releases;
# ViTImageProcessor is the drop-in replacement)
model = ViTForImageClassification.from_pretrained('codewithdark/chest-xray-classifier')
feature_extractor = ViTFeatureExtractor.from_pretrained('codewithdark/chest-xray-classifier')
model.eval()

# Prepare an image for prediction; X-rays are usually grayscale, ViT expects 3 channels
image = Image.open('path_to_chest_xray_image.jpg').convert('RGB')

# Preprocess the image and make predictions
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# Sigmoid turns the logits into independent per-label probabilities (multi-label setup)
predictions = torch.sigmoid(logits).squeeze()

# Display predictions
print(predictions)
```
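
Because this is a multi-label setup, each sigmoid output is an independent probability for one condition rather than a softmax over classes. A minimal post-processing sketch (the 0.5 cutoff is an assumed operating point, not a threshold this repo specifies):

```python
# Map per-label probabilities to condition names using the model config.
threshold = 0.5  # assumed default; in practice, tune per label
predicted = [
    model.config.id2label[i]
    for i, prob in enumerate(predictions.tolist())
    if prob > threshold
]
print(predicted)
```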

### Fine-Tuning

To fine-tune the model on your own dataset, follow the instructions in this repo, adapting the code to your dataset and training configuration. One way to prepare your data is sketched below.
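
As a starting point, here is one way to wrap your own images and multi-hot labels into a PyTorch `Dataset` that can feed the training loop above. The class name, file paths, and label vectors are all placeholders, not part of this repo.

```python
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from transformers import ViTFeatureExtractor

class XrayDataset(Dataset):
    """Wraps image paths and multi-hot label vectors for ViT fine-tuning."""

    def __init__(self, image_paths, labels, extractor):
        self.image_paths = image_paths
        self.labels = labels
        self.extractor = extractor

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # X-rays are usually grayscale; ViT expects 3-channel input
        image = Image.open(self.image_paths[idx]).convert("RGB")
        pixel_values = self.extractor(images=image, return_tensors="pt")["pixel_values"].squeeze(0)
        return pixel_values, torch.tensor(self.labels[idx], dtype=torch.float)

extractor = ViTFeatureExtractor.from_pretrained("codewithdark/chest-xray-classifier")
# Illustrative paths and label vectors; replace with your own data.
dataset = XrayDataset(["xray_001.jpg"], [[1.0, 0.0]], extractor)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```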

## Contributing

We welcome contributions! If you have suggestions, improvements, or bug fixes, feel free to fork the repository and open a pull request.

## License

This model is available under the MIT License. See [LICENSE](LICENSE) for more details.

## Acknowledgements

- [CheXpert Dataset](https://stanfordmlgroup.github.io/chexpert/)
- Hugging Face for providing the `transformers` library and Model Hub.

---

Happy coding! 🚀