---
tags:
- image-classification
- timm
- owkin
- biology
- cancer
- lung
library_name: timm
license: apache-2.0
datasets:
- 1aurent/LC25000
metrics:
- accuracy
pipeline_tag: image-classification
---

# Model card for vit_base_patch16_224.owkin_pancancer_ft_lc25000_lung

A Vision Transformer (ViT) image classification model. \
Trained by Owkin on 40M pan-cancer histology tiles from TCGA. \
Fine-tuned on LC25000's lung subset.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 85.8
  - Image size: 224 x 224 x 3
- **Papers:**
  - Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling: https://www.medrxiv.org/content/10.1101/2023.07.21.23292757v2
- **Dataset:** TCGA: https://portal.gdc.cancer.gov/
- **Original:** https://github.com/owkin/HistoSSLscaling/
- **License:** https://github.com/owkin/HistoSSLscaling/blob/main/LICENSE.txt

## Model Usage

### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/assets/1aurent/LC25000/--/default/train/0/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_lung",
  pretrained=True,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
```
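The `output` above is a tensor of raw logits. To turn them into class probabilities, apply a softmax. A minimal sketch with dummy logits standing in for `output` (the three-class shape follows LC25000's lung subset; the class ordering here is illustrative only, check the model's label mapping):

```python
import torch

# hypothetical logits standing in for `output` from the model call above;
# LC25000's lung subset has 3 classes, so the classifier head outputs 3 logits
logits = torch.tensor([[2.5, 0.3, -1.1]])

probs = logits.softmax(dim=1)               # normalize logits to probabilities
top_prob, top_idx = probs.topk(k=1, dim=1)  # most likely class and its probability
```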

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

# get example histology image
img = Image.open(
  urlopen(
    "https://datasets-server.huggingface.co/assets/1aurent/LC25000/--/default/train/0/image/image.jpg"
  )
)

# load model from the hub
model = timm.create_model(
  model_name="hf-hub:1aurent/vit_base_patch16_224.owkin_pancancer_ft_lc25000_lung",
  pretrained=True,
  num_classes=0,
).eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
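With `num_classes=0`, each image yields a single feature vector, which can be compared across tiles. A minimal sketch using cosine similarity on dummy embeddings standing in for two `output` tensors from the call above (768 is the ViT-Base hidden size):

```python
import torch
import torch.nn.functional as F

# dummy embeddings standing in for two `output` tensors from the model above
emb_a = torch.randn(1, 768)
emb_b = torch.randn(1, 768)

# cosine similarity between the two tiles' embeddings, in [-1, 1]
sim = F.cosine_similarity(emb_a, emb_b)
```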

## Citation
```bibtex
@article {Filiot2023.07.21.23292757,
  author = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year = {2023},
  doi = {10.1101/2023.07.21.23292757},
  publisher = {Cold Spring Harbor Laboratory Press},
  URL = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757},
  eprint = {https://www.medrxiv.org/content/early/2023/09/14/2023.07.21.23292757.full.pdf},
  journal = {medRxiv}
}
```