jenhung committed
Commit 80eac1a · 1 Parent(s): 31ca045

Initial commit

README.md CHANGED
@@ -1,12 +1,62 @@
- ---
- title: Afm Analysis Web
- emoji: 🌖
- colorFrom: gray
- colorTo: indigo
- sdk: gradio
- sdk_version: 4.31.3
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # **Stratum corneum nanotexture feature detection using deep learning and spatial analysis: a non-invasive tool for skin barrier assessment**
+
+ <img src="./source/Overview.png" alt="Data Processing" width="95%" />
+
+ This repository provides an automated pipeline for processing atomic force microscopy (AFM) data, enabling the construction of an extensive database for further academic investigation and visualization. The program integrates the critical steps: converting raw AFM data into PNG files, applying computer vision techniques, and running state-of-the-art deep learning models to detect circular nano objects (CNOs) and classify skin diseases. The pipeline also uses a grid search to determine optimal hyperparameter settings, improving performance and the reliability of the results.
+
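+ As a minimal sketch of that grid-search step (illustrative only; the full logic lives in `utils/CNO_KDE_Integration.py`, and the coordinates below are hypothetical), the kernel density estimation (KDE) bandwidth is chosen by cross-validation:
+
+ ```
+ import numpy as np
+ from sklearn.neighbors import KernelDensity
+ from sklearn.model_selection import GridSearchCV
+
+ coords = np.random.randint(0, 512, size=(50, 2))  # stand-in CNO centres (x, y) in pixels
+ kde = KernelDensity(kernel='gaussian', metric='euclidean', algorithm='ball_tree')
+ gs = GridSearchCV(kde, {'bandwidth': np.linspace(20, 60, 41)}, cv=5)  # scan 20-60 px
+ gs.fit(coords)
+ print(gs.best_params_['bandwidth'])
+ ```
+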
+ ## **Dependencies**
+ - Python 3.9+
+ - matplotlib
+ - numpy
+ - opencv-python
+ - scipy
+ - scikit-image
+ - ultralytics
+ - customtkinter
+ - scikit-learn
+
+ ## **Directories**
+ - `AD_Assessment_GUI.zip` contains a cross-platform executable GUI, sample data, and a tutorial video.
+ - Folder `corneocyte dataset` contains the original corneocyte nanotexture images and the annotated images used to train the AI models.
+ - Folder `models` contains our fine-tuned YOLOv8-{N,S,M,L,X} and YOLOv9-{C,E} models.
+
+ ## **Usage**
+ 1. Execution via the cross-platform executable GUI
+    - Unzip `AD_Assessment_GUI.zip`
+    - Run `AD_Assessment_GUI.exe`
+    - Analysis results are saved within the selected path in a folder titled `CNO_Detection`
+
+ 2. Execution via Python script
+    - Install the required packages in a terminal:
+      ```
+      pip install -r requirements.txt
+      ```
+    - Run `AD_Assessment_GUI.py`
+    - Analysis results are saved within the selected path in a folder titled `CNO_Detection`
+
+ ## **Executable**
+
+ 1. Install PyInstaller in a terminal:
+
+    ```
+    pip install pyinstaller
+    ```
+
+ 2. Build the executable:
+
+    ```
+    pyinstaller --onedir .\AD_Assessment_GUI.py
+    ```
+
+ ## **Contributions**
+
+ [1] Liao, H-S., Wang, J-H., Raun, E., Nørgaard, L. O., Dons, F. E., & Hwu, E. E-T. (2022). Atopic Dermatitis Severity Assessment using High-Speed Dermal Atomic Force Microscope. Abstract from AFM BioMed Conference 2022, Nagoya-Okazaki, Japan.
+
+ [2] Pereda, J., Liao, H-S., Werner, C., Wang, J-H., Huang, K-Y., Raun, E., Nørgaard, L. O., Dons, F. E., & Hwu, E. E-T. (2022). Hacking Consumer Electronics for Biomedical Imaging. Abstract from 5th Global Conference on Biomedical Engineering & Annual Meeting of TSBME, Taipei, Taiwan.
+
+ [3] Liao, H-S., Akhtar, I., Werner, C., Slipets, R., Pereda, J., Wang, J-H., Raun, E., Nørgaard, L. O., Dons, F. E., & Hwu, E. E-T. (2022). Open-source controller for low-cost and high-speed atomic force microscopy imaging of skin corneocyte nanotextures. HardwareX, 12, e00341. https://doi.org/10.1016/j.ohx.2022.e00341
+
+ ----
+
+ ### Contact: [Jen-Hung Wang](mailto:[email protected]) / [Professor En-Te Hwu](mailto:[email protected])
models/YOLOv8-L_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94088cd0fcba7758323f2b26aca84cf8a5b368917c39282f9a5da873baefa3d0
+ size 87619390
models/YOLOv8-M_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:177d75ec783ccd7159d6c1c7d3eb3e066e37123afb7882b57f91042548a146b9
+ size 52000800
models/YOLOv8-N_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e9cfcf31a8160ee9eb7ec8f5b54531224af571b516c80bd567858a3af26f1a2
+ size 6260697
models/YOLOv8-S_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:93627f63d6076b2ff6967495ebfb9f812bb52a0f071891d30e34952f4489a9e0
+ size 22529177
models/YOLOv8-X_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff2b1c64c355730334537d99b59576ff8596126ce6fc8d8f0274d48b49c0b7f4
+ size 136690238
models/YOLOv9-C_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2410809a30ec6a8acac2da28b4780aac25a1d97332648d815607c1427345737e
+ size 102767066
models/YOLOv9-E_CNO_Detection.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5da00cae99ae6ea6676556e8ea11ce06002a0aa01d4b59eebb54b24e2fa0e289
+ size 139940794
requirements.txt ADDED
@@ -0,0 +1,10 @@
+ gradio
+ matplotlib
+ numpy
+ pandas
+ Pillow
+ scikit-learn
+ scipy
+ scikit-image
+ ultralytics
+ opencv-python
utils/CNO_KDE_Integration.py ADDED
@@ -0,0 +1,344 @@
+ # Copyright 2024 Jen-Hung Wang, IDUN Section, Department of Health Technology, Technical University of Denmark (DTU)
+
+ import time
+ import sys
+ import warnings
+ import csv
+ import cv2
+ import math
+ from pathlib import Path
+ from utils.growcut import *  # star import also provides os, np (numpy), and plt (matplotlib.pyplot)
+ from ultralytics import YOLO
+ from sklearn.neighbors import KernelDensity
+ from sklearn.model_selection import GridSearchCV
+
+ warnings.filterwarnings('ignore')
+ DIR_NAME = Path(os.path.dirname(__file__)).parent
+ np.set_printoptions(threshold=sys.maxsize)
+ # Use GPU
+ # torch.cuda.set_device(0)  # Set to your desired GPU number
+
+ # Model Path
+ DETECTION_MODEL_n = os.path.join(DIR_NAME, 'models', 'YOLOv8-N_CNO_Detection.pt')
+ DETECTION_MODEL_s = os.path.join(DIR_NAME, 'models', 'YOLOv8-S_CNO_Detection.pt')
+ DETECTION_MODEL_m = os.path.join(DIR_NAME, 'models', 'YOLOv8-M_CNO_Detection.pt')
+ DETECTION_MODEL_l = os.path.join(DIR_NAME, 'models', 'YOLOv8-L_CNO_Detection.pt')
+ DETECTION_MODEL_x = os.path.join(DIR_NAME, 'models', 'YOLOv8-X_CNO_Detection.pt')
+ # DETECTION_MODEL_c = os.path.join(DIR_NAME, 'models', 'YOLOv9-C_CNO_Detection.pt')
+ # DETECTION_MODEL_e = os.path.join(DIR_NAME, 'models', 'YOLOv9-E_CNO_Detection.pt')
+
+
+ def numcat(arr):
+     # Flatten each (row, col) coordinate into a single integer (row * 1000 + col)
+     # so pixel membership can be tested quickly with np.isin below
+     arr_size = arr.shape[0]
+     arr_cat = np.empty([arr_size, 1], dtype=np.int32)
+     for i in range(arr_size):
+         arr_cat[i] = arr[i][0] * 1000 + arr[i][1]
+     return arr_cat
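+
+ # Worked example (hypothetical values): numcat(np.array([[12, 34], [56, 78]]))
+ # returns [[12034], [56078]]; both image dimensions stay below 1000 px in this
+ # pipeline, so the encoding is collision-free.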
+
+
+ def cno_detection(source, kde_dir, conf, cno_model, file_list, model_type):
+
+     # Declare Parameters
+     cno_col = []
+     total_layer_area = []
+     total_layer_cno = []
+     total_layer_density = []
+     avg_area_col = []
+     total_area_col = []
+
+     detection_results = cno_model.predict(source, save=False, save_txt=False, iou=0.5, conf=conf, max_det=1200)
+
+     # CNO Analysis
+     for idx, result in enumerate(detection_results):
+         CNO = len(result.boxes)
+         single_layer_area = []
+         single_layer_cno = []
+         single_layer_density = []
+         total_area = 0
+         if CNO < 5:
+             # Too few detections for a meaningful KDE; record NaN for all 25 layers
+             avg_area_col.append(np.nan)
+             total_area_col.append(np.nan)
+             nan_arr = np.empty([25])
+             nan_arr[:] = np.nan
+             total_layer_area.append(nan_arr)
+             total_layer_cno.append(nan_arr)
+             total_layer_density.append(nan_arr)
+         else:
+             CNO_coor = np.empty([CNO, 2], dtype=int)
+             for j in range(CNO):
+                 w = result.boxes.xywh[j][2]
+                 h = result.boxes.xywh[j][3]
+                 # Ellipse area in µm²: the 512 x 512 px scan covers 20 µm x 20 µm
+                 area = (math.pi * w * h / 4) * 20 * 20 / (512 * 512)
+                 total_area += area
+                 bbox_img = result.orig_img
+                 x = round(result.boxes.xywh[j][0].item())
+                 y = round(result.boxes.xywh[j][1].item())
+
+                 x1 = round(result.boxes.xyxy[j][0].item())
+                 y1 = round(result.boxes.xyxy[j][1].item())
+                 x2 = round(result.boxes.xyxy[j][2].item())
+                 y2 = round(result.boxes.xyxy[j][3].item())
+
+                 CNO_coor[j] = [x, y]
+                 bbox_img = cv2.rectangle(bbox_img, (x1, y1), (x2, y2), (0, 255, 0), 1)
+
+             avg_area = total_area / CNO
+             avg_area_col.append(round(avg_area.item(), 4))
+             total_area_col.append(round(total_area.item(), 4))
+
+             cv2.imwrite(os.path.join(kde_dir, '{}_{}_{}_bbox.png'.format(file_list[idx], model_type, conf)), bbox_img)
+
+             kde = KernelDensity(metric='euclidean', kernel='gaussian', algorithm='ball_tree')
+
+             # Finding Optimal Bandwidth
+             ti = time.time()
+             fold = CNO if CNO < 7 else 7
+             gs = GridSearchCV(kde, {'bandwidth': np.linspace(20, 60, 41)}, cv=fold)
+             cv = gs.fit(CNO_coor)
+             bw = cv.best_params_['bandwidth']
+             tf = time.time()
+             print("Finding optimal bandwidth={:.2f} ({:n}-fold cross-validation): {:.2f} secs".format(bw, cv.cv, tf - ti))
+             kde.bandwidth = bw
+             _ = kde.fit(CNO_coor)
+
+             # Evaluate the fitted KDE on every pixel of the image grid
+             xgrid = np.arange(0, bbox_img.shape[1], 1)
+             ygrid = np.arange(0, bbox_img.shape[0], 1)
+             xv, yv = np.meshgrid(xgrid, ygrid)
+             xys = np.vstack([xv.ravel(), yv.ravel()]).T
+             gdim = xv.shape
+             zi = np.arange(xys.shape[0])
+             zXY = xys
+             z = np.exp(kde.score_samples(zXY))
+             zg = -9999 + np.zeros(xys.shape[0])
+             zg[zi] = z
+
+             xyz = np.hstack((xys[:, :2], zg[:, None]))
+             x = xyz[:, 0].reshape(gdim)
+             y = xyz[:, 1].reshape(gdim)
+             z = xyz[:, 2].reshape(gdim)
+             levels = np.linspace(0, z.max(), 26)
+             print("levels", levels)
+
+             # For each density level, count the pixels at or above it and the CNOs
+             # inside that region, then convert to CNOs per µm² (400 µm² per scan)
+             for j in range(len(levels) - 1):
+                 area = np.argwhere(z >= levels[j])
+                 area_concatenate = numcat(area)
+                 CNO_concatenate = numcat(CNO_coor)
+                 ecno = np.count_nonzero(np.isin(area_concatenate, CNO_concatenate))
+                 layer_area = area.shape[0]
+                 if layer_area == 0:
+                     density = np.round(0.0, 4)
+                 else:
+                     density = np.round((ecno / layer_area) * 512 * 512 / 400, 4)
+                 print("Level {}: Area={}, CNO={}, density={}".format(j, layer_area, ecno, density))
+                 single_layer_area.append(layer_area)
+                 single_layer_cno.append(ecno)
+                 single_layer_density.append(density)
+
+             total_layer_area.append(single_layer_area)
+             total_layer_cno.append(single_layer_cno)
+             total_layer_density.append(single_layer_density)
+
+             # Plot CNO Distribution
+             plt.contourf(x, y, z, levels=levels, cmap=plt.cm.bone)
+             plt.axis('off')
+             plt.gcf().set_size_inches(8 * (gdim[1] / gdim[0]), 8)
+             plt.gca().invert_yaxis()
+             plt.xlim(0, gdim[1] - 1)
+             plt.ylim(gdim[0] - 1, 0)
+             plt.savefig(os.path.join(kde_dir, '{}_{}_{}_KDE.png'.format(file_list[idx], model_type, conf)), bbox_inches='tight', pad_inches=0)
+             plt.clf()
+
+             plt.scatter(CNO_coor[:, 0], CNO_coor[:, 1], s=10)
+             plt.xlim(0, gdim[1] - 1)
+             plt.ylim(0, gdim[0] - 1)
+             plt.axis('off')
+             plt.gcf().set_size_inches(8 * (gdim[1] / gdim[0]), 8)
+             plt.gca().invert_yaxis()
+             plt.savefig(os.path.join(kde_dir, '{}_{}_{}_Spatial.png'.format(file_list[idx], model_type, conf)), bbox_inches='tight', pad_inches=0)
+             plt.clf()
+         cno_col.append(CNO)
+
+     return cno_col, avg_area_col, total_area_col, total_layer_area, total_layer_cno, total_layer_density
+
+
+ def cno_detect(folder_dir, model, conf):
+
+     if model == 'YOLOv8-N':
+         CNO_model = YOLO(DETECTION_MODEL_n)
+     elif model == 'YOLOv8-S':
+         CNO_model = YOLO(DETECTION_MODEL_s)
+     elif model == 'YOLOv8-M':
+         CNO_model = YOLO(DETECTION_MODEL_m)
+     elif model == 'YOLOv8-L':
+         CNO_model = YOLO(DETECTION_MODEL_l)
+     else:
+         CNO_model = YOLO(DETECTION_MODEL_x)
+     # The YOLOv9 variants ship with the repo but are disabled here:
+     """
+     elif model == 'YOLOv9-C':
+         CNO_model = YOLO(DETECTION_MODEL_c)
+     else:
+         CNO_model = YOLO(DETECTION_MODEL_e)
+     """
+
+     # Search folder path
+     folder = folder_dir.split(os.sep)[-1]
+
+     print("Analyzing Folder", folder)
+
+     # Extract folder information (expected pattern: <Country>_G<group>_TL<score>_..._No.<n>)
+     folder_info = folder.split('_')
+     if folder_info[2][0:2] == "TL":
+         Country = folder_info[0]
+         AD_severity = folder_info[1]
+         TLSS = int(folder_info[2].strip("TL"))
+         lesional = TLSS != 0
+         Number = int(folder_info[-1].strip("No."))
+         AD_group = AD_severity.strip("G")
+     else:
+         Country = None
+         TLSS = None
+         lesional = None
+         Number = None
+         AD_group = None
+
+     run_growcut = True
+     timestr = time.strftime("%Y%m%d-%H%M%S")
+
+     CNO_list = []
+     Area_sum = []
+     Area_avg = []
+
+     file_list = []
+     growcut_list = []
+
+     growcut_path = os.path.join(folder_dir, "CNO_Detection", "GrowCut")
+     original_png_path = os.path.join(folder_dir, "CNO_Detection", "Image", "Original")
+     enhanced_png_path = os.path.join(folder_dir, "CNO_Detection", "Image", "Enhanced")
+     kde_png_path = os.path.join(folder_dir, "CNO_Detection", "Image", "KDE")
+     save_dir = os.path.join(folder_dir, "CNO_Detection", "Result")
+     print("Save Path:", save_dir)
+
+     try:
+         os.makedirs(growcut_path, exist_ok=True)
+         os.makedirs(original_png_path, exist_ok=True)
+         os.makedirs(enhanced_png_path, exist_ok=True)
+         os.makedirs(kde_png_path, exist_ok=True)
+         # Skip GrowCut if enhanced PNGs already exist from a previous run
+         if not os.listdir(enhanced_png_path):
+             print("Directory is empty")
+             run_growcut = True
+         else:
+             print("Directory is not empty")
+             run_growcut = False
+         os.makedirs(save_dir, exist_ok=True)
+     except OSError:
+         print("Directory cannot be created")
+
+     # Collect the raw AFM trace files (*_trace.bcr) in the selected folder
+     encyc = []
+     walk = os.walk(folder_dir)
+     for d, sd, files in walk:
+         directory = d.split(os.sep)[-1]
+         for fn in files:
+             if fn[0:2] != "._" and fn[-10:].lower() == '_trace.bcr' and directory == folder:
+                 encyc.append(d + os.sep + fn)
+     encyc.sort()
+
+     # GrowCut Detection
+     if run_growcut:
+         for i, fn in enumerate(encyc):
+             file, gc_CNO = treat_one_image(fn, growcut_path, original_png_path, enhanced_png_path)
+             file_list.append(file)
+             growcut_list.append(gc_CNO)
+             print(i, end=' ')
+     else:
+         for i, fn in enumerate(encyc):
+             file_list.append(os.path.split(fn)[1][0:-10])
+
+     # CNO Detection & AD Classification
+     print("Model", model)
+     print("Conf", conf)
+
+     cno_col, avg_area_col, total_area_col, layer_area, layer_cno, layer_density = cno_detection(
+         enhanced_png_path, kde_png_path, conf, CNO_model, file_list, model)
+     CNO_list.append(cno_col)
+     Area_sum.append(total_area_col)
+     Area_avg.append(avg_area_col)
+
+     Layer_area = layer_area
+     Layer_cno = layer_cno
+     Layer_density = layer_density
+
+     # Write CSV: one row per image with the 25-layer area/CNO/density profiles.
+     # Note: the 'AVG_Area' column stores the total CNO area and 'AVG_Size' the
+     # mean per-CNO area, matching Area_sum and Area_avg below.
+     f = open(save_dir + os.sep + '{}_{}_{}_{}_.csv'.format(folder, timestr, model, conf), 'w')
+     header = (['File', 'Country', 'Group', 'No.', 'TLSS', 'Lesional'] +
+               ['Layer_Area_{}'.format(k) for k in range(25)] +
+               ['Layer_CNO_{}'.format(k) for k in range(25)] +
+               ['Layer_Density_{}'.format(k) for k in range(25)] +
+               ['AVG_Area', 'AVG_Size'])
+
+     writer = csv.writer(f)
+     writer.writerow(header)
+
+     for i in range(len(file_list)):
+         data = ([file_list[i], Country, AD_group, Number, TLSS, lesional] +
+                 list(Layer_area[i]) + list(Layer_cno[i]) + list(Layer_density[i]) +
+                 [Area_sum[0][i], Area_avg[0][i]])
+         writer.writerow(data)
+     f.close()
+
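+ # Example usage (hypothetical folder name following the expected convention):
+ # cno_detect(os.path.join('data', 'DK_G1_TL2_No.3'), model='YOLOv8-M', conf=0.2)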
utils/growcut.py ADDED
@@ -0,0 +1,206 @@
+ import os
+ import re
+ import numpy as np
+ import matplotlib.pyplot as plt
+ from matplotlib.widgets import MultiCursor
+ from scipy import ndimage
+ from matplotlib import cm
+ from PIL import Image, ImageDraw, ImageFont
+ from skimage import morphology
+ from skimage.measure import regionprops
+
+
+ def comp(*ims, figsize=(20, 10)):
+     # Show up to nine images side by side with a shared multi-axis cursor
+     N = len(ims)
+     ncols = {1: 1, 2: 2, 3: 3, 4: 2, 5: 3, 6: 3, 7: 4, 8: 4, 9: 3}
+     nrows = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2, 7: 2, 8: 2, 9: 3}
+     fig, axes = plt.subplots(ncols=ncols[N], nrows=nrows[N], sharex=True, sharey=True, figsize=figsize)
+     fig.subplots_adjust(wspace=0.01, hspace=0.01)
+     axes = axes.ravel()
+     cursor = MultiCursor(fig.canvas, axes,
+                          horizOn=True, vertOn=True, color='red', linewidth=1)
+     for i in range(N):
+         axes[i].imshow(ims[i])
+     return fig, axes, cursor
+
+
+ def load_im(fn):
+     # Read a BCR file: a 2048-byte ASCII header (with xpixels/ypixels fields)
+     # followed by little-endian signed 16-bit height values
+     f = open(fn, 'rb')
+     a = f.read()
+     f.close()
+     aa = str(a[:2048])
+     xpix = int(re.findall(r'xpixels\s?=\s?([0-9]*)', aa)[0])
+     ypix = int(re.findall(r'ypixels\s?=\s?([0-9]*)', aa)[0])
+     a = a[2048:]
+
+     words = [a[k * 2:k * 2 + 2] for k in range(xpix * ypix)]
+     arr = [int.from_bytes(words[k], byteorder='little', signed=True) for k in range(len(words))]
+     im = np.array(arr).reshape((ypix, xpix))
+
+     im = (im.T - np.mean(im, axis=1) +
+           np.mean(ndimage.gaussian_filter(im, 10), axis=1)).T  # palliate horizontal artifact
+     im = im - np.min(im)
+     im = im / np.max(im)  # normalize to 0.0-1.0
+     return im
+
+
+ def G(x):
+     return 1 - np.abs(x) ** .5
+
+
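+ # G is the GrowCut "attack strength" function: with land normalized to 0..1,
+ # neighbours with similar intensity (small |x|) attack with strength near 1 and
+ # dissimilar ones near 0, so labels spread along homogeneous regions.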
+ def growcut(land, labels, strength, maxiter=5):
+     # Build, for every pixel, the indices of its 3x3 neighbourhood (clipped at borders)
+     Ni, Nj = land.shape
+     sidei, sidej = np.arange(Ni), np.arange(Nj)
+     ij = np.dstack(np.meshgrid(sidei, sidej))[:, :, ::-1]
+     iijj = np.tile(ij, (9, 1, 1, 1))
+     for i, k in enumerate(((0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1))):
+         iijj[i, :, :, :] += np.array(k)
+         iijj[i, :, :, 0] = iijj[i, :, :, 0].clip(0, land.shape[0] - 1)
+         iijj[i, :, :, 1] = iijj[i, :, :, 1].clip(0, land.shape[1] - 1)
+     neigh_slice = np.s_[iijj[:, :, :, 0], iijj[:, :, :, 1]]
+
+     this_labels = labels * 1
+     this_strength = strength * 1
+
+     neigh_val = land[neigh_slice]
+     jump_diff = land - neigh_val
+     g = G(jump_diff)
+
+     for i in range(maxiter):
+         # print(np.sum(this_labels), end=' ')
+         neigh_lab = this_labels[neigh_slice] * 1
+         neigh_str = this_strength[neigh_slice] * 1
+
+         attack_force = g * neigh_str
+
+         # Each pixel adopts the label of its strongest attacker
+         new_layer = np.argmax(attack_force, axis=0)
+         new_lab = neigh_lab[new_layer, iijj[0, :, :, 0], iijj[0, :, :, 1]] * 1
+         new_strength = attack_force[new_layer, iijj[0, :, :, 0], iijj[0, :, :, 1]] * 1
+
+         this_labels = new_lab
+         this_strength = new_strength
+
+     return this_labels, this_strength
+
+
+ def pyramid_contrast(im):
+     # Local contrast normalization at several disk radii, averaged into one "land" map
+     oom = []
+     ms = []
+     for d in (9, 15):  # (9, 11, 13, 15, 17, 25), (3, 6, 9, 12, 15, 18, 21)
+         disk = morphology.disk(d)
+         m = ndimage.percentile_filter(im, 10, footprint=disk)
+         M = ndimage.percentile_filter(im, 90, footprint=disk)
+         om = (im - m) / (M - m)
+         om = np.nan_to_num(om).clip(0, 1)
+         oom.append(om)
+         ms.append(M - m)
+     oom = np.array(oom)
+     # ms = np.array(ms)
+     land = np.mean(oom, axis=0)
+     return land
+
+
+ def segmentate(land, alpha=0.7, beta=0.6):
+     if alpha < beta:
+         raise ValueError("alpha must be greater than beta")
+     # Seed the GrowCut automaton: pixels above alpha are foreground seeds,
+     # pixels below beta are background seeds
+     foreground = ndimage.binary_erosion(land > alpha, iterations=1)
+     background = land < beta
+
+     lab = ndimage.label(foreground)[0]
+     lab[lab > 0] += 1
+     lab[background] = 1
+
+     strength = (lab > 1) * 1. + (lab == 1) * 1.
+     this_labels, this_strength = growcut(land,
+                                          lab, strength, maxiter=25)
+     w = (this_labels != np.roll(this_labels, 1, axis=0)) + (this_labels != np.roll(this_labels, 1, axis=1))
+
+     b = w * 0
+     lab2 = ndimage.label(~w)[0]
+     for l in np.unique(lab2)[1:]:
+         if np.sum(foreground[l == lab2]) > 0:
+             b[l == lab2] = 1
+
+     lab2 = ndimage.label(ndimage.binary_dilation(this_labels > 1))[0]
+     return lab2
+
+
+ def filter_objects(lab2, max_eccentricity=0.93, min_size=10, max_size=200, min_convex_coverage=0.8):
+     props = regionprops(lab2)  # object metrics
+     b = lab2 * 0.
+     for i in np.unique(lab2)[1:]:
+         ind = i - 1
+         e = props[ind].eccentricity
+         s = props[ind].area
+         c = s * 1 / props[ind].convex_area
+         # filter objects by eccentricity, size, and convex hull coverage
+         if e < max_eccentricity and (min_size < s < max_size) and c > min_convex_coverage:
+             lev = 1  # accepted CNO
+         else:
+             lev = 2  # rejected object
+         b[lab2 == i] = lev
+     return b
+
+
+ def present(im, land, b):
+     original_im = cm.afmhot(im)[:, :, :3]
+     # Contrast Level 0.5
+     resim = 0.5 * land + (1 - 0.5) * im
+     enhanced_im = cm.afmhot(resim)[:, :, :3]
+     monochrome_land = np.tile(land, (3, 1, 1)).transpose((1, 2, 0))
+     detected = b == 1
+     detected = ndimage.binary_dilation(detected)  # * (~ndimage.binary_erosion(detected))
+     filtered = b == 2
+     filtered = ndimage.binary_dilation(filtered)  # * (~ndimage.binary_erosion(filtered))
+
+     # Tint accepted objects green and rejected ones red in the overlay
+     monochrome_land[detected] *= np.array([.3, 1, .3])
+     monochrome_land[filtered] *= np.array([1, .6, .6])
+
+     newim = np.hstack((enhanced_im, monochrome_land))
+     newim = np.dstack((newim, newim[:, :, 0] * 0 + 1))
+
+     base = Image.fromarray((newim * 255).astype(np.uint8))
+     original_im = Image.fromarray((original_im * 255).astype(np.uint8))
+     enhanced_im = Image.fromarray((enhanced_im * 255).astype(np.uint8))
+
+     # make a blank image for the text, initialized to transparent text color
+     txt = Image.new("RGBA", base.size, (255, 255, 255, 0))
+
+     # get a font
+     # fnt = ImageFont.truetype("/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf", 30, encoding="unic")
+     fnt = ImageFont.load_default()
+     # get a drawing context
+     d = ImageDraw.Draw(txt)
+     ct = np.max(ndimage.label(b == 1)[0]) - 1
+     d.text((600, 10), "CNOs: {:d}".format(ct), font=fnt, fill=(255, 50, 50, 255))
+     out = Image.alpha_composite(base, txt)
+
+     return out, original_im, enhanced_im, ct
+
+
+ def treat_one_image(fn, growcut_path, original_png_path, enhanced_png_path):
+
+     # load data
+     im = load_im(fn)
+
+     # pyramid contrast
+     land = pyramid_contrast(im)
+
+     # detect objects
+     lab2 = segmentate(land, alpha=.75, beta=0.7)
+
+     # visualize
+     b = filter_objects(lab2, max_eccentricity=0.967, min_size=30, max_size=200, min_convex_coverage=0.5)
+     growcut_im, original_im, enhanced_im, ct = present(im, land, b)
+
+     file_name = os.path.split(fn)[1][0:-10]  # strip the trailing '_trace.bcr'
+
+     original_im.save(os.path.join(original_png_path, file_name) + '.png')
+     enhanced_im.save(os.path.join(enhanced_png_path, file_name) + '.png')
+     growcut_im.save(os.path.join(growcut_path, file_name) + '.png')
+
+     return file_name, ct
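+
+ # Example usage (hypothetical paths):
+ # name, count = treat_one_image('DK_G1_TL2_No.3_trace.bcr', 'out/GrowCut', 'out/Original', 'out/Enhanced')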
web_app.py ADDED
@@ -0,0 +1,210 @@
+ import os
+ import cv2
+ import pandas as pd
+ import PIL.Image as Image
+ import gradio as gr
+ import numpy as np
+ from pathlib import Path
+ from ultralytics import YOLO
+
+ DIR_NAME = Path(os.path.dirname(__file__))
+ DETECTION_MODEL_n = os.path.join(DIR_NAME, 'models', 'YOLOv8-N_CNO_Detection.pt')
+ DETECTION_MODEL_s = os.path.join(DIR_NAME, 'models', 'YOLOv8-S_CNO_Detection.pt')
+ DETECTION_MODEL_m = os.path.join(DIR_NAME, 'models', 'YOLOv8-M_CNO_Detection.pt')
+ DETECTION_MODEL_l = os.path.join(DIR_NAME, 'models', 'YOLOv8-L_CNO_Detection.pt')
+ DETECTION_MODEL_x = os.path.join(DIR_NAME, 'models', 'YOLOv8-X_CNO_Detection.pt')
+
+ # MODEL = os.path.join(DIR_NAME, 'models', 'YOLOv8-M_CNO_Detection.pt')
+ # model = YOLO(MODEL)
+ # cno_df = pd.DataFrame()
+
+ def predict_image(name, model, img, conf_threshold, iou_threshold):
+     """Predict and draw CNO bounding boxes on the uploaded images using the selected YOLOv8 model with adjustable confidence and IoU thresholds."""
+     gr.Info("Starting process")
+     if name == "":
+         gr.Warning("Name is empty")
+
+     if model == 'YOLOv8-N':
+         CNO_model = YOLO(DETECTION_MODEL_n)
+     elif model == 'YOLOv8-S':
+         CNO_model = YOLO(DETECTION_MODEL_s)
+     elif model == 'YOLOv8-M':
+         CNO_model = YOLO(DETECTION_MODEL_m)
+     elif model == 'YOLOv8-L':
+         CNO_model = YOLO(DETECTION_MODEL_l)
+     else:
+         CNO_model = YOLO(DETECTION_MODEL_x)
+
+     results = CNO_model.predict(
+         source=img,
+         conf=conf_threshold,
+         iou=iou_threshold,
+         show_labels=False,
+         show_conf=False,
+         imgsz=512,
+         max_det=1200
+     )
+
+     cno_count = []
+     cno_image = []
+     file_name = []
+
+     for idx, result in enumerate(results):
+         cno = len(result.boxes)
+         cno_coor = np.empty([cno, 2], dtype=int)
+         file_label = img[idx].split(os.sep)[-1]
+         for j in range(cno):
+             x = round(result.boxes.xywh[j][0].item())
+             y = round(result.boxes.xywh[j][1].item())
+
+             x1 = round(result.boxes.xyxy[j][0].item())
+             y1 = round(result.boxes.xyxy[j][1].item())
+             x2 = round(result.boxes.xyxy[j][2].item())
+             y2 = round(result.boxes.xyxy[j][3].item())
+
+             cno_coor[j] = [x, y]
+             cv2.rectangle(result.orig_img, (x1, y1), (x2, y2), (0, 255, 0), 1)
+         im_array = result.orig_img
+         # Convert BGR -> RGB for PIL and pair each image with its file-name caption
+         cno_image.append([Image.fromarray(im_array[..., ::-1]), file_label])
+         cno_count.append(cno)
+         file_name.append(file_label)
+
+     data = {
+         "Files": file_name,
+         "CNO Count": cno_count,
+     }
+
+     # load data into a DataFrame object:
+     cno_df = pd.DataFrame(data)
+
+     return cno_df, cno_image
+
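+ # Note: gr.Gallery accepts (image, caption) pairs, which is why predict_image
+ # returns cno_image as [PIL.Image, file_label] lists; highlight_df below matches
+ # the selected gallery caption against the "Files" column of the DataFrame.
+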
+ def highlight_max(s, props=''):
+     return np.where(s == np.nanmax(s.values), props, '')
+
+
+ def highlight_df(df, data: gr.SelectData):
+     # Highlight the DataFrame row whose file name matches the selected gallery caption
+     styler = df.style.apply(lambda x: ['background: lightgreen'
+                                        if x.Files == data.value["caption"]
+                                        else None for i in x], axis=1)
+     return data.value["caption"], styler
+
+
+ def reset():
+     # Restore every input and output component to its default value
+     name_textbox = ""
+     gender_radio = None
+     age_slider = 0
+     fitzpatrick = 1
+     history = []
+     model_radio = "YOLOv8-M"
+     input_files = []
+     conf_slider = 0.2
+     iou_slider = 0.5
+     analysis_results = []
+     cno_gallery = []
+     test_label = ""
+
+     return name_textbox, gender_radio, age_slider, fitzpatrick, history, model_radio, input_files, conf_slider, iou_slider, analysis_results, cno_gallery, test_label
+
+
+ with gr.Blocks(title="AFM AI Analysis", theme="default") as app:
+     with gr.Row():
+         with gr.Column():
+             with gr.Accordion("User Information", open=True):
+                 name_textbox = gr.Textbox(label="Name")
+                 with gr.Row():
+                     gender_radio = gr.Radio(["Male", "Female"], label="Gender", interactive=True, scale=1)
+                     age_slider = gr.Slider(minimum=0, maximum=100, step=1, value=0, label="Age", interactive=True, scale=2)
+                 with gr.Group():
+                     fitzpatrick = gr.Slider(minimum=1, maximum=6, step=1, value=1, label="Fitzpatrick", interactive=True)
+                     history = gr.CheckboxGroup(["Familial Disease", "Allergic Rhinitis", "Asthma"], label="Medical History", interactive=True)
+
+             input_files = gr.File(file_types=["image"], file_count="multiple", label="Upload Image")
+             with gr.Accordion("Model Configuration", open=False):
+                 model_radio = gr.Radio(["YOLOv8-N", "YOLOv8-S", "YOLOv8-M", "YOLOv8-L", "YOLOv8-X"], label="Model Selection", value="YOLOv8-M")
+                 conf_slider = gr.Slider(minimum=0, maximum=1, value=0.2, label="Confidence threshold")
+                 iou_slider = gr.Slider(minimum=0, maximum=1, value=0.5, label="IoU threshold")
+             with gr.Row():
+                 analyze_btn = gr.Button("Analyze")
+                 clear_btn = gr.Button("Reset")
+         with gr.Column():
+             analysis_results = gr.Dataframe(headers=["Files", "CNO Count"], interactive=False)
+             cno_gallery = gr.Gallery(label="Result", show_label=True, columns=3, object_fit="contain")
+             test_label = gr.Label(label="Analysis Results")
+
+     analyze_btn.click(
+         fn=predict_image,
+         inputs=[name_textbox, model_radio, input_files, conf_slider, iou_slider],
+         outputs=[analysis_results, cno_gallery]
+     )
+
+     clear_btn.click(reset, outputs=[name_textbox, gender_radio, age_slider, fitzpatrick, history, model_radio,
+                                     input_files, conf_slider, iou_slider, analysis_results, cno_gallery, test_label])
+
+     cno_gallery.select(highlight_df, inputs=analysis_results, outputs=[test_label, analysis_results])
+
+
+ if __name__ == '__main__':
+     # Demo credentials; replace before deploying publicly
+     app.launch(auth=('user', 'admin'), auth_message="Enter your username and password")
web_test.py ADDED
@@ -0,0 +1,93 @@
+ import os
+ import cv2
+ import PIL.Image as Image
+ import gradio as gr
+ import numpy as np
+ from pathlib import Path
+ from ultralytics import YOLO
+
+ DIR_NAME = Path(os.path.dirname(__file__))
+ DETECTION_MODEL_n = os.path.join(DIR_NAME, 'models', 'YOLOv8-N_CNO_Detection.pt')
+ DETECTION_MODEL_s = os.path.join(DIR_NAME, 'models', 'YOLOv8-S_CNO_Detection.pt')
+ DETECTION_MODEL_m = os.path.join(DIR_NAME, 'models', 'YOLOv8-M_CNO_Detection.pt')
+ DETECTION_MODEL_l = os.path.join(DIR_NAME, 'models', 'YOLOv8-L_CNO_Detection.pt')
+ DETECTION_MODEL_x = os.path.join(DIR_NAME, 'models', 'YOLOv8-X_CNO_Detection.pt')
+
+
+ def predict_image(name, model, img, conf_threshold, iou_threshold):
+     """Predict and draw CNO bounding boxes using the selected YOLOv8 model with adjustable confidence and IoU thresholds."""
+     gr.Info("Starting process")
+     if name == "":
+         gr.Warning("Name is empty")
+
+     if model == 'YOLOv8-N':
+         CNO_model = YOLO(DETECTION_MODEL_n)
+     elif model == 'YOLOv8-S':
+         CNO_model = YOLO(DETECTION_MODEL_s)
+     elif model == 'YOLOv8-M':
+         CNO_model = YOLO(DETECTION_MODEL_m)
+     elif model == 'YOLOv8-L':
+         CNO_model = YOLO(DETECTION_MODEL_l)
+     else:
+         CNO_model = YOLO(DETECTION_MODEL_x)
+
+     results = CNO_model.predict(
+         source=img,
+         conf=conf_threshold,
+         iou=iou_threshold,
+         show_labels=False,
+         show_conf=False,
+         imgsz=512,
+         max_det=1200
+     )
+
+     # Note: only the last image's annotated result and count are returned
+     for r in results:
+         CNO = len(r.boxes)
+         CNO_coor = np.empty([CNO, 2], dtype=int)
+         for j in range(CNO):
+             x = round(r.boxes.xywh[j][0].item())
+             y = round(r.boxes.xywh[j][1].item())
+
+             x1 = round(r.boxes.xyxy[j][0].item())
+             y1 = round(r.boxes.xyxy[j][1].item())
+             x2 = round(r.boxes.xyxy[j][2].item())
+             y2 = round(r.boxes.xyxy[j][3].item())
+
+             CNO_coor[j] = [x, y]
+             cv2.rectangle(r.orig_img, (x1, y1), (x2, y2), (0, 255, 0), 1)
+         im_array = r.orig_img
+         im = Image.fromarray(im_array[..., ::-1])  # BGR -> RGB
+
+         CNO_count = "CNO Count: " + str(CNO)
+
+     return CNO_count, im
+
+
+ iface = gr.Interface(
+     fn=predict_image,
+     inputs=[
+         gr.Textbox(label="User Name"),
+         gr.Radio(["YOLOv8-N", "YOLOv8-S", "YOLOv8-M", "YOLOv8-L", "YOLOv8-X"], value="YOLOv8-M"),
+         gr.File(file_types=["image"], file_count="multiple", label="Upload Image"),
+         gr.Slider(minimum=0, maximum=1, value=0.2, label="Confidence threshold"),
+         gr.Slider(minimum=0, maximum=1, value=0.5, label="IoU threshold")
+     ],
+     outputs=[gr.Label(label="Analysis Results"), gr.Image(type="pil", label="Result")],
+     title="AFM AI Analysis",
+     description="Upload images for inference. The YOLOv8-M model is used by default.",
+     theme=gr.themes.Default()
+ )
+
+ if __name__ == '__main__':
+     iface.launch()