\section{INTRODUCTION}
As digitized histopathology slide images became widespread, the use of Artificial Intelligence (AI) methods in digital pathology grew accordingly. In particular, computer vision and deep learning methods are used to automate and improve tasks such as image analysis, diagnostic decision-making, and disease monitoring \citep{czyzewski2021machine, daniel2022deep, larey2022fron}.

However, data limitations pose a major challenge in digital pathology, including issues of data scarcity, variability, privacy, annotation, bias, quality, and labeling. Data scarcity and variability can make it difficult to train and evaluate algorithms, as there may not be enough data available for certain decision thresholds. Data bias is another concern: digital pathology datasets may over-represent certain populations, which limits the generalizability of algorithms trained on them. Data labeling can also be subjective and dependent on the expertise of the labeler, leading to inaccuracies. An emerging solution to these challenges is the generation of synthetic images.

The field of synthetic image generation gained popularity after the Generative Adversarial Network (GAN) \citep{goodfellow2020generative} was introduced. In this approach, a 'discriminator' model is trained to discriminate between real and fake data, while a second model, coined the 'generator', is trained to produce synthetic data that is fed to the discriminator during training. On the one hand, the generator is trained so that the discriminator cannot distinguish the real data from the generated synthetic data.
On the other hand, the discriminator is trained to correctly discern between the two. The generated data thus continually challenges the discriminator, driving both models to improve.

In the classic approach to image generation (coined Vanilla GAN), the input to the generator is sampled from a given distribution and then processed into a synthetic image. More advanced techniques, called Conditional GANs (C-GANs) \citep{mirza2014conditional}, supply the GAN models with information about the required type of generated data, thereby controlling what is generated. Approaches such as pix2pix \citep{isola2017image} take this technique further and supply the generator with more detailed information at the pixel level: the generator receives a semantic label mask as input, and each pixel is generated to belong to its corresponding label in the given mask. This image translation approach has the advantage of yielding pairs of images and semantic labels, which can be used in tasks that require such pairs (e.g., segmentation), unlike the classic approach, where the synthetic images lack semantic information. Yet in some cases semantic masks are scarce because they are complex to create.

A special case is the generation of synthetic histology images, where the semantic masks consist of various tissue types and complicated patterns resulting from the complex nature of the tissue. A naïve solution uses tissue masks extracted from real histology images in the image translation pipeline, but the dependency on a limited set of real images during generation yields only a limited number of semantic masks.
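The naïve extraction step mentioned above can be illustrated with a short sketch. This is a minimal, illustrative implementation only: it assumes a grayscale slide image in which bright pixels are background (air) and darker pixels are tissue, and it uses an Otsu threshold computed in plain NumPy; the actual extraction method used in practice may differ.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> float:
    """Threshold maximizing between-class variance over a 256-bin
    histogram of a grayscale image with values in [0, 255]."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    bins = np.arange(256)
    w0 = np.cumsum(p)            # weight of the dark class per candidate
    w1 = 1.0 - w0                # weight of the bright class
    m0 = np.cumsum(p * bins)     # cumulative first moment
    mu_total = m0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mean0 = m0 / w0
        mean1 = (mu_total - m0) / w1
    var_between = np.nan_to_num(w0 * w1 * (mean0 - mean1) ** 2)
    return float(np.argmax(var_between))

def tissue_mask(gray: np.ndarray) -> np.ndarray:
    """Binary semantic mask: 1 = tissue (dark), 0 = air (bright)."""
    return (gray <= otsu_threshold(gray)).astype(np.uint8)
```

Masks produced this way are bound to the real slides they were extracted from, which is exactly the scalability bottleneck discussed next.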
Thus, image translation models are not scalable in this case, since semantic masks are an integral part of their pipeline, whereas Vanilla GANs are scalable because they depend only on the scalable sampling process.

In this study, we show how our dual-phase pipeline overcomes this tradeoff (Fig.~\ref{f:f1}) and generates pairs of synthetic histology semantic masks and images in a scalable design. We introduce DEPAS, a generative model that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality for three different organs: skin, lung, and prostate. Moreover, we show that these masks can be processed by pix2pixHD \citep{wang2018high}, a generative image translation model that supports high-resolution images, to produce photorealistic RGB tissue images (Fig.~\ref{f:f1}). We demonstrate this for two types of staining: H\&E and immunohistochemistry. This pipeline, on the one hand, generates pairs of semantic masks and histology images, and on the other hand, is scalable since it does not require real masks during inference.

\begin{figure}[thpb]
\centering
	\includegraphics[scale=0.6]{Figure1.png}
	\caption{(A) Illustration of the tradeoff between Image Translation GANs and Vanilla GANs. The former generate synthetic images based on their semantic labels; in this case, scalability is bounded when the quantity of semantic labels is limited. Vanilla GANs, on the other hand, lack semantic information but can produce an unlimited number of synthetic images. (B) Our platform resolves this challenge in the histology domain with a dual-phase generative system. The first step generates semantic masks of tissue labels using a novel Vanilla GAN architecture coined DEPAS.
Then, the generated masks are processed by a paired image translation GAN (such as pix2pixHD) to produce the synthetic histology RGB image.}
	\label{f:f1}
\end{figure}

\section{RELATED WORK}
\subsection{Medical Synthetic Images}
The use of generative models to produce synthetic images has been explored in numerous works in the medical field. Image translation frameworks are widely used, such as models that generate endoscopy images given binary semantic masks \citep{adjei2020gan}, transform between radiological modalities \citep{armanious2020medgan}, and convert between histology staining types \citep{lysik2022he}. Another image translation work is DeepLIIF \citep{ghahremani2022deepliif}, which, for a given IHC image, provides several outputs including stain deconvolution, segmentation masks, and different marker images. Other types of generative frameworks are common as well. The DCGAN framework generates synthetic images from a sampled noise input processed through a convolutional architecture. It has been used in several medical image generation applications, such as MR images \citep{divya2022medical}, eye disease images \citep{smaida2021dcgan}, X-ray images \citep{puttagunta2022novel}, and breast cancer histological images \citep{blanco2021medical}. PathologyGAN \citep{quiros2019pathologygan} introduced a novel framework that generates high-quality pathology images at a size of $224\times224$ pixels. \citep{guibas2017synthetic} introduced a two-step pipeline similar to ours: first, they generate binary vessel segmentation masks using DCGAN, and then they generate the RGB retinal image. Their pipeline provides synthetic images of $512\times512$ pixels. In contrast, our pipeline provides higher-resolution (x2) synthetic images in the challenging field of histology. We focus on the first phase of learning the complex geometric structure that is reflected in the semantic label mask.
We show that DCGAN is not sufficient for this task and introduce DEPAS as an improved architecture that overcomes the challenges of the high-resolution histology domain.

\subsection{Discrete Predictions}
The first step of our pipeline requires predicting discrete semantic masks. In this work, we focus on the binary scenario, where the semantic mask has two labels: tissue and air. The binary output of the generator should be obtained by a step function, but this non-differentiable operation breaks the backpropagation of the optimization objective's gradients through the discriminator to the generator. A reasonable solution is to replace the discrete output operation with a continuous relaxation such as a Sigmoid during training, and to apply the discrete operation only at test time \citep{neff2017generative}. \citep{bengio2013estimating} proposed using binary neurons in machine learning models via straight-through estimators, where the binary operator is applied in the forward pass during training but is ignored and treated as an identity function in the backward pass. \citep{dong2018training} explored the generative use of deterministic and stochastic binary neurons and introduced BinaryGAN. We investigated these approaches and found that in the high-resolution histology domain, the best performance was achieved by the Annealing-Sigmoid. In this approach, the last layer of DEPAS's generator is a Sigmoid whose slope is gradually increased during training toward a step function \citep{chung2016hierarchical}.

\begin{figure}[t]
\centering
	\includegraphics[width=\textwidth]{Figure2.png}
	\caption{Architecture of DEPAS. (A) The generator decodes semantic masks from latent noise. It consists of five transpose convolution layers, each followed by Batch Normalization and ReLU activation.
Another element of stochasticity is added to the hidden layers in the spatial dimension after being scaled. Finally, the last feature maps are processed by the Discrete Adaptive block, which outputs a single semantic mask in the two-label case or multiple masks in the multi-label case. (B) For training, we use three discriminators that support different image resolutions. Each encodes the corresponding image into a scalar representing the probability that the image is real. The encoding is performed by convolution layers, each followed by Batch Normalization and LeakyReLU activation.}
	\label{f:f2}
\end{figure}

\section{METHODS}
Our pipeline includes two main phases. The main focus of this study is the first phase, in which we learn the internal geometry of the digitized histology tissue. For this task, we designed a generative architecture coined DEPAS that captures the tissue's morphology and expresses it as a semantic mask. The second phase is an image translation task that transfers the discrete semantic mask to a photorealistic RGB image of the tissue.

\subsection{DEPAS Architecture}
To enhance scalability, the generative process of producing synthetic tissue masks is initialized by sampling noise from a given distribution and feeding it to the model (Vanilla GAN). The mechanism is based on the DCGAN architecture used by \citep{guibas2017synthetic}, which consists of multiple convolution blocks in its generator and discriminator. In DEPAS, we adjusted the DCGAN layers to a high-resolution output of $512\times1024$ pixels and included three main extensions.

(1) Discrete Adaptive Block. In our case, where the discriminator obtains a binary mask during training, we require a binary output from the generator, where every pixel indicates one of the two classes: tissue or air. Thus, we replaced DCGAN's last block with this module (Fig.~\ref{f:f2}A).
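The resolution arithmetic behind a stack of stride-2 transpose convolutions can be checked with a short helper. This is an illustrative sketch only: the kernel-4, stride-2, padding-1 configuration is a common DCGAN-style choice that we assume here, not a detail reported for DEPAS, and the formula follows the standard 2D transpose-convolution output-size rule (no output padding, dilation 1).

```python
def tconv_out(size: int, kernel: int = 4, stride: int = 2, padding: int = 1) -> int:
    """Output size of a transpose convolution along one spatial axis:
    out = (in - 1) * stride - 2 * padding + kernel."""
    return (size - 1) * stride - 2 * padding + kernel

def decoder_sizes(h: int, w: int, layers: int = 5):
    """Spatial sizes of the feature maps after each of `layers`
    stride-2 transpose-convolution layers, starting from (h, w)."""
    sizes = [(h, w)]
    for _ in range(layers):
        h, w = tconv_out(h), tconv_out(w)
        sizes.append((h, w))
    return sizes
```

With these (assumed) hyperparameters each layer doubles both axes, so five layers take a 16x32 seed map to the 512x1024 target resolution.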
Instead of the non-differentiable step function, we use a Sigmoid activation with a high slope to mimic it and yield a pseudo-binary differentiable output. For optimal convergence, we initialize the Sigmoid with its base slope of $1$ and increase the slope gradually during training (Annealing-Sigmoid, or AS). That is, at every iteration the generator produces a Bernoulli probability that becomes more deterministic during training in differentiating the two classes. Formally, at iteration $t$, the AS is:
\begin{equation}
{AS}_{t} = \frac{1}{1+e^{-\delta_{t} x}} \label{eq_sig}
\end{equation}

where $x$ is the input of the element-wise operation, and $\delta_{t}$ determines the Sigmoid's slope at iteration $t$. To increase the slope, we require that $\delta_{t+1}>\delta_{t}$, and for initialization with the basic Sigmoid, we define $\delta_{t=0}=1$.
Furthermore, we extend this approach to cases where the desired semantic mask has more than two labels. For example, when the tissue itself has several types of morphology (e.g., tumor tissue, non-tumor tissue, and air), we use the multi-label approach. In this scenario, we generalize the binary distribution to the multinomial distribution by designing the Discrete Adaptive Block to produce a multi-channel feature map, where each channel represents a different class. The feature maps are then passed through an Annealing-Softmax-Temperature (AST) activation. Instead of the non-differentiable Argmax function, we use a channel-wise Softmax layer with a low temperature to mimic a deterministic decision of the generated class for each pixel. Similarly to the binary case, we initialize the Softmax temperature with its base value of $1$ and decrease it gradually during training; i.e., at every iteration the generator produces, for each pixel, class probabilities that become more deterministic during training.
Formally, at iteration $t$, the probability for class $c$ provided by the AST is:
\begin{equation}
{AST}_{t,c} = \frac{e^{\frac{x_{c}}{T_{t}}}}{\sum_{j}^{} e^{\frac{x_{j}}{T_{t}}}} \label{eq_temp}
\end{equation}

where $x_{c}$ is the input of the element-wise operation for the channel that corresponds to class $c$, and $T_{t}$ determines the Softmax's temperature at iteration $t$. To increase determinism, we require that $T_{t+1}<T_{t}$, and for initialization with the basic Softmax, we define $T_{t=0}=1$.
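The two annealed activations defined above can be written down directly. The following sketch implements the AS and AST element-wise operations in NumPy; the linear annealing schedules at the end are purely illustrative assumptions (the schedule shape and the constants \texttt{delta\_max} and \texttt{temp\_min} are ours, not values reported in this work).

```python
import numpy as np

def annealing_sigmoid(x: np.ndarray, delta_t: float) -> np.ndarray:
    """AS_t(x) = 1 / (1 + exp(-delta_t * x)); delta_t grows during training,
    pushing the output toward a 0/1 step function."""
    return 1.0 / (1.0 + np.exp(-delta_t * x))

def annealing_softmax(x: np.ndarray, temp_t: float, axis: int = 0) -> np.ndarray:
    """AST_t(x) = softmax(x / T_t) along the class axis; T_t shrinks during
    training, pushing each pixel's distribution toward one-hot.
    Subtracting the per-pixel max keeps the exponentials stable."""
    z = x / temp_t
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative linear schedules (assumed, not specified in the text):
def delta_schedule(t: int, total: int, delta_max: float = 50.0) -> float:
    return 1.0 + (delta_max - 1.0) * t / total   # delta_0 = 1, increasing

def temp_schedule(t: int, total: int, temp_min: float = 0.02) -> float:
    return 1.0 - (1.0 - temp_min) * t / total    # T_0 = 1, decreasing
```

At $\delta_{t}=1$ and $T_{t}=1$ these reduce to the plain Sigmoid and Softmax; as $\delta_{t}$ grows and $T_{t}$ shrinks, the outputs approach the step function and Argmax while remaining differentiable throughout training.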