diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzahqw" "b/data_all_eng_slimpj/shuffled/split2/finalzzahqw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzahqw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nImage classification and segmentation are fundamental tasks in many visual recognition applications involving natural and medical images. Given a large image dataset annotated with global image-level labels for classification or with pixel-level labels for segmentation, deep learning (DL) models achieve state-of-the-art performances for these tasks \\citep{dolz20183d,Goodfellow-et-al-2016,krizhevsky12,LITJENS201760,LongSDcvpr15,Ronneberger-unet-2015}. However, the impressive accuracy of such fully-supervised learning models comes at the expense of a considerable cost for collecting and annotating large image data sets. While the acquisition of global image-level annotation can be relatively inexpensive, pixel-wise annotation involves a laborious process, a difficulty further accrued by the requirement of domain expertise, as in medical imaging, which increases the annotation costs.\n\\begin{figure}[t!]\n\\centering\n \\centering\n \\includegraphics[scale=0.49]{proposal}\n \\caption{Proposed AL framework with weak annotator.}\n \\label{fig:fig-proposal}\n\\end{figure}\n\nWeakly-supervised learning (WSL) has recently emerged as a paradigm that relaxes the need for dense pixel-wise annotations \\citep{rony2019weak-loc-histo-survey,zhou2017brief}. WSL techniques depend on the type of application scenario and annotation, such as global image-level labels \\citep{belharbi2019weakly,kim2017two,pathak2015constrained,teh2016attention,wei2017object}, scribbles \\citep{Lin2016,ncloss:cvpr18}, points \\citep{bearman2016}, bounding boxes \\citep{dai2015boxsup,Khoreva2017} or global image statistics such as the target-region size \\citep{bateson2019constrained,jia2017constrained,kervadec2019curriculum,kervadec2019constrained}. This paper focuses on learning using only image-level labels, which enables to classify an image while yielding pixel-wise scores (i.e., segmentations), thereby localizing the regions of interest linked to the image-class predictions.\n\n\nSeveral CNN visualization and interpretation methods have recently been proposed, based on either perturbation, propagation or activation approaches, and allowing to localize the salient image regions responsible for a CNN's predictions \\citep{fu2020axiom}. In particular, WSL techniques \\citep{rony2019weak-loc-histo-survey} rely on activation-based methods like CAM and, more recently, Gradient-weighted Class Activation Mapping (Grad-CAM), Grad-CAM++, Ablation-CAM and Axiom-based Grad-CAM \\citep{fu2020axiom, lin2013network, PinheiroC15cvpr}. Trained with only global image-level annotations, these methods locate the regions of interest (ROIs) of the corresponding class in a relatively inexpensive and accurate way. However, while these WSL techniques can provide satisfying results in natural images, they typically yield poor segmentations in relatively more challenging scenarios, for instance, histology data in medical image analysis \\citep{rony2019weak-loc-histo-survey}.\nWe note two limitations associated with CAMs: (1) they are obtained in an unsupervised way (\\emph{i.e.}\\@\\xspace without pixel-level supervision under an ill-posed learning problem \\citep{choe2020evaluating}); and (2) they have low resolution. 
For instance, CAMs obtained from ResNet models \\citep{heZRS16} have a resolution of ${1\/32}$ of the input image. Interpolation is required to restore full image resolution. Both of these limitations of CAM-based methods lead to high false-positive rates, which may render them impractical \\citep{rony2019weak-loc-histo-survey}.\n\nEnhancing deep WSL models with pixel-wise annotation, as supported by a recent study in weakly-supervised object localization \\citep{choe2020evaluating}, can improve localization and segmentation accuracy, which is the central goal of this paper. To do so, we introduce a deep WSL model that allows supervised learning for classification, and active learning for segmentation, with the latter providing more accurate and high-resolution masks. We assume that the images in the training set are globally annotated with image-class labels. Relevant images are \\emph{gradually} labeled at the pixel level through an oracle that respects a low annotation-budget constraint. Consequently, this leads us to an active learning (AL) paradigm \\citep{settles2009active}, where an oracle is requested to annotate pixels in a subset of samples.\n\nDifferent sample-acquisition techniques have been successfully applied to deep AL for classification based on, e.g., certainty \\citep{ducoffe2015qbdc,gal2017deep,kirsch2019batchbald} or representativeness \\citep{kim2020task,sinha2019variational}. However, very few deep AL techniques have been investigated in the context of segmentation \\citep{gaur2016membrane,gorriz2017active,lubrano2019deep}. Most AL techniques focus mainly on the sampling criterion (Fig.\\ref{fig:fig-proposal}, left) to populate the labeled pool using an oracle. During training, only the labeled pool is used, while the unlabeled pool is left dormant. Such an AL protocol may limit the accuracy of DL models under a constrained oracle-supervision budget in real-world applications for multiple reasons:\n\n\\textbf{(1)} Standard AL protocols may be relevant to small\/shallow models that can learn and provide reliable queries using a few training samples. Since training accurate DL models typically depends on large training sets, large numbers of queries may be needed to build reliable DL models, which may incur a high annotation cost.\n\n\\textbf{(2)} In most AL work, the experimental protocol starts with a large labeled pool, and acquires a large number of queries for sufficient supervision, neglecting the workload placed on the oracle.\nThis typically drives a DL model to a performance plateau quickly, hampering a reliable study of the impact of different AL selection techniques. Moreover, model-based sampling techniques are inconsistent \\citep{gaur2016membrane} in the sense that the model is used to query samples while it is still in an early learning stage.\n\n\\textbf{(3)} Segmentation and classification problems are associated with different properties and challenges, such as decision boundaries and uncertainty, which pose additional challenges to AL. 
For instance, the class boundaries exploited by several classification methods \\citep{ducoffe2018adversarial,settles2009active,tong2001support} are not defined in the context of segmentation, making this family of methods inadequate for segmentation.\n\n\\textbf{(4)}\nIn critical fields such as medical imaging, acquiring a sample itself can be very expensive\\footnote{\nFor instance, prior to a diagnosis of breast cancer from a histological sample, a patient undergoes a bilateral diagnostic mammogram and breast ultrasound that are interpreted by a radiologist, one to several needle biopsies (with a low risk, under ${1\\%}$, of hematoma and wound infection) to further assess areas of concern, surgical consultation and pre-operative blood work, and surgical excision of the positive tissue for breast cancer cells. The biopsy and surgical tissues are processed (fixation, embedding in paraffin, H\\&E staining) and interpreted by a pathologist. Depending on the cancer stage, the patient may undergo additional procedures. Therefore, accounting for all the steps required for breast cancer diagnosis from histological samples, a rough estimation of the final cost associated with obtaining a Whole Slide Image (WSI) is about \\$400 (Canadian dollars, as of 1999) \\citep{will1999diagnostic}. Moreover, some cancer types are rare \\citep{will1999diagnostic}, adding to the value of these samples. All these procedures are conducted by highly trained experts, with each procedure taking from a few minutes to an hour and requiring expensive specialized medical equipment.}.\nThe time and cost associated with each sample make it a valuable item. Such considerations may be overlooked for large-scale data sets with almost-free samples, as in the case of natural images. Given this high cost, keeping the unlabeled pool dormant during learning may be ineffective.\n\n\nBased on the aforementioned arguments, we advocate that focusing solely on the sample acquisition and supervision pool may not be an efficient way to build\nhigh-performing DL models in an AL framework for segmentation. Therefore, we consider augmenting the labeled pool using the model as a second source of annotation, in a self-learning fashion \\citep{mao2020survey} (Fig.\\ref{fig:fig-proposal}, right). This additional annotation might be less accurate (\\emph{i.e.}\\@\\xspace, weak\\footnote{Not to be confused with the weak annotation of data in weakly supervised learning frameworks.}) compared to the oracle that provides strong but expensive annotations. However, it is expected to quickly improve the model's performance \\citep{mao2020survey} while requiring only a few oracle-annotated samples, thereby reducing the annotation cost.\n\nOur main contributions are the following.\n\\textbf{(1) Architecture design}: As an alternative to CAMs, we propose using segmentation masks produced by a head trained with pixel-level annotations, which yields more accurate and high-resolution ROIs. 
This is achieved through a convolutional architecture capable of simultaneously classifying and segmenting images, with the segmentation task trained on annotations acquired through an AL framework.\nAs illustrated in Fig.\\ref{fig:fig-archi}, the architecture combines well-known DL models for classification (ResNet \\citep{heZRS16}) and segmentation (U-Net \\citep{Ronneberger-unet-2015}), although other architectures could also be used.\n\\textbf{(2) Active learning}: We augment the size of the labeled pool by weak-annotating a large number of unlabeled samples based on predictions of the DL model itself, providing a second source of annotation (Fig.\\ref{fig:fig-proposal}). This enables rapid improvements of the segmentation accuracy, with less oracle-based annotation. Moreover, our method can be integrated on top of any sample-acquisition method.\n\\textbf{(3) Experimental study}: We conducted comprehensive experiments over two challenging benchmarks -- high-resolution medical images (histology GlaS data for colon cancer) and natural images (CUB-200-2011 for bird species). Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of-the-art CAMs and AL methods, with an identical oracle-supervision budget.\n\n\n\\section{Related work}\n\\label{sec:related-work}\n\n\\noindent \\textbf{Deep active learning:}\nAL has been studied for a long time in machine learning, mainly for classification and regression, using linear models in particular \\citep{settles2009active}. Recently, there has been an effort to transfer such techniques to DL models for classification tasks by mimicking their intuition or by adapting them, taking into consideration model specificity and complexity. Such methods include, for instance, different mechanisms for uncertainty \\citep{beluch2018power,ducoffe2015qbdc,ducoffe2018adversarial,gal2017deep,kim2020task,kirsch2019batchbald,lakshminarayanan2017simple,wang2016cost,yoo2019learning} and representativeness estimation \\citep{kim2020task,sener2018coreset,sinha2019variational}. However, most deep AL techniques are validated on synthetic, simple or tiny data, which does not explore their full potential in real applications.\n\nWhile deep AL for classification is rapidly growing, deep AL models for segmentation are uncommon in the literature. In fact, the few existing methods have mostly focused on the direct application of deep AL classification methods. The limited research in this area may be explained by the fact that segmentation tasks bring challenges to AL, such as the additional spatial information and the fact that a segmentation mask lies in a much higher-dimensional space than a classification prediction. In classification, AL often deals with one output that is used to drive queries \\citep{huang2010active}. The spatial information in segmentation does not naturally provide a direct scoring function that can indicate the overall quality or certainty of the output. Most deep AL methods for segmentation consider pixels as classification instances, and apply standard AL techniques to each pixel.\n\nFor instance, the authors of \\citep{gaur2016membrane} exploit a variant of entropy-based acquisition at the pixel level, combined with a distribution-based term that encodes diversity using a complex hierarchical clustering algorithm over sliding windows, with application to microscopic membrane segmentation. 
Similarly, \\citep{gorriz2017active,lubrano2019deep} apply Monte-Carlo dropout uncertainty \\citep{gal2017deep} at the pixel level, with application to myelin segmentation using spinal cord and brain microscopic histology images. In \\citep{roels2019cost}, the authors experiment with five acquisition functions of classification for a segmentation task, including entropy-based, core-set \\citep{sener2018coreset}, k-mean and Bayesian \\citep{gal2017deep} sampling, with application to electron microscopy segmentation. Entropy-based methods seem to be dominant over multiple datasets. In \\citep{yang2017suggestive}, the authors combine two sampling terms for histology image segmentation. The first employs bootstrapping over fully convolutional networks (FCN) to estimate uncertainty, where a set of FCNs are trained on different subsets of samples. The second term is a representation-based term that selects the most representative samples. This is achieved by solving an optimization of a generalization version of the maximum cover set problem \\citep{feige1998threshold} using sample description extracted from an FCN. Despite the obtained promising results, this approach remains complex and impractical due to the use of bootstrapping over DL models and an optimization step. Moreover, the authors of \\citep{yang2017suggestive} do not provide a comparison to other acquisition functions. The work in \\citep{casanova2020} considers a specific case of AL using reinforcement learning for \\emph{region-based} AL for segmentation, where only a selected region of the image is labeled. This method is adequate for data sets with large and unbalanced classes, such as street-view images. While the method in \\citep{casanova2020} outperforms random and Bayesian \\citep{gal2017deep} selection, surprisingly, it performs close to entropy-based selection.\n\n\n\\noindent \\textbf{Weak annotators:}\nThe AL paradigm does not prohibit the use of unlabelled data for training \\citep{settles2009active}, but it mainly constrains the oracle-labeling budget. The standard AL experimental protocol (Fig.\\ref{fig:fig-proposal}, left) was inherited from AL of simple\/linear ML models, and adopted in subsequent works. Budget-constrained oracle annotation may not be sufficient to build effective DL models, due to the lack of labeled samples. Furthermore, several studies in AL for classification have successfully leveraged the unlabelled data to provide additional supervision \\citep{lin2017active,long2008graph,vijayanarasimhan2012active,wang2016cost,zhou2010active,zhualsslharmo2003}.\n\nFor instance, the authors of \\citep{lin2017active,wang2016cost} propose to pseudo-label a part of the unlabeled pool. The latter is selected using dynamic thresholding based on confidence, through the model itself, so as to learn a better embedding. Furthermore, a theoretical framework for AL using strong and weak annotators for classification task is introduced in \\citep{zhang2015active}. Their results suggest that using multiple annotators can reduce the cost of oracle annotation, in comparison to one annotator. 
Multiple sources of annotations that include both strong and weak annotators have been used in AL, crowd-sourcing, self-paced learning and other interactive learning scenarios for classification, to help reduce the number of requests to the strong annotator \\citep{kremer2018robust,malago2014online,mattsson2017active,murugesan2017active,urner2012learning,yan2016active,zhang2015active}.\nUsing the model itself for pseudo-annotation is motivated mainly by the success of deep self-supervised learning \\citep{mao2020survey}.\n\n\n\\begin{wrapfigure}{R}{0.4\\textwidth}\n\\centering\n\\includegraphics[scale=0.65]{knn}\n\\caption{The $k$-nn method for selecting the $\\mathbb{U}^{\\prime\\prime}$ subset to be pseudo-labeled. Assumption used to select $\\mathbb{U}^{\\prime\\prime}$: since $\\mathbb{U}^{\\prime\\prime}$ lies near supervised samples, it is more likely to be assigned good segmentations by the model.\nWe consider measuring $k$-nn for each \\textbf{unlabeled} sample. In this example, using $k=4$ allows ${\\abs{\\mathbb{U}^{\\prime\\prime}} = 14}$. If $k$-nn is considered for each \\textbf{labeled} sample instead: $\\abs{\\mathbb{U}^{\\prime\\prime}} = 8$. ${\\abs{\\cdot}}$ denotes set cardinality. Note that ${k}$-nn is only considered between samples of the \\emph{same class}.}\n \\label{fig:fig-knn}\n\\vspace*{-0.1in}\n\\end{wrapfigure}\n\n\n\\noindent \\textbf{Label Propagation (LP):}\nOur approach is also related to LP methods \\citep{bengiolabelprop2010,zhou2004learning,zhulp2002} for classification, which aim to label unlabeled samples using knowledge from the labeled ones (Fig.\\ref{fig:fig-knn}). However, while LP propagates labels to unlabeled samples through an iterative process, our approach bypasses this using the model itself. In our case, the propagation is limited to the neighbors of labeled samples defined through $k$-nearest neighbors ($k$-nn) (Fig.\\ref{fig:fig-knn}). Using $k$-nn has also been studied to combine AL and domain adaptation \\citep{berlind2015active}, where the goal is to query samples from the target domain. Such an approach is connected to the recently developed core-set method for deep AL \\citep{sener2018coreset}. Our method intersects with \\citep{berlind2015active} only in the sense of predicting the labels of query samples using their labeled neighbors.\n\nIn contrast to state-of-the-art DL models for AL segmentation, we consider increasing the labeled pool with pseudo-annotated samples (Fig.\\ref{fig:fig-proposal}, right). To this end, the model is used for pseudo-labeling samples within the neighborhood of samples with strong supervision (Fig.\\ref{fig:fig-knn}). From a self-learning perspective, the works in \\citep{lin2017active,wang2016cost} on face recognition are the closest to ours. While both rely on pseudo-labeling, they mainly differ in the sample selection for pseudo-annotation. In \\citep{lin2017active,wang2016cost}, the authors considered model confidence, where samples with high confidence are pseudo-labeled, while low-confidence samples are queried. While this yields good results, it makes the overall method strongly dependent on model confidence. As we consider segmentation tasks, model confidence is not well-defined. Moreover, averaging pixel-wise values may poorly represent model confidence.\n\nOur approach relies on the spatial assumption in Fig.\\ref{fig:fig-knn}, where the samples to pseudo-label are selected to be near the labeled samples, and expected to have good pseudo-segmentations. 
This makes the oracle-querying technique independent of the pseudo-labeling method, giving more flexibility to the user. Our pseudo-labeled samples are added to the labeled pool, along with the samples annotated by the oracle. The underlying assumption is that, given a sample labeled by an oracle, the model is more likely to produce good segmentations of images located near that sample. This assumption is verified empirically in the experimental section of this paper. This simple procedure rapidly increases the number of pseudo-labeled samples, and helps improve segmentation performance under a limited amount of oracle-based supervision.\n\n\n\n\n\n\\section{Proposed approach}\n\\label{sec:proposal}\n\n\n\\begin{wrapfigure}{R}{0.4\\textwidth}\n\\centering\n \\includegraphics[scale=0.75]{archi}\n \\caption{Our proposed DL architecture for classification and segmentation, composed of: (1) a shared \\textbf{backbone} for feature extraction; (2) a \\textbf{classification head} for the classification task; and (3) a \\textbf{segmentation head} for the segmentation task, in a U-Net style \\citep{Ronneberger-unet-2015}. The latter merges representations from the backbone, while gradually upscaling the feature maps to reach the full image resolution for the predicted mask, similarly to the U-Net model.}\n \\label{fig:fig-archi}\n\\vspace*{-0.1in}\n\\end{wrapfigure}\n\n\nWe consider an AL framework for training deep WSL models, where all the training images have class-level annotations, but no pixel-level annotations. Due to their high cost, pixel annotations are gradually acquired for training through oracle queries. This propagates the pixel-wise knowledge encoded in the labeled images through the model.\n\nActive learning training consists of sequential training rounds. At each round $r$, the total training set ${\\mathbb{D}}$, which contains $n$ samples, is divided into unlabeled and labeled subsets (Fig.\\ref{fig:fig-proposal}).\n\\textbf{(1) Unlabeled subset:} contains samples without pixel-wise annotation (unlabeled samples) ${\\mathbb{U} = \\{\\bm{x}_i, y_i, \\varnothing\\}_{i=1}^u}$ where ${\\bm{x} \\in \\mathcal{X}}$ is the input image, ${y}$ is its global label, and the pixel-level label is missing (denoted ${\\varnothing}$).\n\\textbf{(2) Labeled subset:} contains samples with full supervision ${\\mathbb{L}=\\{\\bm{x}_i, y_i, \\bm{m}_i\\}_{i=1}^l}$ where ${\\bm{m}}$ is the pixel-wise annotation of the sample. ${\\mathbb{L}}$ is initially empty. It is gradually populated from ${\\mathbb{U}}$ by querying the oracle using an acquisition function.\nLet ${f(\\cdot: \\bm{\\theta})}$ denote a DL model that is able to classify and segment an image ${\\bm{x}}$ (Fig.\\ref{fig:fig-archi}). For clarity, and since we focus on the segmentation task, we omit the notation for the classification task. Therefore, ${f(\\bm{x})}$ refers to the predicted segmentation mask.\nLet ${\\mathbb{U}^{\\prime} \\subseteq \\mathbb{U}}$ and ${\\mathbb{U}^{\\prime\\prime} \\subseteq \\mathbb{U}}$ denote two subsets (Fig.\\ref{fig:fig-proposal}), with ${ \\mathbb{U}^{\\prime} \\cap \\mathbb{U}^{\\prime\\prime} = \\varnothing}$.\nIn our method, we introduce ${\\mathbb{P}}$ as a subset holder for pseudo-labeled samples, which is initially empty and gradually replenished (Fig.\\ref{fig:fig-proposal}, right).\nTo express the dependency of each subset on round ${r}$, we introduce the following notations: ${\\mathbb{U}_r, \\mathbb{L}_r, \\mathbb{P}_r, \\mathbb{U}^{\\prime}_r, \\mathbb{U}^{\\prime\\prime}_r}$. 
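For illustration, this bookkeeping can be sketched with simple containers, as in the minimal example below; the names \\texttt{Sample} and \\texttt{Pools} are hypothetical and do not refer to our actual implementation.

\\begin{verbatim}
# Minimal illustrative sketch of the round-wise pools (not the actual code).
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class Sample:
    x: np.ndarray                    # input image
    y: int                           # global image-level label (always known)
    m: Optional[np.ndarray] = None   # pixel-wise mask; None while unavailable

@dataclass
class Pools:
    U: List[Sample] = field(default_factory=list)  # no pixel-wise annotation
    L: List[Sample] = field(default_factory=list)  # oracle (strong) annotation
    P: List[Sample] = field(default_factory=list)  # pseudo-labels from the model
\\end{verbatim}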
The samples in ${\\mathbb{P}_r}$ are denoted as ${\\{\\bm{x}_i, y_i, \\hat{\\bm{m}}_i\\}}$. The following holds: ${\\forall r: \\mathbb{D} = \\mathbb{L}_r \\cup \\mathbb{U}_r \\cup \\mathbb{P}_r}$.\n\n\n\nAlg.\\ref{alg:alg-0} describes the overall AL process with our pseudo-annotation method.\nFirst, ${\\mathbb{U}^{\\prime}_r}$ is queried, then labeled by the oracle, and added to ${\\mathbb{L}_r}$.\nUsing $k$-nn, ${\\mathbb{U}^{\\prime\\prime}_r}$ is selected based on its proximity to ${\\mathbb{L}_r}$ (Fig.\\ref{fig:fig-knn}), pseudo-labeled by the model, and then added to ${\\mathbb{P}_r}$. To quickly increase the size of ${\\mathbb{L}_r}$, ${\\mathbb{P}_r}$ is protected from oracle querying for as long as possible. When querying from ${\\mathbb{P}_r}$ becomes inevitable, the queried samples are used to fill ${\\mathbb{U}^{\\prime}}$, and they are no longer considered pseudo-labeled since they will be assigned the oracle annotation.\n\nTo measure image similarity for the $k$-nn method, we use the color distribution to describe image content. This can be a flexible descriptor for highly unstructured images such as histology images. Note that the ${k}$-nn method is considered \\emph{only} for pairs of samples of the \\emph{same class}. The underlying assumption is that samples of the same class, with similar color distributions, are likely to contain relatively similar objects. Consequently, labeling representative samples could be a proxy for supervised learning based on the underlying data distribution. This increases the likelihood that the model provides relatively good segmentations of the other samples. The proximity between two images ${(\\bm{x}_i, \\bm{x}_j)}$ is measured using the Jensen-Shannon divergence between their respective color distributions (measured as normalized histograms). For an image with multiple color planes, the similarity is formulated as the sum of similarities, one for each plane.\n\nAt round $r$, the queried and pseudo-labeled samples are both used in training by optimizing the following loss function:\n\\begin{equation}\n\\label{eq:eq-1}\n \\min_{\\bm{\\theta}} \\sum_{\\bm{x}_i \\in \\mathbb{L}_{r-1}} \\mathcal{L}(f(\\bm{x}_i), \\bm{m}_i) + \\lambda \\sum_{\\bm{x}_i \\in \\mathbb{P}_{r-1}} \\mathcal{L}(f(\\bm{x}_i), \\hat{\\bm{m}}_i),\n\\end{equation}\nwhere ${\\mathcal{L}(\\cdot, \\cdot)}$ is a segmentation loss, and ${\\lambda}$ a positive scalar. Eq.(\\ref{eq:eq-1}) corresponds to training the model (Fig.\\ref{fig:fig-archi}) solely for the segmentation task. Simultaneous training for classification and segmentation in this AL setup is avoided due to the imbalance between the number of samples that are labeled globally and at the pixel level. Therefore, we consider training the model for classification first. Then, we freeze the classifier parameters and proceed with training for the segmentation task. 
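To make these steps concrete, the sketch below illustrates the color-based similarity, one plausible reading of the $k$-nn selection rule of Fig.\\ref{fig:fig-knn}, and the loss of Eq.(\\ref{eq:eq-1}). It is illustrative only: the function names, the number of histogram bins and the call \\texttt{model.predict\\_mask} are assumptions rather than our actual implementation, and samples are assumed to expose attributes \\texttt{x}, \\texttt{y} and \\texttt{m} as above.

\\begin{verbatim}
# Illustrative sketch (assumes 8-bit color images as numpy arrays and scipy;
# names are hypothetical, not the actual code).
import numpy as np
from scipy.spatial.distance import jensenshannon

def color_histograms(img, bins=64):
    # one normalized histogram per color plane
    hists = []
    for c in range(img.shape[-1]):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 255))
        hists.append(h / max(h.sum(), 1))
    return hists

def dissimilarity(img_a, img_b):
    # sum over color planes of the Jensen-Shannon divergence
    # (scipy returns the JS distance, i.e., the square root of the divergence)
    return sum(jensenshannon(ha, hb) ** 2
               for ha, hb in zip(color_histograms(img_a),
                                 color_histograms(img_b)))

def select_pseudo_candidates(labeled, unlabeled, k):
    # One plausible reading of the rule of Fig. fig-knn: an unlabeled sample
    # joins U'' if at least one oracle-labeled sample of the same class is
    # among its k nearest neighbors (neighbors of the same class only).
    selected = []
    for u in unlabeled:
        same_class = [s for s in labeled + unlabeled
                      if (s is not u) and (s.y == u.y)]
        same_class.sort(key=lambda s: dissimilarity(u.x, s.x))
        if any(s.m is not None for s in same_class[:k]):
            selected.append(u)
    return selected

def round_loss(model, L, P, seg_loss, lam):
    # Eq. (1): supervised term over L (oracle masks) plus a term over P
    # (masks predicted earlier by the model), weighted by lam.
    loss_L = sum(seg_loss(model.predict_mask(s.x), s.m) for s in L)
    loss_P = sum(seg_loss(model.predict_mask(s.x), s.m) for s in P)
    return loss_L + lam * loss_P
\\end{verbatim}

As described above, the classifier is trained first and then frozen, so that only the segmentation head is optimized with Eq.(\\ref{eq:eq-1}) during the AL rounds.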
This yields the best classification performance, and allows a better study of the impact of the queried samples on the segmentation task.\n\nConsidering the relation of our method and label propagation algorithm \\citep{bengiolabelprop2010,zhou2004learning,zhulp2002}, we refer to our proposal as Label\\_prop.\n\n\n\\begin{center}\n\\begin{minipage}{0.6\\linewidth}\n\\IncMargin{0.04in}\n\\begin{algorithm}[H]\n \\SetKwInOut{Input}{Input}\n \n \\Input{\n ${\\mathbb{P}_0 = \\mathbb{L}_0 = \\varnothing}$\n \\\\\n ${\\bm{\\theta}^0}$: Initial parameters of ${f}$ trained on the classification task.\n \\\\\n ${\\texttt{maxr}: \\textrm{Maximum number of AL rounds}}$.\n }\n \\vspace{0.1in}\n Select ${\\mathbb{U}^{\\prime}_0}$ randomly and label them by an oracle. \\\\\n \\vspace{0.025in}\n $ \\mathbb{L}_0 \\ \\leftarrow\\ \\mathbb{U}^{\\prime}_0$. \\\\\n \\For{${r \\in 1 \\cdots \\texttt{maxr}}$} \n {\n \n ${\\bm{\\theta} \\ \\leftarrow\\ \\bm{\\theta}^0}$. \\\\\n \\vspace{0.03in}\n Train $f$ using ${\\mathbb{L}_{r-1}}$ \\colorbox{mybluelight}{${\\cup\\; \\mathbb{P}_{r-1}}$} and the loss in Eq. (\\ref{eq:eq-1}). \\\\\n \\vspace{0.03in}\n Select ${\\mathbb{U}^{\\prime}_r}$ and label them by an oracle. \\\\\n \\vspace{0.03in}\n $ \\mathbb{L}_r \\ \\leftarrow\\ \\mathbb{L}_{r-1} \\cup \\mathbb{U}^{\\prime}_r$. \\\\\n \\vspace{0.03in}\n \\colorbox{mybluelight}{Select ${\\mathbb{U}^{\\prime\\prime}_r}$.} \\\\\n \\vspace{0.03in}\n \\colorbox{mybluelight}{$ \\mathbb{P}_r \\ \\leftarrow\\ \\mathbb{P}_{r-1} \\cup \\mathbb{U}^{\\prime\\prime}_r$.} \\\\ \n \\vspace{0.03in}\n \\colorbox{mybluelight}{Pseudo-label ${\\mathbb{P}_r}$.}\n }\n \\vspace{0.1in}\n \\caption{Standard AL procedure and our proposal. The extra instructions associated with our method are indicated with a \\colorbox{mybluelight}{blue background}.\n }\n \\label{alg:alg-0}\n\\end{algorithm}\n\\DecMargin{0.04in}\n\\end{minipage}\n\\end{center}\n\n\n\n\\section{Results and discussion}\n\\label{sec:experiments}\n\n\n\n \\subsection{Experimental methodology:}\n\n \\begin{figure}[!b]\n \\centering\n \\includegraphics[width=0.99\\linewidth]{samples-datasets}\n \\caption{\\textbf{Top row}: GlaS dataset \\citep{sirinukunwattana2017gland}. \\textbf{Bottom row}: CUB dataset \\citep{WahCUB2002011}. (Best visualized in color.)}\n \\label{fig:fig-datasets}\n\\end{figure}\n\n\\noindent \\textbf{a) Datasets.}\nFor evaluation, datasets should have global and pixel-wise annotation. We consider two public datasets including both medical (histology) and natural images (Fig.\\ref{fig:fig-datasets}).\n\\begin{inparaenum}[(1)]\n\n \\item \\textbf{GlaS dataset}: This dataset contains histology images for colon cancer diagnosis\\footnote{GlaS: \\href{https:\/\/warwick.ac.uk\/fac\/sci\/dcs\/research\/tia\/glascontest}{warwick.ac.uk\/fac\/sci\/dcs\/research\/tia\/glascontest}.} \\citep{sirinukunwattana2017gland}. It includes 165 images derived from 16 Hematoxylin and Eosin (H\\&E) histology sections of two grades (classes): benign and malignant. It is divided into 84 samples for training and 80 samples for testing.\n The ROIs to be segmented are the glandes.\n \n \n \\item \\textbf{CUB-200-2011 dataset (CUB)}\\footnote{CUB: \\href{http:\/\/www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}{www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}} \\citep{WahCUB2002011} is a dataset for bird species with ${11,788}$ samples ($5,994$ for training and $5,794$ for testing) and ${200}$ species. 
The ROIs to be segmented are the birds.\n \n \\end{inparaenum}\n %\n In GlaS and CUB datasets, we randomly select $80\\%$ of the training samples for effective training, and $20\\%$ for validation (with full supervision) to perform early stopping. The splits are identical to the ones used in \\citep{belharbi2019unimoconstraints,rony2019weak-loc-histo-survey} (split 0, fold 0), and are publicly available.\n\n\\noindent \\textbf{b) Active learning setup.}\nTo assess the performance of different AL acquisition methods, we consider a realistic scenario with respect to the number of samples to be labeled at each AL round, accounting for the load imposed on the oracle.\nTherefore, only a few samples are selected at each round for oracle annotation, and ${\\mathbb{L}}$ is slowly replenished. This allows better comparison between AL selection techniques since we spend more time\nin a phase where ${\\mathbb{L}}$ holds a few samples. Such a phase allows to better measure the impact of the selected samples. Filling ${\\mathbb{L}}$ quickly brings the model's performance to a plateau that hides the impact of newly selected samples.\nThe initial replenishment ($r=1$) is achieved by randomly selecting a few samples. The same samples are used for all AL techniques at round $r=1$ for a fair comparison. To avoid any bias from selecting unbalanced classes that can directly affect the segmentation performance, and hinder AL evaluation, the same number of samples is selected from each class (since the global annotation is known beforehand for all the samples). Note that the oracle is used only to provide pixel-wise annotation. Tab.\\ref{tab:tab0} provides the selection details.\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Number of samples selected for the oracle per round.}\n\\label{tab:tab0}\n\\centering\n \\resizebox{0.7\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\makecell[c]{\\#selected samples \\\\ \\emph{per-class} ($r=1$)} & \\makecell[c]{\\#selected samples \\\\ \\emph{per-class} ($r > 1$)} & \\makecell[c]{Max AL rounds \\\\ (\\texttt{maxr} in Alg.\\ref{alg:alg-0})}\\\\\n \\toprule\n GlaS & $4$ & $1$ & $25$ \\\\\n CUB & $1$ & $1$ & $20$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\\noindent \\textbf{c) Evaluation.}\nWe report the classification accuracy obtained by the classification head (Fig.\\ref{fig:fig-archi}). Average Dice index is used to measure the segmentation quality at each AL round forming a Dice index curve over all the rounds. To better assess the \\emph{dominance} of each method \\citep{settles2009active}, the Area Under the Dice index Curve is used (AUC). This provides a fair indicator of the dominant curve, but contrasts with standard AL works, where one or multiple specific operating points in the curve are selected (leading to a biased and less accurate protocol). The average and standard deviation of Dice index curve and AUC metric are reported based on 5 replications of a complete AL session, using a different seed for each session. An AL session across different methods uses the same seed.\n\nWhile our approach, referred to as (Label\\_prop), can operate on top of any AL selection criterion, we demonstrate its efficiency using simple random selection, which is often a baseline for AL experiments. Note that our pseudo-annotations are obtained from the segmentation head shown in Fig.\\ref{fig:fig-archi}. 
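As a complement to the protocol described in b) and c) above, the sketch below illustrates the class-balanced random acquisition and the AUC summary of the per-round Dice-index curve. It is illustrative only; the function names and the trapezoidal normalization are assumptions and not necessarily the exact code used to produce the reported results.

\\begin{verbatim}
# Illustrative sketch (not the exact experimental code).
import random
import numpy as np

def class_balanced_random_query(unlabeled, per_class, rng=None):
    # select the same number of samples from each class, using only the
    # global labels y (which are known for all training samples)
    rng = rng or random.Random(0)
    by_class = {}
    for s in unlabeled:
        by_class.setdefault(s.y, []).append(s)
    queries = []
    for _cls, samples in by_class.items():
        queries += rng.sample(samples, min(per_class, len(samples)))
    return queries

def dice_auc(dice_per_round):
    # area under the Dice-index curve across AL rounds (trapezoidal rule),
    # normalized by the number of rounds so it stays on the Dice scale
    d = np.asarray(dice_per_round, dtype=float)
    return np.trapz(d, dx=1.0) / max(len(d) - 1, 1)
\\end{verbatim}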
Our method is compared to three different AL selection approaches for segmentation:\n\\textbf{(1) random selection (Random)}: the samples are randomly selected;\n\\textbf{(2) entropy-based selection (Entropy)}: the scoring function per sample is the average entropy at the pixel level \\citep{gaur2016membrane}. Samples with high entropy are selected; and\n\\textbf{(3) Monte-Carlo dropout uncertainty (MC\\_Dropout)}: we use Monte-Carlo dropout \\citep{gorriz2017active,lubrano2019deep} at the pixel level to compute the uncertainty score per sample. Samples are forwarded ${50}$ times through the model, with dropout set to ${0.2}$ \\citep{gorriz2017active,lubrano2019deep}. Then, the pixel-wise variance is estimated. Samples with a high mean variance are selected.\n\n\n\\noindent \\textbf{Lower bound performance (WSL)}: We consider the segmentation performance obtained by the WSL method as a lower bound. The model is trained using only global annotations. CAMs are used to extract the segmentation mask. The WILDCAT method \\citep{durand2017wildcat} is used at the classification head (Fig.\\ref{fig:fig-archi}) to obtain the CAMs. For the WSL method, a model pre-trained on ImageNet \\citep{imagenetcvpr09} is used to initialize the weights of the backbone, which is then fine-tuned.\nThe model is trained over the entire dataset, where samples are labeled globally only.\nThe classifier obtained with seed=0 is frozen and used as a backbone for \\emph{all} the other methods.\n\n\\noindent \\textbf{Upper bound performance (Full\\_sup)}: Fully supervised segmentation is considered as an upper bound on performance. The model in Fig.\\ref{fig:fig-archi} is trained for segmentation only, using the entire dataset with samples labeled at the pixel level.\n\nFor a fair comparison, all the methods are trained using the same hyper-parameters over the same dataset. The WSL and Full\\_sup methods have minor differences. Due to space limitations, all the hyper-parameters are presented in the supplementary material.\nIn Alg.\\ref{alg:alg-0}, notice that for our method, ${\\mathbb{P}_r}$ is not used at the current round $r$ but only at the next round ${r+1}$. 
To take advantage of ${\\mathbb{P}_r}$ at round $r$, instructions from line-4 to line-10 are repeated twice in the provided results.\n\n\n\n\n\n\n\n\n\\subsection{Results}\n\\label{sub:results}\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.39]{glas-dice-idx-test}\n \\caption{}\n \\label{fig:fig-al-0}\n \\end{subfigure}%\n \\begin{subfigure}[b]{.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.39]{Caltech-UCSD-Birds-200-2011-dice-idx-test}\n \\caption{}\n \\label{fig:fig-al-1}\n \\end{subfigure}%\n\\caption{Average Dice index of the proposed and baseline methods over test sets.\n(\\protect\\subref{fig:fig-al-0}) GlaS.\n(\\protect\\subref{fig:fig-al-1}) CUB.\n}\n\\label{fig:fig-al-results}\n\\end{figure}\n\n\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Classification accuracy over of the proposed deep WSL model on GlaS and CUB test datasets.}\n\\label{tab:tab-cl-acc}\n\\centering\n \\resizebox{.5\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\multicolumn{1}{c}{GlaS} & \\multicolumn{1}{c}{CUB}\\\\\n \\toprule\n \\makecell{Classification \\\\ accuracy (\\%)} & $99.50 \\pm 0.61$\n & $73.22 \\pm 0.19$\n \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\begin{table}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Average AUC and standard deviation (Fig.\\ref{fig:fig-al-results}) for Dice index performance over GlaS and CUB test sets.}\n\\label{tab:tab-seg-perf}\n\\centering\n \\resizebox{.6\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\multicolumn{1}{c}{GlaS} & \\multicolumn{1}{c}{CUB} \\\\\n \\toprule\n \\toprule\n WSL & $66.44 \\pm 0.20$ & $39.22 \\pm 0.19$ \\\\\n \\midrule\n Random & $78.57 \\pm 0.93$ & $68.15 \\pm 0.61$ \\\\\n Entropy & $79.13 \\pm 0.26$ & $68.25 \\pm 0.29$ \\\\\n MC\\_Dropout & $77.92 \\pm 0.49$ & $67.69 \\pm 0.27$ \\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $\\bm{81.48 \\pm 1.03}$ & $\\bm{71.73 \\pm 0.67}$ \\\\\n \\midrule\n Full\\_sup & $86.53 \\pm 0.31$ & $75.29 \\pm 1.50$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\\begin{table*}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Readings of Dice index (mean $\\pm$ standard deviation) from Fig.\\ref{fig:fig-al-results} over test set for the \\textbf{first 5 queries} formed by each method. 
We start from the second query since the first query is random but identical for all methods.}\n\\label{tab:tab-dice-q}\n\\centering\n \\resizebox{0.9\\linewidth}{!}{\n\\begin{tabular}{l|cccccc}\n \\textbf{Queries} & \\textbf{q2} & \\textbf{q3} & \\textbf{q4} & \\textbf{q5} & \\textbf{q6} \\\\\n \\toprule\n \\toprule\n \\multicolumn{6}{c}{GlaS} \\\\\n \\toprule\n WSL & \\multicolumn{4}{c}{$66.44 \\pm 0.20$} \\\\\n \\midrule\n Random & $70.26 \\pm 3.02$ & $71.58 \\pm 3.14$ & $71.43 \\pm 1.83$ & $74.05 \\pm 3.14$ & $75.36 \\pm 3.45$\\\\\n Entropy & $\\bm{72.75 \\pm 2.96}$ & $70.93 \\pm 3.58$ & $72.60 \\pm 1.44$ & $73.44 \\pm 1.38$ & $75.15 \\pm 1.63$\\\\\n MC\\_Dropout & $68.44 \\pm 2.89$ & $69.70 \\pm 1.96$ & $69.97 \\pm 1.95$ & $72.71 \\pm 2.21$ & $73.00 \\pm 1.04$\\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $71.02 \\pm 4.19$ & $\\bm{74.07 \\pm 3.93}$ & $ \\bm{76.52 \\pm 3.49}$ & $\\bm{77.63 \\pm 2.73}$ & $\\bm{78.41 \\pm 1.23}$\\\\\n \n Full\\_sup & \\multicolumn{4}{c}{$86.53 \\pm 0.31$}\\\\\n \\toprule \\toprule\n \\multicolumn{6}{c}{CUB} \\\\\n \\toprule\n WSL & \\multicolumn{4}{c}{$39.22 \\pm 0.18$} \\\\\n \\midrule\n Random & $56.86 \\pm 2.07$ & $61.39 \\pm 1.85$ & $62.97 \\pm 1.13$ & $63.56 \\pm 4.02$ & $66.56 \\pm 2.50$\\\\\n Entropy & $53.37 \\pm 2.06$ & $59.11 \\pm 2.50$ & $60.48 \\pm 3.56$ & $63.81 \\pm 2.75$ & $63.59 \\pm 2.34$\\\\\n MC\\_Dropout & $57.13 \\pm 0.83$ & $59.98 \\pm 2.06$ & $63.52 \\pm 2.26$ & $63.02 \\pm 2.68$ & $64.68 \\pm 1.41$\\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $\\bm{62.58 \\pm 2.15}$ & $\\bm{66.32 \\pm 2.34}$ & $ \\bm{67.01 \\pm 2.85}$ & $\\bm{69.40 \\pm 3.40}$ & $\\bm{68.28 \\pm 1.60}$\\\\\n \\midrule\n Full\\_sup & \\multicolumn{4}{c}{$75.29 \\pm 1.50$}\\\\\n \\bottomrule\n \\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\nWe report the classification and segmentation performances following the training the proposed deep WSL model in Fig.\\ref{fig:fig-archi}. Tab.\\ref{tab:tab-cl-acc} reports the Classification accuracy of the classification head using WSL, which is\nclose to the results reported in \\citep{belharbi2019weakly,rony2019weak-loc-histo-survey}. The results of GlaS suggest that it is an easy dataset for classification.\n\nThe segmentation results are reported in Tabs. \\ref{tab:tab-dice-q} and \\ref{tab:tab-seg-perf}, and in Fig \\ref{fig:fig-al-results}.\n\nFig. \\ref{fig:fig-al-0} compares Dice accuracy on the \\textbf{GlaS dataset}. On the latter, we observe that adding more labels increases Dice index for all AL methods, yielding, as expected, better performance than the WSL method. Reading from Tab.\\ref{tab:tab-dice-q}, randomly labeling only 4 samples per class enables to easily outperform WSL. This means that using our approach in Fig.\\ref{fig:fig-archi}, with limited supervision, can lead to more accurate masks compared to using CAMs in the WSL method. From Fig.\\ref{fig:fig-al-0}, one can also observe that Random, Entropy, and MC\\_Dropout methods grow relatively in the same way, leading to the same overall performance, with the Entropy method slightly ahead. Considering the overall behavior of the curves, one may conclude that using advanced selection techniques such as MC\\_Dropout and Entropy provides an accuracy similar to simple random selection. On the one hand, since both methods have shown substantial improvements in AL for classification, and based on the results in Fig.\\ref{fig:fig-al-0}, one may conclude that all samples are equivalently informative for the model. 
Therefore, there is no better order to acquire them. On the other hand, using simply random selection and pseudo-labeled samples allowed our method to substantially improve the overall performance, demonstrating the benefits of self-learning.\n\nFig.\\ref{fig:fig-al-1} and Tab.\\ref{tab:tab-dice-q} compare Dice accuracy on the \\textbf{CUB dataset}, where labeling only one sample per class yielded a large improvement in Dice index, in comparison to WSL. Adding more samples increases the performance of all the methods. One can observe a similar pattern to GlaS: Random, Entropy and MC\\_Dropout methods yield similar curves, while the AUC performances of Random and Entropy methods are similar, and slightly ahead of MC\\_Dropout.\nSimilar to the GlaS analysis, and based on the results of these three methods, one can conclude that none of the methods for ordering the samples is better than simple random selection. Using self-labeled samples in our method again shows its benefits. Simple random selection combined with self-annotation yields the best overall performance. Over the two datasets, our empirical results suggest that self-learning, under limited oracle-annotation, has the potential to provide a reliable second source of annotation, which can efficiently enhance model performance, while using simple sample acquisition techniques.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=.7\\linewidth]{dice-pseudo-labeled}\n \\caption{Average Dice index over the pseudo-labeled samples of our method in \\textbf{each} AL round.}\n \\label{fig:fig-dice-pseudo-labeled}\n\\end{figure}\n\n\\noindent \\textbf{Pseudo-annotation performance}. Furthermore, the proposed approach is assessed on the pseudo-labeled samples at each AL round. Fig.\\ref{fig:fig-dice-pseudo-labeled} shows that the model provides good segmentations at the initial rounds. Then, the more supervision, the more accurate the pseudo-segmentation, as expected. This figure shows the potential of self-learning in segmentation, and confirms our assumption that samples near the labeled ones are likely to be assigned accurate pseudo-segmentations by the model.\n\n\\noindent \\textbf{Hyper-parameters}. Our approach requires two main hyper-parameters: ${ k \\ \\text{and } \\lambda}$. We conducted an ablation study over ${k}$ on the GlaS dataset, and over ${\\lambda}$ on both datasets. Results, which are presented in the supplementary material, suggest that our method is relatively insensitive to ${k}$. ${\\lambda}$ plays an important role, and\nbased on our study, we recommend using small values of this weighting parameter. In our experiments, we used $\\lambda=0.1$ for GlaS and $\\lambda = 0.001$ for CUB. We set ${k=40}$. We note that hyper-parameter tuning in AL is challenging due to the changing size of the training set, which in turn changes the training dynamics. In all the experiments, we used fixed hyper-parameters across the AL rounds.\nFig.\\ref{fig:fig-dice-pseudo-labeled} suggests that a dynamic ${\\lambda(r)}$ that is increased through AL rounds could be more beneficial. However, this requires a principled update protocol for ${\\lambda}$, which was not explored in this work. Nonetheless, using a fixed value seems to yield promising results overall.\n\n\\noindent \\textbf{Supplementary material}. 
Due to space limitation, we deferred the hyper-parameters used in the experiments, results of the ablation study, visual results for the similarity measure and examples of predicted masks to the supplementary materials.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nDeep WSL models trained with global image-level annotations can play an important role in CNN visualization and interpretability. However, they are prone to high false-positive rates, especially for challenging images, leading to poor segmentations.\nTo alleviate this issue, we considered using pixel-wise supervision provided gradually through an AL framework. This annotation is integrated into training using an adequate deep convolutional model that allows supervised learning for both tasks: classification and segmentation. Through a few pixel-supervised samples, such a design is intended to provide full-resolution and more accurate masks compared to standard CAMs, which are trained without pixel supervision and often provide coarse resolution. Therefore, it enables a better CNN visualization and interpretation of CNN predictions.\nFurthermore, and unlike standard deep AL methods that focus solely on the acquisition function, we considered using self-learning as a second source of supervision to fast-improve the model segmentation.\nEvaluating our method using a realistic AL protocol over two challenging benchmarks, our results indicate that:\n(1) using a \\emph{few} supervised samples, the proposed architecture yielded more accurate segmentations compared to CAMs, with a large margin using different AL methods. Thus, it provides a solution to enhance pixel-wise predictions\nin real-world visual recognition applications.\n(2) using self-learning with random selection yielded substantial improvements. 
Self-learning under a limited oracle-budget can, therefore, provide a cost-effective alternative to standard AL protocols, where most of the effort is spent on the acquisition function.\n\n\\section*{Acknowledgment}\nThis research was supported in part by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, Compute Canada, MITACS, and the Ericsson Global AI Accelerator Montreal.\n\n\\clearpage\n\\newpage\n\n\\appendices\n\n\n\n\\section{Supplementary material for the experiments}\nDue to space limitation, we provide in this supplementary material detailed hyper-parameters used in the experiments, results of the ablation study, visual results to the similarity measure, and examples of predicted masks.\n\n\\subsection{Training hyper-parameters}\n\\label{subsec:hyper-params}\nTab.\\ref{tab:tabx-tr-hyper-params} shows the used hyper-parameters in all our experiments.\n\n\n\\begin{table}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Training hyper-parameters.}\n\\label{tab:tabx-tr-hyper-params}\n\\centering\n\\resizebox{0.7\\linewidth}{!}{\n\\begin{tabular}{lccc}\n Hyper-parameter & GlaS & CUB\\\\\n \\toprule\n Model backbone & \\multicolumn{2}{c}{ResNet-18 \\citep{heZRS16}}\\\\\n WILDCAT \\citep{durand2017wildcat}: && \\\\\n ${\\alpha}$ & \\multicolumn{2}{c}{${0.6}$} \\\\\n ${kmin}$ & \\multicolumn{2}{c}{${0.1}$} \\\\\n ${kmax}$ & \\multicolumn{2}{c}{${0.1}$} \\\\\n modalities & \\multicolumn{2}{c}{${5}$} \\\\\n \\midrule\n Optimizer & \\multicolumn{2}{c}{SGD}\\\\\n Nesterov acceleration & \\multicolumn{2}{c}{True}\\\\\n Momentum & \\multicolumn{2}{c}{$0.9$} \\\\\n Weight decay & \\multicolumn{2}{c}{$0.0001$}\\\\\n Learning rate (LR) & ${0.1}$ (WSL: $10^{-4}$) & ${0.1}$ (WSL: $10^{-2}$)\\\\\n LR decay & ${0.9}$ & ${0.95}$ (WSL: ${0.9}$) \\\\\n LR frequency decay & $100$ epochs & $10$ epochs \\\\\n Mini-batch size & ${20}$ & ${8}$ \\\\\n Learning epochs & ${1000}$ & ${30}$ (WSL: ${90}$) \\\\\n \\midrule\n Horizontal random flip & \\multicolumn{2}{c}{True} \\\\\n Vertical random flip & True & False \\\\\n \n Crop size & \\multicolumn{2}{c}{${416 \\times 416}$}\\\\\n \\midrule\n ${k}$ & \\multicolumn{2}{c}{${40}$}\\\\\n ${\\lambda}$ & ${0.1}$ & ${0.001}$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{knn-ablation-glas-Label_prop-best-k-40}\n \\caption{Ablation study over GlaS dataset (test set) over the hyper-parameter $k$ (x-axis). y-axis: AUC of Dice index (\\%) of \\textbf{25 queries for one trial}.\n AUC average $\\pm$ standard deviation: ${81.49 \\pm 0.59}$.\n Best performance in red dot: $k=40, AUC=82.41\\%$.\n }\n \\label{fig:abl-glas-knn}\n\\end{figure}\n\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{lambda-ablation-glas-Label_prop-best-lambda-0-dot-1}\n \\caption{Ablation study over GlaS dataset (test set) over the hyper-parameter ${\\lambda}$ (x-axis). y-axis: AUC of Dice index (\\%) of \\textbf{15 queries for one trial}.\n Best performance in red dot: $\\lambda=0.1, AUC=79.15\\%$.\n }\n \\label{fig:abl-glas-lambda}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{lambda-ablation-Caltech-UCSD-Birds-200-2011-Label_prop-best-lambda-0-dot-001}\n \\caption{Ablation study over CUB dataset (test set) over the hyper-parameter ${\\lambda}$ (x-axis). 
y-axis: AUC of Dice index (\\%) of \\textbf{5 queries for one trial}.\n Best performance in red dot: $\\lambda=0.001, AUC=66.94\\%$.\n }\n \\label{fig:abl-cub-lambda}\n\\end{figure}\n\n\n\\subsection{Ablation study}\n\\label{sub:ablation}\n\nWe study the impact of $k$ and ${\\lambda}$ on our method. Results are presented in Fig.\\ref{fig:abl-glas-knn}, \\ref{fig:abl-glas-lambda} for GlaS over ${k, \\lambda}$; and in Fig.\\ref{fig:abl-cub-lambda} for CUB over ${\\lambda}$. Due to the expensive computation time required to perform AL experiments, we limited their scope (values of ${k, \\lambda}$, number of trials, and \\texttt{maxr}).\nThe results of this study show that our method is relatively insensitive to ${k}$ (standard deviation of ${0.59}$ in Fig.\\ref{fig:abl-glas-knn}).\nOn the other hand, the method is sensitive to ${\\lambda}$, as expected for penalty-based methods \\citep{bertsekas1999nonlinear}.\nHowever, the method seems more sensitive to ${\\lambda}$ in the case of CUB than GlaS. The CUB dataset is more challenging, leading to potentially more erroneous pseudo-annotations. Using a large ${\\lambda}$ systematically pushes the model to learn from wrong annotations (Fig.\\ref{fig:abl-cub-lambda}), which leads to poor results. On GlaS, in contrast, good segmentations are easier to obtain, and large values of ${\\lambda}$ did not degrade the performance as quickly (Fig.\\ref{fig:abl-glas-lambda}).\nOverall, these results recommend using small values of ${\\lambda}$, which lead to better and more stable performance. High values, combined with pseudo-annotation errors, push the network to learn erroneous annotations, leading to overall poor performance.\n\n\n\n\n\n\\subsection{Similarity measure}\n\\label{subsec:similarity}\nIn this section, we present some samples with their nearest neighbors, although it is difficult to quantitatively evaluate the quality of such a measure.\nFig.\\ref{fig:glas-sim} shows the case of GlaS. Overall, the similarity captures the general stain of the image, which is what was intended, since the structure of such histology images is subject to high variation. Since stain variation is one of the challenging aspects of histology images \\citep{rony2019weak-loc-histo-survey}, labeling a sample with a common stain can help the model in segmenting other samples with a similar stain.\nThe case of CUB, presented in Fig.\\ref{fig:cub-sim}, is more difficult to judge, since the images of a class always contain the same species within its natural habitat. Often, the similarity succeeds in capturing the overall color and background, which can help segmenting both the object and the background in the neighboring samples. In some cases, the similarity captures samples with a large zoom-in, where the bird color dominates the image.\n\n\n\\begin{figure*}[hbt!]\n\\centering\n \\centering\n \\includegraphics[width=1.\\linewidth]{glas}\n \\caption{Examples of ${k}$-nn over the GlaS dataset. The images represent the 10 nearest images to the first image on the far left, ordered from the nearest.}\n \\label{fig:glas-sim}\n\\end{figure*}\n\n\n\\begin{figure*}[hbt!]\n\\centering\n \\centering\n \\includegraphics[width=0.9\\linewidth]{Caltech-UCSD-Birds-200-2011}\n \\caption{Examples of ${k}$-nn over the CUB dataset. 
The images represent the 10 nearest images to the first image on the far left, ordered from the nearest.}\n \\label{fig:cub-sim}\n\\end{figure*}\n\n\n\\subsection{Predicted mask visualization}\n\\label{subsec:mask-vis}\nFig.\\ref{fig:cub-results-visu} shows several test examples of the masks predicted by the different methods over the CUB test set at the first AL round (${r=1}$), where only one sample per class has been labeled by the oracle. This interesting operating point shows that, by labeling only one sample per class, the average Dice index can go from ${39.08 \\pm 08}$ for the WSL method up to ${62.58 \\pm 2.15}$ for Label\\_prop and other AL methods. The figure shows that WSL tends to spot a small part of the object in addition to the background, leading to high false-positive rates. Using little supervision in combination with the proposed architecture, better segmentation is achieved, spotting a larger part of the object with less confusion with the background.\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{sample-test-Caltech-UCSD-Birds-200-2011} \\\\\n \\includegraphics[width=0.95\\linewidth]{code}\n \\caption{Qualitative results (on several CUB test images) of the predicted binary mask for each method after being trained in the first round ${r=1}$ (\\emph{i.e.}\\@\\xspace after labeling 1 sample per class) using seed=0. The average Dice index over the test set of each method is: ${40.16\\%}$ (WSL), ${55.32\\%}$ (Random), ${55.41\\%}$ (Entropy), ${55.52\\%}$ (MC\\_Dropout), ${59.00\\%}$ (Label\\_prop), and ${75.29\\%}$ (Full\\_sup). (Best visualized in color.)}\n \\label{fig:cub-results-visu}\n\\end{figure*}\n\n\\FloatBarrier\n\n\n\n\n\n\n\n\\bibliographystyle{apalike}\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\n\\section{Introduction}\n\\label{sec1}\nStar formation in galaxies is maintained through the continued accretion of gas from the intergalactic medium. In turn, the chemical properties of the interstellar medium within galaxies are modulated by the processes of stellar evolution, stellar feedback, and by the abundances of elements within the accreted material. Studies of large samples of galaxies have demonstrated correlations between the gas-phase oxygen abundance in the interstellar medium of galaxies and the total stellar mass \\citep{Tremonti2004}, star formation rate \\citep{Mannucci2010} and gravitational potential \\citep{D'Eugenio2018}. \n\nThese empirical correlations between global galaxy properties and metallicity hint at three processes that can influence the metal content of galaxies. With the buildup of stellar mass through star formation, heavy elements are synthesised in the cores of stars and deposited in the interstellar medium through stellar winds and the terminal phases of the stellar life cycle \\citep[see e.g.][]{Maiolino2019}. Assuming that this enriched material mixes efficiently into the ISM, the metallicity will then depend on the metal yield from the stars and the amount of gas into which these metals are dispersed. In other words, the metallicity depends on the degree to which the nucleosynthetic products of stellar evolution are diluted in the ISM \\citep[e.g.][]{Larson1972,Moran2012,Bothwell2013}. Meanwhile, winds driven by the energy or momentum injected into the ISM through stellar feedback have been shown to remove enriched material from galaxies \\citep{Heckman1990}. 
The magnitude of the effect of metal loss due to outflows on setting the metallicity of galaxies is not known precisely. This is determined by both the mass loading factor, $\\lambda$, which quantifies the total mass lost for a given rate of star formation, and the metal loading factor, which describes the metal content of the outflow. Recent work by \\cite{Chisholm2018} found that the metallicity of outflowing material is independent of the stellar mass of a galaxy, and therefore of the ISM metallicity. However, it is often assumed that the expelled gas has the same metallicity as the ISM \\citep{Erb2008,Finlator2008}.\n\nThe tradeoff between these different galactic processes has been captured by a number of different attempts to model the chemical evolution of galaxies \\citep[e.g.][]{Finlator2008,Peeples2011,Lilly2013}. The \\cite{Lilly2013} model makes the simplifying assumption that the evolution of a galaxy's star formation and metallicity is determined almost entirely by the total gas content. Making simple assumptions about the flow of gas into and out of galaxies, these models are able to reproduce the global mass-metallicity relation, along with the star formation rate--stellar mass relation.\n\nGiven the success of analytic models in describing the global properties of galaxies, there has been recent effort to extend the global scaling relations to kpc scales within galaxies \\citep[e.g.][]{RosalesOrtega2012,BarreraBallesteros2016,CanoDiaz2016,Medling2018}. These studies have been enabled by the introduction of large-scale spatially-resolved spectroscopic surveys such as the Calar Alto Legacy Integral Field Area Survey \\citep[CALIFA;][]{Sanchez2012}, the Sydney-AAO Multi-object Integral Field Spectrograph \\citep[SAMI;][]{Croom2012,Bryant2015} and Mapping Nearby Galaxies at Apache Point Observatory \\citep[MaNGA;][]{Bundy2015}. \\cite{CanoDiaz2016} found that the gas-phase metallicity in kpc-sized regions of galaxies is tightly correlated with the stellar mass surface density, $\\Sigma_{*}$. However, \\cite{BarreraBallesteros2016} showed that this correlation is also dependent on the total stellar mass of the galaxy. That is, at fixed $\\Sigma_{*}$, the gas-phase metallicity is correlated with the integrated stellar mass. In direct analogy to the argument connecting the global MZR to the increasing depth of the gravitational potential well in more massive galaxies, \\cite{BarreraBallesteros2018} showed that the local metallicity also correlates with the local escape velocity. This may explain the connection of the local $\\Sigma_{*}$-metallicity relation to the integrated stellar mass.\n\n\nThe existence of local scaling relations is an indicator that some kind of regulatory process is at play, and that processes occurring on local scales may be able to explain the results seen in single-fibre spectroscopic surveys \\citep{BarreraBallesteros2016}. Indeed, work by \\cite{Carton2015} and \\cite{BarreraBallesteros2018} found that the \\cite{Lilly2013} gas regulator model is able to fit the metallicity given a reasonable estimate of the local gas fraction and mass loading factors.\n\nThe ability of gas-regulated models to achieve an equilibrium is in part determined by the rate at which gas is accreted and the metallicity of that gas. Historically, it was often assumed that the gas fueling star formation is accreted in a chemically pristine state \\citep[e.g.][]{Larson1972,Quirk1973,Finlator2008,Mannucci2010}. 
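To make the interplay between these terms concrete, a minimal sketch of a gas-regulator model of the kind discussed above can be written as follows; the notation and exact form vary between authors, and the expressions below are illustrative rather than a reproduction of any specific published model:\n\\begin{equation}\n\\frac{\\mathrm{d}M_{\\mathrm{gas}}}{\\mathrm{d}t} = \\Phi_{\\mathrm{in}} - (1-R)\\,\\mathrm{SFR} - \\lambda\\,\\mathrm{SFR},\n\\end{equation}\n\\begin{equation}\n\\frac{\\mathrm{d}\\left(Z\\,M_{\\mathrm{gas}}\\right)}{\\mathrm{d}t} = Z_{\\mathrm{in}}\\,\\Phi_{\\mathrm{in}} + y\\,\\mathrm{SFR} - Z\\,(1-R)\\,\\mathrm{SFR} - Z\\,\\lambda\\,\\mathrm{SFR},\n\\end{equation}\nwhere $\\Phi_{\\mathrm{in}}$ is the gas inflow rate, $Z_{\\mathrm{in}}$ the metallicity of the inflowing gas, $R$ the fraction of stellar mass returned to the ISM, $y$ the nucleosynthetic yield, and $\\lambda$ the mass loading factor defined above; the last term assumes that the outflowing gas carries the ISM metallicity, as noted above. In this picture, the assumption of chemically pristine accretion corresponds to setting $Z_{\\mathrm{in}}=0$.\n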
However, there is mounting evidence that this is not the case \\citep{Oppenheimer2010,Rubin2012,Brook2014,Kacprzak2016,Gupta2018}. While work such as that done by \\cite{Peng2014} relies on the inference of the properties of gas in the intergalactic medium from modeling, there is a growing body of observations that directly probe the metallicity of gas outside of galaxies by measuring the absorption of background quasar light by extragalactic clouds \\citep{Lehner2013,Prochaska2017}. These clouds are observed to be cool and metal rich, and are expected to be accreted onto their host galaxies in the future. Indeed, hydrodynamic simulations \\citep{Oppenheimer2010} have shown that below $z \\sim 1$ the dominant source of accreted gas onto galaxies is material that was previously ejected.\n\nIn addition to internal processes such as star formation and the expulsion of gas by feedback, there are a number of ways in which the gas content of a galaxy could be diminished by external environmental processes. One such process is starvation \\citep{Larson1980}, which can occur when a galaxy enters a dense environment and the acquisition of new gas is prevented. In this instance, the existing gas reservoir is consumed by star formation over several Gyr and the metallicity of the ISM increases. Starvation can also be initiated through the heating of gases in the intergalactic medium by galactic feedback \\citep[e.g.][]{Fielding2017}, or by the shock heating of accreted material \\citep[e.g.][]{Birnboim2003}. Alternatively, a kinetic interaction between the interstellar medium of a galaxy and the intergalactic medium can result in the ram pressure stripping of gas from a galaxy \\citep{Gunn1972}. Occurring on relatively short ($<100 \\, \\mathrm{Myr}$) timescales, ram pressure stripping is not expected to alter the chemical abundances in a galaxy before it is fully quenched. \n\nStudies of the environmental effect on galaxy metallicities generally find a small but significant dependence. For example, \\cite{Cooper2008} find that approximately $15 \\%$ of the scatter in the mass-metallicity relation is attributable to an increase in the metallicity of galaxies in high-density environments. Observationally, there is consensus that the environment has the largest effect on low-mass galaxies \\citep{Pasquali2012,Peng2014,Wu2017}. However, the interpretation of this fact is contentious. \\cite{Wu2017} attribute the elevated metallicity at fixed stellar mass in regions of greater local galaxy overdensity to a reduction in the gas accretion rate. However, \\cite{Peng2014} showed that even when the star formation rates of satellite galaxies in different environments are kept constant, implying no change in the total accretion rate, the metallicity offset is still evident. Their observations and modeling led them to the conclusion that satellite galaxies in dense environments must acquire more enriched gas from their surroundings.\n\nIn this paper we make use of the broad range of galaxy environments and stellar masses covered by the MaNGA survey to explore how the local metallicity scaling relations are affected by the environment for satellite and central galaxies. 
With MaNGA's wide wavelength coverage and spatial resolution we are able to estimate local gas-phase metallicities, gas mass fractions and escape velocities to compare the observations to a model for the gas regulation of the metallicity, and from this modeling infer environmental trends for the metallicity of gas inflows.\n\nIn Section \\ref{Methods}, we present our data and analysis techniques. In Section \\ref{Results} we investigate how galaxy environments change the local metallicity scaling relations, and in Section \\ref{Discussion}, we discuss our observations in the context of the \\cite{Lilly2013} gas regulator model. Throughout this paper we assume a flat $\\Lambda$CDM cosmology, with $H_{0}=70 \\, \\mathrm{km \\, s^{-1} \\, Mpc^{-1}}$, $\\Omega_{\\Lambda} = 0.7$ and $\\Omega_{m} = 0.3$. Unless otherwise stated we assume a \\cite{Chabrier2003} stellar initial mass function. We will make use of two oxygen abundance indicators, which each have different absolute abundance scales. The O3N2 indicator \\citep{Pettini2004} assumes $12+\\log(\\mathrm{O\/H})_{\\odot} = 8.69$ and the N2S2H$\\alpha$ indicator \\citep{Dopita2016} assumes $12+\\log(\\mathrm{O\/H})_{\\odot} = 8.77$.\n\n\n\\section{Methods}\\label{Methods}\nTo investigate the trends of the spatially-resolved metal distribution in galaxies with environment, we make use of the $8$th MaNGA product launch (MPL-8) internal data release, which is similar to the SDSS DR15 \\citep{Aguado2018}, but includes data from $6507$ unique galaxies. In this Section we describe the data, sample selection and methods used to perform our analysis on these data.\n\n\\subsection{MaNGA data}\nThe MaNGA survey is the largest optical integral field spectroscopic survey of galaxies to date. Run on the $2.5 \\, \\mathrm{m}$ SDSS telescope \\citep{Gunn2006} at Apache Point Observatory, the MaNGA survey aims to observe approximately $10,000$ galaxies. This sample comprises a primary and a secondary subsample that were selected to have approximately flat distributions in the $i$-band absolute magnitude, $M_{i}$. These provide coverage of galaxies out to $1.5$ and $2.5 \\, R_{e}$ respectively \\citep{Wake2017} and median physical resolutions of $1.37$ and $2.5 \\, \\mathrm{kpc}$.\n\nObservations of each galaxy are made with one of $17$ hexagonal optical fibre hexabundles, each comprising between $19$ and $127$ $2\\arcsec$ optical fibres, subtending between $12\\arcsec$ and $32\\arcsec$ on the sky. The fibre faces fill the bundle with $56\\%$ efficiency, so each target is observed with a three-point dither pattern with $15$-minute exposures per pointing. This pattern of observations is repeated until a median S\/N of $20 \\, \\mathrm{fibre^{-1} \\, pixel^{-1}}$ is achieved in the $g$-band, which is typically $2-3$ hours in total \\citep{Law2015,Yan2016}. Light from each hexabundle is taken from the fibres to the BOSS spectrograph \\citep{Smee2013} where it is split by a dichroic at $\\sim 6000 \\, \\mathrm{\\AA}$ into red and blue channels, then dispersed at $R \\approx 2000$. The resulting spectra are then mapped onto a regular grid with $0.5\\arcsec$ square spaxels, with continuous wavelength coverage between $3600 \\, \\mathrm{\\AA}$ and $10300 \\, \\mathrm{\\AA}$. 
For an in-depth discussion of the MaNGA data reduction pipeline, see \\cite{Law2016}.\n\nThe reduced data are analysed by the MaNGA Data Analysis Pipeline \\citep[\\DAP;][]{Belfiore2019,Westfall2019}, which extracts stellar kinematics, measures the strengths of continuum features, and extracts emission line fluxes, equivalent widths and kinematics for each galaxy. For this work, we make use of the emission line fluxes from the \\DAP's hybrid binning scheme. In this scheme, the data cubes are Voronoi binned \\citep{Cappellari2003} to a S\/N of at least $10$ in the continuum. Within each of these bins, {\\tt pPXF} ~\\citep{Cappellari2004} is used to fit an optimal continuum template which is made up of a linear combination of hierarchically-clustered templates from the MILES stellar library \\citep{SanchezBlazquez2006,FalconBarroso2011} as well as an $8$th degree multiplicative Legendre polynomial. This optimal continuum template constrains the stellar populations within the Voronoi bin, and is fitted by {\\tt pPXF} ~in conjunction with a set of Gaussian emission line templates to each individual spaxel in the bin. For a full description of the \\DAP ~fitting process, see \\cite{Westfall2019}, and for a discussion of the robustness of the emission line measurements, see \\cite{Belfiore2019}.\n\n\\subsection{Sample Selection}\\label{sample_selection}\nGalaxies in the MaNGA survey are selected such that the full sample has a roughly flat distribution of stellar masses. However, this parent sample contains galaxies with a wide range of morphologies and star formation rates. Our goal with this work is to make spatially-resolved measurements of the properties of the gas in galaxies as a function of the local environment. The metallicity indicators mentioned in Section \\ref{Calibrators} are only calibrated for HII regions, and therefore cannot be applied to a fraction of the spaxels in MaNGA. To make this determination, we compare the $\\mathrm{[NII]\\lambda 6584 \/ H\\alpha}$ and $\\mathrm{[OIII]\\lambda 5007 \/ H\\beta}$ emission line ratios on a \\cite{Baldwin1981} (BPT) diagram. Only spaxels with emission line ratios that satisfy both the \\cite{Kauffmann2003} and \\cite{Kewley2001} criteria for excitation of the gas by a young stellar population are included in our analysis. We further exclude spaxels for which the S\/N ratio in the emission lines used for the metallicity and determination of star formation are less than $3$. \\cite{Belfiore2019} showed that the fluxes of lines above this threshold in MaNGA are relatively robust to systematic effects. From these constraints we calculate the fraction of spaxels for a data cube for which we are able to reliably determine a metallicity. In computing this fraction, we include only spaxels where the $g$-band flux from the data cube is detected at a S\/N of $2$ or greater. This condition is imposed so that galaxies that do not fully fill the IFU field of view are not unduly excluded. Galaxies for which the fraction of spaxels with a measurable metallicity is larger than $60\\%$ are retained for our analysis.\n\nWe make a further cut on the galaxies in our sample based on our ability to robustly measure their metallicity gradients. Galaxies with an elliptical minor to major axis ratio ($b\/a$) less than $0.6$, as determined by the NASA-Sloan Atlas \\citep[NSA;][]{Blanton2011} Elliptical Petrosian photometry, were excluded to give a sample of face-on galaxies. We made a further restriction on the measured $r$-band effective radius, $R_{e}$. 
Galaxies with $R_{e}<4\\arcsec$ are also rejected. These criteria are motivated by the analysis performed by \\cite{Belfiore2017}, who showed that beam-smearing effects are non-negligible for highly inclined systems, or for galaxies that are small relative to the MaNGA point spread function. Similarly, \\cite{Pellegrini2019} used a set of realistic simulations to show that light-weighted quantities, such as dust attenuation, are systematically overestimated when observed in highly inclined systems.\n\n\nOur final sample consists of $1008 \\,$ galaxies, with stellar masses in the range $7.8< \\log(M_{*}\/M_{\\odot})<11.4$. For the majority of our analysis we restrict the stellar masses considered to $9< \\log(M_{*}\/M_{\\odot})<11$, and in this range our sample comprises $967$ galaxies. The distribution of galaxies in our sample the colour-mass plane is shown in Figure \\ref{M_star_cmd}.\n\n\\begin{figure}\n\\includegraphics{M_star_cmd.pdf}\n\\caption{In panel $a)$ we show the distribution of stellar mass, $\\mathrm{M_{*}}$ for the input sample (\\textit{grey}) and for the final sample (\\textit{red}). The attrition of sources occurs preferentially at high stellar mass, which is consistent with the rising fraction of passive galaxies. In $b)$ the positions of galaxies in the input sample (\\textit{grey}) and final sample (\\textit{red}) on the $u-r$ colour-mass diagram is shown. Galaxies that satisfy our selection criteria are predominantly in the blue cloud and forming stars.}\\label{M_star_cmd}\n\\end{figure}\n\n\\subsection{Gas-phase metallicities}\\label{Calibrators}\nTo probe the chemical evolution of the galaxies in our sample, we use the \\DAP ~emission line measurements to estimate the gas-phase oxygen abundances in these systems. For convenience we will use the terms `gas-phase oxygen abundance' and `metallicity' interchangeably throughout this work.\nThe estimation of gas-phase oxygen abundances with optical spectroscopy is often achieved by measuring a set of emission line ratios that vary with the conditions of the gas. The metallicity can be calculated by comparing the measured line ratios to theoretical models for HII regions \\citep[e.g.][]{Blanc2015}. Alternatively it can be estimated by using relationships that are empirically calibrated by comparing these line ratios to the spectra of HII regions for which the metallicity has been measured directly using temperature-sensitive emission line ratios, such as $\\mathrm{[OIII] \\lambda 4363 \/ [OIII] \\lambda 5007}$. While it is generally accepted that the direct method of the oxygen abundance determination is more robust than theoretical modeling or using empirical calibrations, it is generally not possible with datasets such as MaNGA as the $\\mathrm{[OIII] \\lambda 4363}$ line is typically $\\sim 100$ times fainter than the $\\mathrm{[OIII] \\lambda 5007}$ line.\n\nMany of the empirical calibrations suffer from systematics, being biased either high or low due to variations in the ionisation parameter in the gas, or contamination from diffuse ionised gas or light from an AGN. To account for this fact, we will make use of two different metallicity calibrations. \n\\subsubsection{O3N2}\nFor consistency with \\cite{BarreraBallesteros2018}, we use the \\cite{Pettini2004} $O3N2$ oxygen abundance diagnostic. 
This method makes use of the $\\mathrm{[OIII] \\lambda5007\/H\\beta}$ and $\\mathrm{[NII]\\lambda6584 \/ H\\alpha}$ emission line ratios, and was calibrated against a set of $137$ extragalactic HII regions for which a metallicity had been determined either by the direct $T_{e}$ method or by photoionisation modeling of their spectra. Taking $O3N2 = \\mathrm{\\log([OIII]\\lambda 5007 \/ H\\beta) - \\log([NII]\\lambda6584 \/ H\\alpha)}$, the metallicity of an HII region can be calculated as\n\\begin{equation}\n12+\\log(\\mathrm{O\/H}) = 8.73 - 0.32 \\times O3N2,\n\\end{equation} \nover the range $8.1 < 12+\\log(\\mathrm{O\/H}) < 9.05$. It should be noted that this calibration suffers from some degeneracy with variation in the ionisation parameter within the gas. For this reason it cannot be used in spectra that include significant contamination from diffuse ionised gas or active galactic nuclei, and may also be biased by variations in $q$ between star-forming regions within a galaxy \\citep[see e.g.][]{Poetrodjojo2018}.\n\n\\subsubsection{N2S2H$\\alpha$}\nAn alternative method for calculating the metallicity of HII regions based on the relative intensities of the $\\mathrm{[NII]\\lambda 6584}$, $\\mathrm{[SII]\\lambda 6717,6731}$ and H$\\alpha$ lines was presented by \\cite{Dopita2016}. Assuming a simple relationship between N\/O and O\/H, and modeling theoretical HII regions with a variety of gas pressures and ionisation parameters using the {\\tt MAPPINGS 5.0} software, they found\n\\begin{equation}\n\\begin{aligned}\n12+\\mathrm{\\log(O\/H)} = & 8.77 + \\log(\\mathrm{[NII]\\lambda6584 \/[SII]\\lambda6717,6731}) \\\\\n\t& + 0.264 \\log(\\mathrm{[NII]\\lambda6584 \/H\\alpha}),\n\\end{aligned}\n\\end{equation}\nwhich they showed has very little dependence on the ionisation parameter and is valid over the range $8.0 \\lesssim 12 + \\log(\\mathrm{O\/H}) < 9.05$. There is some evidence that the relationship between N\/O and O\/H varies with the total stellar mass of a galaxy \\citep{Belfiore2017}, but this should be negligible if analysis is carried out within narrow bins of stellar mass.\n\n\nThe two metallicity calibrations outlined above have different absolute abundance scalings. While it is not possible to directly compare the metallicities of galaxies derived with different calibrations, relative differences between two measurements made with the same indicator are likely to reflect real differences in the chemical composition of the galaxies in question.\n\n\n\\subsection{Gas density}\nFollowing the methodology of \\cite{BarreraBallesteros2018} we estimate the local neutral gas surface density from the dust attenuation derived from the observed Balmer line ratios. Under the assumption of a fixed gas-to-dust ratio, \\cite{BarreraBallesteros2018} utilised the observation that the gas surface density is related to the V-band attenuation via $\\Sigma_{gas} = 30 \\times \\mathrm{A_{V}} \\, \\mathrm{M_{\\odot} \\, pc^{-2}}$. We apply a small correction to this to account for the variation in the dust-to-gas ratio with the gas phase metallicity using the relation given by \\cite{Wuyts2011},\n\\begin{equation}\n\\Sigma_{gas} = 30 \\times A_{V} \\times \\left( \\frac{Z}{Z_{\\odot}} \\right)^{-1} \\, \\mathrm{M_{\\odot} \\, pc^{-2}},\n\\end{equation}\nwhere $Z$ is the local gas-phase metallicity.\n\nWe have rescaled the estimated stellar masses from a \\cite{Kroupa2001} to a \\cite{Chabrier2003} initial mass function by dividing by $1.06$ \\citep{Zahid2012}. 
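Before moving on to the surface densities, we note for reference that the two abundance diagnostics above and the gas surface density estimate can be written compactly in code. The following is a minimal illustrative sketch only: the function names are ours, and the intrinsic Balmer decrement of $2.86$ and the attenuation-curve coefficients used to convert the Balmer decrement into $\\mathrm{A_{V}}$ are assumptions that are not specified in the text.
\\begin{verbatim}
import numpy as np

def o3n2_metallicity(oiii5007, hbeta, nii6584, halpha):
    # Pettini & Pagel (2004) O3N2 calibration,
    # valid for 8.1 < 12+log(O/H) < 9.05.
    o3n2 = np.log10(oiii5007 / hbeta) - np.log10(nii6584 / halpha)
    return 8.73 - 0.32 * o3n2

def n2s2ha_metallicity(nii6584, sii6717_31, halpha):
    # Dopita et al. (2016) N2S2Halpha calibration.
    return (8.77 + np.log10(nii6584 / sii6717_31)
            + 0.264 * np.log10(nii6584 / halpha))

def gas_surface_density(halpha, hbeta, logoh, logoh_sun=8.69,
                        k_hb=3.61, k_ha=2.53, r_v=3.1):
    # Sigma_gas = 30 x A_V x (Z/Zsun)^-1 in Msun pc^-2.
    # k_hb, k_ha and r_v are placeholder attenuation-curve values;
    # the adopted extinction curve is not specified here.
    ebv = 2.5 / (k_hb - k_ha) * np.log10((halpha / hbeta) / 2.86)
    a_v = r_v * ebv
    return 30.0 * a_v * 10.0 ** (logoh_sun - logoh)
\\end{verbatim}
The solar abundance passed to the gas surface density helper should match the indicator in use ($8.69$ for O3N2 and $8.77$ for N2S2H$\\alpha$).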
To calculate stellar mass surface density ($\\Sigma_{*}$) in a spaxel, we take the rescaled stellar masses and divide them by the projected area of a $0\\farcs 5$ square spaxel at the systemic redshift of the host galaxy. A small correction for inclination of the galaxy is applied by multiplying these surface densities by the elliptical Petrosian minor-to-major axis ratio.\n\n\n\n\\subsection{Estimating the local escape velocity}\nWe estimate the local escape velocity from the halo assuming a spherically-symmetric dark matter halo that is described by an NFW profile \\citep{Navarro1997}, using the same procedure as \\cite{BarreraBallesteros2018}. This method assumes that the star-forming gas is confined to a thin disk coplanar with the optical disk, and is in a circular orbit around the centre of the galaxy. We extract a rotation curve by taking the maximum and minimum measured line-of-sight velocity within a $30^{\\circ}$ wedge along the photometric major axis of the galaxy, similarly to \\cite{BarreraBallesteros2014}. This rotation curve is corrected for the galaxy's inclination to our line of sight using the $r$-band elliptical Petrosian major to minor axis ratio. We fit the resulting rotation curve using the \\cite{Bohm2004} parametrisation,\n\n\\begin{equation}\nV(r_{depro}) = V_{max} \\frac{r_{depro}}{\\left(R_{turn}^{\\alpha} + r_{depro}^{\\alpha} \\right)^{1\/\\alpha}},\n\\end{equation}\nwhere $V_{max}$ is the maximum velocity of rotation, $r_{depro}$ is the deprojected radius, $R_{turn}$ is the radius at which the rotation curve flattens, and $\\alpha$ is a parameter that determines the shape of the rotation curve. This formulation is a special case of the phenomenological model presented by \\cite{Courteau1997}. The fitted parameters $V_{max}$ and $R_{turn}$ are then used to derive the local escape velocity using the following formula:\n\\begin{equation}\n V_{esc}^{2} = \n\t\\begin{cases}\n\t\tV_{esc,in}^{2} + V_{esc,out}^{2} & r_{depro} < R_{turn} \\\\\n\t\tV_{esc,out}^{2} & r_{depro} > R_{turn}\n\t\\end{cases} \n\\end{equation}\nwhere\n\\begin{equation*}\nV_{esc,in}^{2} = \\left(V_{max}\/R_{turn}\\right)^{2} \\left(R_{turn} - r_{depro} \\right)^{2}\n\\end{equation*}\nand\n\\begin{equation*}\nV_{esc,out}^{2} = 2V_{max}^{2} \\ln \\left( R_{vir}\/r_{depro} \\right) + 2V_{max}^2.\n\\end{equation*}\nIn this relation, $R_{vir}$ is the virial radius of the galaxy's halo, which we obtain by estimating the galaxy's total halo mass from its stellar mass using the relation derived by \\cite{Behroozi2010}. This computation of the local escape velocity assumes the galaxy potential to be spherically symmetric. \\cite{BarreraBallesteros2018} tested a more complicated two-component model that includes a contribution to the gravitational field from the baryons in the galaxy disk, and found that this causes a deviation in the estimated escape velocity of only $\\sim 5\\%$ from the simpler spherical case.\n\n\n\n\\subsection{Environment}\nThe aim of our current work is to explore how the local environments that galaxies are in today impact their chemical evolution. While there are many different ways of characterising environment, each capable of tracing a variety of different physical processes that can occur during a galaxy's lifetime, we will use the satellite\/central classification of our sample as the primary metric for environment. This has been shown to be a good predictor of the star-forming properties of galaxies at fixed stellar mass \\citep[e.g.][]{Peng2012}. 
We make use of the \\cite{Tempel2017} catalogue, which uses a friends-of-friends algorithm to provide estimates of group membership, group richness, and dark matter halo mass for galaxies in SDSS DR12 \\citep{Eisenstein2011, Alam2015}. Of the $6507\\,$ galaxies in MPL-8, $5333 \\,$ were associated with groups in the \\cite{Tempel2017} catalogue, of which $3447$ are identified as the centrals of their halo and $1886$ are satellites. This sample comprises a wide range of group masses, $M_{200}$, which is the mass contained within $R_{200}$ of the group centre, the radius at which the density of an NFW profile drops to $200$ times the average density of the Universe. Groups within this catalogue contain as few as two galaxies and up to $254$ members for the most massive halo. We show the distribution of halo masses for our final sample in Figure \\ref{M_200_dist}.\n\\begin{figure}\n\\includegraphics{M_200_dist.pdf}\n\\caption{The distribution of group halo mass, $M_{200}$ for the input sample (\\textit{grey}) and for the final sample (\\textit{red}). The loss of sources from the input sample at higher halo mass is more severe than in low mass halos due to the higher fraction of passive galaxies in the most dense environments. }\\label{M_200_dist}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics{RP_vs_mass.pdf}\n\\end{center}\n\\caption{The median metallicity radial profile for galaxies in the stellar mass ranges indicated by the legend at the top of panel $a)$. The solid curves represent the median profiles, while the shaded regions of the same colour represent the $1\\sigma$ error range on the median. In panel $a)$ we show the metallicity median profiles made using the \\cite{Pettini2004} O3N2 indicator and in panel $b)$ we show the results from the \\cite{Dopita2016} $\\mathrm{N2S2H\\alpha}$ indicator. Note that each metallicity indicator has a different abundance scaling on the y-axis, and in each panel we mark the assumed solar abundance with a grey dashed line.}\\label{RP_vs_mass}\n\\end{figure*}\n\n\\subsection{Calculating metallicity radial profiles}\nTo characterise the radial dependence of metallicity in the galaxies in our sample, we construct the radial profile in the following way. In each galaxy, the deprojected distance from the centre of the galaxy has been calculated by the \\DAP based on the $r$-band surface brightness distribution in the SDSS imaging, and assuming each galaxy to be a tilted thin disk. We measure the metallicity as a function of radius and take the average value for spaxels in $0.5\\arcsec$-wide bins. The metallicity measurements are included only if they are classified as star-forming on the BPT diagram, if they have $\\mathrm{S\/N} >3$ in all emission lines utilised, and if the $\\mathrm{H\\alpha}$ equivalent width is greater than $6 \\, \\mathrm{\\AA}$ in emission. This $\\mathrm{H}\\alpha$ equivalent width criterion is consistent with that chosen by \\cite{BarreraBallesteros2018} to minimize contamination of the emission line fluxes from diffuse ionised gas.\n\nTo understand the behaviour of an ensemble of galaxies we measure what we will call the `median profile' for the metallicity. For this, we take the median of the individual radial profiles within radial bins that are $0.2 \\, R_{e}$ wide. Once the median has been calculated we perform a bootstrap resampling of the galaxies, recalculating the median profile for $1000$ realisations of the sample. 
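A minimal sketch of this bootstrap is given below; it assumes that the individual radial profiles have already been interpolated onto a common grid of $0.2 \\, R_{e}$ bins, and the function and variable names are our own illustrative choices rather than anything used in the analysis itself.
\\begin{verbatim}
import numpy as np

def median_profile_with_bootstrap(profiles, n_boot=1000, seed=0):
    # profiles: (n_galaxies, n_radial_bins) array of individual
    # metallicity profiles on a common radial grid, NaN where undefined.
    rng = np.random.default_rng(seed)
    median = np.nanmedian(profiles, axis=0)
    boot = np.empty((n_boot, profiles.shape[1]))
    for i in range(n_boot):
        resample = rng.integers(0, profiles.shape[0], profiles.shape[0])
        boot[i] = np.nanmedian(profiles[resample], axis=0)
    # Return the median profile and the per-radius scatter of the
    # bootstrapped medians.
    return median, boot.std(axis=0)
\\end{verbatim}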
At each radius, the uncertainty on the sample median is estimated to be the standard deviation of the bootstrapped median profiles. \n\n\\section{Results}\\label{Results}\n\n\n\n\n\\subsection{Metallicity profiles}\nFollowing \\cite{Belfiore2017} we calculate the median metallicity profiles in narrow bins of stellar mass using two different oxygen abundance indicators. The median profiles for all galaxies in our sample are shown in Figure \\ref{RP_vs_mass}. In panel $a)$ of this figure, we show the \\cite{Pettini2004} O3N2 oxygen abundance median profiles in $0.5 \\, \\mathrm{dex}$-wide bins of stellar mass. The profiles shown here are consistent with those shown in Figure 3 of \\cite{Belfiore2017}, in particular with the steepening of the metallicity gradient at higher stellar mass. Panel $b)$ shows the profiles for the same galaxies derived using the \\cite{Dopita2016} $\\mathrm{N2S2H\\alpha}$ indicator. This indicator shows qualitatively different results to O3N2 in the centres of galaxies above $\\log(M\/M_{\\odot}) \\sim 10.25$. While the $\\mathrm{N2S2H\\alpha}$ metallicities both continue to rise in the centres of massive galaxies, the O3N2 indicator shows a flattening. The origin of the mismatch in the behaviour of the metallicity profiles in the centres of massive galaxies between different abundance calibrations may be due to the fact that the \\cite{Dopita2016} metallicity calibration is strongly tied to the N\/O ratio. \\cite{Belfiore2017} showed N\/O to increase towards the centres of massive galaxies, while O\/H does not.\n\nAgain, we caution that the interpretation of metallicity radial profiles from datasets with kpc-scale physical resolution such as MaNGA is subject to the flattening of gradients by the observational point spread function \\citep{Yuan2013,Mast2014}. \\cite{Carton2017} presented a method to account for this effect, however they reported that it was not always robust in the presence of clumpy star formation distributions. Our galaxy size and inclination selection criteria that were outlined in Section \\ref{sample_selection} should mitigate the most severe resolution effects \\citep{Belfiore2017}, but we note that the most accurate determinations of metallicity gradients require observations with finer spatial resolution. The core conclusions of this work, particularly those based on local scaling relations, will be only minimally affected by this issue. \n\n\n\\subsubsection{Satellites vs. centrals}\nTo investigate the environmental dependence of the radial distribution of the oxygen abundance, we split our sample into satellites and centrals, then re-calculate the median profiles with each metallicity indicator. We show these radial profiles in Figure \\ref{sat_cen_rp_all}. At fixed stellar mass there are minor qualitative and quantitative differences between the radial distributions of the oxygen abundance. At all radii, the absolute differences in the median metallicity at fixed stellar mass are less than $\\sim 0.05 \\, \\mathrm{dex}$ for O3N2, and less than $\\sim 0.1 \\, \\mathrm{dex}$ for $\\mathrm{N2S2H\\alpha}$. We note that in the highest stellar mass bins, there are very small numbers of star-forming satellites, and their distribution is biased towards the lower stellar masses. For this reason the differences for the most massive galaxies are not robust in this sample. 
At lower stellar masses, however, the stellar mass distributions of satellites and centrals are similar, the sample sizes are larger and a fairer comparison can be made.\n\nSince each metallicity indicator has different systematics and biases, we only deem a difference to be real if it is reflected in both the O3N2 and $\\mathrm{N2S2H\\alpha}$ data. For galaxies in the mass range $9.4 < \\log(M_{*}\/M_{\\odot})<10.2$, there is a systematic offset in the metallicity, with satellite galaxies being more metal rich at all radii sampled, but with no significant difference in the gradient. In the lowest mass range, the O3N2 indicator shows a change in the metallicity gradient, however this is not evident in the $\\mathrm{N2S2H\\alpha}$ metallicity. \n\n\n\\begin{figure*}\n\\includegraphics{sat_cen_rp_all.pdf}\n\\caption{The median metallicity radial profiles in bins of stellar mass split into satellites and centrals. In the upper row, we show the profiles for the O3N2 indicator, while in the lower row the results for $\\mathrm{N2S2H\\alpha}$ are shown. The profiles for central galaxies are shown in the left column, and for the satellite galaxies in the middle column. On the right we show the difference between the satellites and centrals. Satellite galaxies in the range $9.4 < \\log(M_{*}\/M_{\\odot})<10.2$ are systematically more metal rich than centrals of the same mass in both metallicity indicators.}\\label{sat_cen_rp_all}\n\\end{figure*}\n\n\\subsection{Local scaling relations}\\label{local_scaling_relations}\nIn addition to the global scaling relations relating a global metallicity measurement to the integrated stellar mass of a galaxy, there are also local correlations between the stellar mass surface density \\citep{Moran2012,RosalesOrtega2012} and the local gas-phase metallicity. These local scaling relations capture the response of the chemical abundance of gas to processes occuring on local $\\sim \\mathrm{kpc}$ scales. \n\n\\begin{figure*}\n\\includegraphics{sig_mass_met_PP04.pdf}\n\\includegraphics{sig_mass_met_D16.pdf}\n\\caption{The relationship between local stellar mass surface density and metallicity in bins of total stellar mass for satellite and central galaxies. The greyscale background represents the density of data points for central galaxies, while the red contours represent the distribution of data points from satellite galaxies. For clarity, the distribution for satellite galaxies was smoothed to make the contours less subject to noise. Blue points represent the median values of metallicity in the central galaxies at a fixed $\\Sigma_{*}$, and the red points are the medians for satellite galaxies. These are only calculated where there are sufficient data. We include bootstrapped standard errors of the median, but these uncertainties are often smaller than the data points. In the top row we show the results for the \\cite{Pettini2004} O3N2 indicator, while in the bottom row we show the result for the \\cite{Dopita2016} N2S2H$\\alpha$ indicator. The metallicity of satellite galaxies at fixed stellar mass surface density is slightly higher $(\\sim 0.01 \\, \\mathrm{dex})$ than for central galaxies.}\\label{sig_mass_met}\n\\end{figure*}\n\n\\subsubsection{Metallicity and Stellar Density}\nAs more stars form and the stellar surface density ($\\Sigma_{*}$) increases, the amount of enrichment of the ISM also increases. We explore this relation in Figure \\ref{sig_mass_met}, where we show the relationship between $\\Sigma_{*}$ and metallicity for satellite and central galaxies. 
\\cite{Hwang2019} showed that in addition to the relationship between the local $\\Sigma_{*}$ and $12+\\log(\\mathrm{O\/H})$, there is a secondary dependence on the total stellar mass. For this reason we have split our analysis into $0.5 \\, \\mathrm{dex}$-wide bins of integrated $M_{*}$. Using both the O3N2 and N2S2H$\\alpha$, there is a small ($\\sim 0.01 \\, \\mathrm{dex}$) difference between the metallicity at fixed $\\Sigma_{*}$ between satellites and centrals, particularly in the $9.5 < \\log(\\mathrm{M_{*}\/M_{\\odot}})<10$ interval. While the formal uncertainties on the medians indicate that these differences are statistically significant, they are a factor of ten smaller than the standard deviations of the metallicity distributions. We note that for systems of low stellar mass, a satellite galaxy may occupy a group with a wide range of possible halo masses, corresponding to very different environments.\n\nGiven that the largest differences in the metallicity between satellites and centrals occurs at the lowest stellar masses, we show the impact of varying the stellar mass of the central in Figure \\ref{sig_mass_met_cen_mass}. Choosing satellite galaxies between $9<\\log(\\mathrm{M_{*}\/M_{\\odot}})<10$, we find a large systematic offset in metallicity for satellites of more massive central galaxies, corresponding to more massive group halos. Satellite galaxies associated with centrals more massive than $\\log(M_{*}\/\\mathrm{M_{\\odot}}) = 10.5$ have metallicities that are on average $0.08 \\pm 0.009 \\, \\mathrm{dex}$ higher than those galaxies which are satellites of centrals with $\\log(M_{*}\/\\mathrm{M_{\\odot}}) < 10$. To eliminate the possibility that a different distribution of total stellar masses for the targeted galaxies within the central stellar mass bin is responsible for the discrepancy, we perform a two-sample Kolmogorov-Smirnov test \\citep{Smirnov1939}. This test returns a statistic of $D=0.17$ with $p=0.71$, indicating no statistically significant difference in the total stellar masses.\n\n\n\\begin{figure}\n\\includegraphics{sig_mass_met_cen_mass_PP04.pdf}\n\\includegraphics{sig_mass_met_cen_mass_D16.pdf}\n\\caption{The local $\\Sigma_{*}-\\mathrm{O\/H}$ relation for satellite galaxies with $9<\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$ split by the mass of the galaxy which is central to their halo. In the upper panel, we show the results for the PP04 indicator, and in the lower panel we show the results for the D16 indicator. Blue points show the median metallicity at a given $\\Sigma_{*}$ for satellites of low-mass centrals ($\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$). These points trace the median of the grey-shaded distribution. Red points are the median metallicity as a function of $\\Sigma_{*}$ for satellites of high-mass centrals, shown by the red contours. The oxygen abundance is systematically higher for satellites of more massive centrals. }\\label{sig_mass_met_cen_mass}\n\\end{figure}\n\n\\subsubsection{Metallicity and gas fraction}\nModels predict \\citep[e.g.][]{Lilly2013}, and observations show \\citep{Mannucci2010, Moran2012}, that if low-metallicity gas is accreted onto the galaxy and the local gas fraction rises, then the metal content is diluted and the total metallicity of the gas will decrease. This relationship is investigated in Figure \\ref{mu_met}, where we show the gas-phase metallicity as a function of the local gas fraction, $\\mu$ in intervals of total stellar mass. 
In narrow bins of stellar mass, we find a tight correlation between the local gas fraction and the metallicity of the ISM. Once again, there is a small difference in the metallicities between satellites and centrals, with the difference being largest in galaxies between $10^{9.5}$ and $10^{10} \\, \\mathrm{M_{\\odot}}$. \n\n\n\n\\begin{figure*}\n\\includegraphics{mu_met_PP04.pdf}\\\\\n\\includegraphics{mu_met_D16.pdf}\n\\caption{The dependence of $12+\\log(\\mathrm{O\/H})$ on the local gas fraction, $\\mu$ in different bins of stellar mass. The contours, greyscale and coloured points are the same as in Figure \\ref{sig_mass_met}. The difference in metallicity at fixed $\\mu$ between satellites and centrals is largest in the range $9.5 < \\log(M_{*}\/\\mathrm{M_{\\odot}}) < 10$, where it reaches $\\sim 0.015 \\, \\mathrm{dex}$. }\\label{mu_met}\n\\end{figure*}\n\n\nFocussing again on the lower-mass satellite galaxies in our sample, we see in Figure \\ref{mu_met_cen_mass} that the satellites of massive galaxies are more enriched at fixed $\\mu$ than the satellites of less massive centrals. Comparing the contours of spaxels in the $\\mu$-metallicity plane, we see that on average, the satellites of more massive centrals have a lower inferred gas fraction. Nevertheless, at fixed $\\mu$ the offset in O\/H remains.\n\n\n\\begin{figure}\n\\includegraphics{mu_met_cen_mass_PP04.pdf}\\\\\n\\includegraphics{mu_met_cen_mass_D16.pdf}\n\\caption{The relationship between $\\mu$ and O\/H for satellites of low-mass centrals (grey background with blue points indicating the median) and high-mass (red contours with red points indicating the medians) for PP04 O3N2 (top) and D16 N2S2H$\\alpha$ (bottom). At fixed gas fraction, the median metallicity is $\\sim 0.1 \\, \\mathrm{dex}$ higher for the satellites of massive centrals.}\\label{mu_met_cen_mass}\n\\end{figure}\n\n\n\n\\section{Discussion}\\label{Discussion}\n\\subsection{The impact of environment on local scaling relations}\nWe have shown that the metallicity versus stellar mass, local escape velocity and gas fraction local scaling relations vary with the environment that galaxies inhabit. In this study we utilised the mass of the largest galaxy in the group as our estimate of environment. This quantity is correlated with the total mass of the halo \\citep{Behroozi2010}, though does not suffer from the large uncertainties involved in estimating halo dynamical masses from spectroscopy \\citep[see][for an excellent discussion of this point]{Robotham2011}. For satellite galaxies, the magnitude of the difference in metallicity appears to be a function of the stellar mass of the galaxy that is central to the group. In Figures \\ref{sig_mass_met_cen_mass} and \\ref{mu_met_cen_mass} we showed that the satellites of central galaxies more massive than $\\log{\\mathrm{M_{*}}\/M_{\\odot}}>10.5$ have local metallicities that are enhanced by $\\sim 0.1 \\, \\mathrm{dex}$ over similar galaxies which are satellites of less massive ($\\log{\\mathrm{M_{*}}\/M_{\\odot}}<10$) centrals. This enhancement appears to be independent of the local gas fraction and escape velocity. 
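The offsets described above can be reproduced with a simple binned comparison. The sketch below is our own illustrative code (the bin edges are arbitrary choices, not taken from the text); it returns the median metallicity of two spaxel samples in common bins of a second quantity, such as $\\log \\Sigma_{*}$ or $\\log \\mu$, together with their difference.
\\begin{verbatim}
import numpy as np

def median_offset(x_a, z_a, x_b, z_b, bins):
    # Median 12+log(O/H) of samples a and b in common bins of x,
    # and the offset a - b in each bin.
    med_a, med_b = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_a = (x_a >= lo) & (x_a < hi)
        in_b = (x_b >= lo) & (x_b < hi)
        med_a.append(np.median(z_a[in_a]) if in_a.any() else np.nan)
        med_b.append(np.median(z_b[in_b]) if in_b.any() else np.nan)
    med_a, med_b = np.array(med_a), np.array(med_b)
    return med_a, med_b, med_a - med_b

# e.g. satellites of high-mass centrals versus satellites of
# low-mass centrals at fixed local stellar mass surface density:
# _, _, dZ = median_offset(logsig_hi, oh_hi, logsig_lo, oh_lo,
#                          bins=np.arange(7.0, 9.6, 0.25))
\\end{verbatim}
Bootstrap errors on each binned median can be attached in the same way as for the radial profiles described in Section \\ref{Methods}.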
\n\n\n\\subsection{Accounting for outflows with the gas-regulator model}\nWhile differences in the metallicity of satellite galaxies at fixed $\\Sigma_{*}$ and $\\mu$ may be suggestive of some intrinsic difference between the chemical evolution of satellites in different mass halos, these simple scaling relations taken individually are unable to account for all factors that may influence the oxygen abundance. Neither of these scaling relations explicitly accounts for the loss of metals and corresponding reduction in oxygen abundance through outflows. To control for all of these factors at once, we fit the gas regulator model of \\cite{Lilly2013} to the data. \n\nThe gas regulator model for galaxy evolution makes the simple assumption that a galaxy's current star formation rate and metallicity are largely determined by the present day gas fraction. While it was originally devised to apply to galaxies as a whole, some authors have recently shown that it can be applied to galaxies locally on $\\sim \\mathrm{kpc}$ scales \\citep{Carton2015, BarreraBallesteros2018}. In their derivation of this model, \\cite{Lilly2013} showed that the metal content of galaxies will reach an equilibrium on timescales shorter than the time it takes for their total gas content to be depleted. At equilibrium, the metallicity is\n\n\\begin{equation}\\label{gas_regulator}\nZ_{eq} = Z_{0} + \\frac{y}{1 + r_{gas} + (1-R)^{-1} \\left(\\lambda + \\epsilon^{-1}\\frac{d \\ln(r_{gas})}{dt} \\right)},\n\\end{equation}\nwhere $r_{gas}$ is the ratio of gas to stellar mass, $R$ is the fraction of gas returned from stars to the ISM by stellar evolution, and $\\epsilon$ is the star formation efficiency. While this equation contains several unknown quantities, we can fix these to sensible values based on previous estimates from the literature. We adopt a value of $R = 0.4$, which is consistent with the predictions of stellar population synthesis models \\citep{Bruzual2003}, and is in line with the assumptions underlying previous work on this topic \\citep{Lilly2013,Carton2015,BarreraBallesteros2018}. Further, based on fitting the mass-metallicity relation for SDSS galaxies, \\cite{Lilly2013} were able to constrain the product $ \\epsilon^{-1}\\frac{d\\ln( r_{gas})}{dt} = -0.25$. The nucleosynthetic yield, $y$, is also not well known. The yield per stellar generation (and gas return fraction, $R$) is dependent on the stellar initial mass function, which some suggest may not be universal \\citep[e.g.][]{Gunawardhana2011,Parikh2018}. \\cite{Finlator2008} estimate the yield to be in the range $0.008 \\leq y \\leq 0.023$, but we assume a value near the middle of this range, $y = 0.014$. This is the value calculated by \\cite{BarreraBallesteros2018} based on both theoretical modeling using {\\tt STARBURST99} \\citep{Leitherer2014} and closed-box modeling of galaxy cluster data \\citep{Renzini2014}. We assume that this value is constant and valid throughout our entire sample. For a rigorous discussion of the impact of variations of the assumed yield on the calibration and interpretation of metallicities, see \\cite{Vincenzo2016}.\n\n\n\n\nIn the gas regulator model, the outflows are described by the mass-loading factor, $\\lambda$, which is the ratio of the rate of mass loss due to stellar feedback and winds to the star formation rate. 
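Before specifying $\\lambda$, we note that Equation \\ref{gas_regulator} can be evaluated directly once the parameters above are fixed. The following is a minimal sketch (our own illustrative code, which leaves the mass-loading factor as a free input and adopts the values of $y$, $R$ and $\\epsilon^{-1}\\,d\\ln(r_{gas})\/dt$ quoted above).
\\begin{verbatim}
def equilibrium_metallicity(r_gas, lam, z0,
                            y=0.014, R=0.4, eps_term=-0.25):
    # Lilly et al. (2013) equilibrium metallicity:
    # Z_eq = Z_0 + y / (1 + r_gas + (lam + eps_term) / (1 - R)),
    # where eps_term stands for eps^-1 * dln(r_gas)/dt = -0.25.
    # Z_eq and z0 are metal mass fractions; converting to 12+log(O/H)
    # requires an assumed solar abundance scale.
    return z0 + y / (1.0 + r_gas + (lam + eps_term) / (1.0 - R))

# e.g. equilibrium_metallicity(r_gas=0.5, lam=1.0, z0=5.0e-4)
\\end{verbatim}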
We parametrise the mass-loading factor following the formulation of \\cite{Peeples2011}, assuming that the metallicity of the outflowing gas is the same as the metallicity in the ISM of the galaxy,\n\n\\begin{equation}\\label{lambda}\n\\lambda = \\left( \\frac{v_{0}}{v_{esc}(r)}\\right)^{\\alpha}.\n\\end{equation}\n\n\\cite{Peeples2011} suggest either $\\alpha=1$ or $2$, however we note that they parametrise $\\lambda$ in terms of the virial velocity of galaxy halos. The relationship between the virial velocity and the local escape velocity is complicated, and so to account for this we allow $\\alpha$ to vary freely. \\cite{BarreraBallesteros2018} also include an additive constant in their parametrisation of $\\lambda$, which imposes a minimum level of outflows from even the deepest potential well. For a reasonable choice of yield, we find that this has the effect of limiting the maximum metallicity that a gas-regulated system can achieve.\n\n\n\\subsubsection{Fitting the \\cite{Pettini2004} metallicity}\nIn order to constrain the values of $v_{0}$ and $\\alpha$ for our model, we fit equation \\ref{gas_regulator} to the metallicity, gas fraction and local escape velocities inferred from the MaNGA spaxel data for all galaxies in our sample. We find $v_{0}=368 \\, \\mathrm{km \\, s^{-1}}$, and $\\alpha = 0.52$. In their work on deriving the $v_{esc}$-dependence of $\\lambda$, \\cite{BarreraBallesteros2018} noted that the gas regulator model does not necessarily provide a good fit to the data, though it was preferred to the leaky-box model of \\cite{Zhu2017} on the grounds that it provided more realistic estimates of $\\lambda$. In our fits of the gas-regulator model to the MaNGA data on kpc scales, we find smaller residuals at low gas fraction. This is a direct result of our choice not to include an additive constant in our parametrisation of $\\lambda$. In the analysis that follows we fix the dependence of $\\lambda$ on the escape velocity, reducing this problem to fitting only one variable, the metallicity of accreted gas, $Z_{0}$.\n\nWe perform a least-squares fit of the gas regulator model to the O3N2-based gas-phase metallicities, fitting for $Z_{0}$ in the sub-populations where the largest difference in metallicity is seen. This is for satellite galaxies with stellar masses in the range $9<\\log (M_{*}\/\\mathrm{M_{\\odot}})<10$, split based on the stellar mass of the corresponding central galaxy. For satellites of low-mass centrals ($\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$), the metallicity of the gas precipitating onto their discs inferred from the modeling is $Z_{0} = (4.68 \\pm 0.11) \\times 10^{-4}$, corresponding to $12+\\log(\\mathrm{O\/H})=7.46 \\pm 0.01$. For the gas being accreted onto the $9<\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$ satellites of high-mass central galaxies, we derive a metallicity of $Z_{0}= (1.17 \\pm 0.001) \\times 10^{-3}$, or $12+\\log(\\mathrm{O\/H})=7.87 \\pm 0.003$. \n\n\n\\subsubsection{Fitting the \\cite{Dopita2016} metallicity}\nThe metallicities derived from the \\cite{Dopita2016} N2S2H$\\alpha$ calibration have a different absolute abundance scaling and cover a larger range for the same set of spectra. This can be seen by comparing the $y$-axes in Figure \\ref{sig_mass_met}. Using the same values of the yield and gas return fraction as were used to fit the O3N2 metallicities, the data favours a negative value of $Z_{0}$, which is unphysical. 
With this indicator, the values of $\\lambda$ allowed by this parametrisation that also gives an appropriate shape to the distribution of modeled data in the $\\mu -Z $ plane, are too large. Using the $\\lambda$ parametrisation of \\cite{BarreraBallesteros2018}, $\\lambda=\\left(v_{0}\/v_{esc} \\right)^{\\alpha} + \\lambda_{0}$, we find $Z_{0} = (3.0 \\pm 0.12 )\\times 10^{-4}$ or $12+\\log(\\mathrm{O\/H})=7.27 \\pm 0.01$ for the satellites of low-mass centrals. For the satellites of high-mass centrals we find $Z_{0} = (8.6 \\pm 0.1 )\\times 10^{-4}$ or $12+\\log(\\mathrm{O\/H})=7.86 \\pm 0.003$. \n\n\n\nWe note that the absolute value for these inferred quantities is correlated with a number of unconstrained parameters including the nucleosynthetic yield of oxygen, $y$, the gas return fraction from stars, $R$, and the precise form of the mass loading factor, $\\lambda$. Nevertheless, we argue that the assumption that these parameters do not vary between star-forming galaxies in a relatively narrow mass range is reasonable, and that the choice to fix them for this comparison is justified. With this limitation it is not possible to derive an absolute abundance for the accreted gas, but the existence of a difference is robust. As was shown in Figures \\ref{sig_mass_met_cen_mass} and \\ref{mu_met_cen_mass}, the difference in local metallicity scaling relations is significant between these two galaxy subpopulations. The gas regulator model provides an interpretive framework to describe these differences in terms of the variation of the metallicity of the intergalactic medium in different environments, while controlling for small differences in the estimated local gas fraction and escape velocity.\n\n\n\n\\subsection{Can starvation explain our results?}\\label{Starvation}\nOur results are analogous to those seen by previous studies of the environmental dependence of the global mass-metallicity relation \\citep[e.g.][]{Cooper2008,Pasquali2012,Peng2014,Wu2017}. While different studies have found qualitatively similar results, there is considerable disagreement in the interpretation. \\cite{Peng2014}, argue that the primary driver of this trend must be the elevation of the metallicity of gas being accreted onto galaxies in dense environments. This argument hinges on their observation that the distribution of star-formation rates in their sample is independent of environment.\nThis conclusion contrasts starkly with the interpretation of \\cite{Wu2017}, who suggested that the environmental variation of the mass-metallicity relation can be explained by the reduction in the gas-fractions of galaxies with the local galaxy overdensity. \n\n\\begin{figure}\n\\includegraphics{sig_M_sig_SFR_cen_mass.pdf}\n\\caption{The star formation rate surface density as a function of stellar mass surface density for galaxies with $9<\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$. The greyscale shows the distribution of measurements from satellites of low-mass centrals ($\\log(M_{*,cen}\/\\mathrm{M_{\\odot}})<10$), with the median of this distribution shown by blue points, and the $16$th and $84$th percentiles shown by blue lines. The red contours indicate the $\\Sigma_{*} - \\Sigma_{SFR}$ distribution for satellites of massive galaxies ($\\log(M_{*,cen}\/\\mathrm{M_{\\odot}})>10.5$), with the red points showing the median and the red lines marking the $16$th and $84$th percentile of the distribution. 
There is very little difference in the two distributions for the vast majority of spaxels in the two samples.}\\label{sig_M_sig_SFR_cen_mass}\n\\end{figure}\n\n\nStarvation, whereby the accretion of gas onto galaxies is curtailed and the gas reservoir is not replenished following star formation \\citep{Larson1980}, will have the effect of increasing the gas-phase metallicity of a galaxy or a region of the galaxy. This is a natural consequence of maintaining a constant metal yield from stellar evolution, while reducing the replenishment of the reservoir with relatively low-metallicity gas. Within the framework of gas-regulated galaxy evolution, this implies an anti-correlation between the gas surface density or star formation rate surface density and the metallicity in the gas. Starvation has been suggested as a key component for determining the star-forming properties of galaxies today \\citep{Peng2015,Trussler2018}, with environment appearing to play a role in instigating this process \\citep{vonderlinden2010, Davies2016}. \n\nWhile we do infer gas fractions that are, on average, lower for galaxies that are in more extreme environments (for example low-mass satellites of high-mass galaxies), we find that at fixed gas fraction the metallicity is higher for satellites relative to centrals, even in spaxels with high $\\mu$. The differing distributions of $\\log(\\mu)$ evident in Figure \\ref{mu_met_cen_mass} are largely driven by the differences in the distributions of $\\Sigma_{*}$. Although the distributions of total $M_{*}$ between the two subsamples used are not significantly different, the distributions of local stellar mass surface densities are. In Figure \\ref{sig_M_sig_SFR_cen_mass} we show the joint distributions for $\\Sigma_{*}$ and $\\Sigma_{SFR}$ for low-mass satellites, split by the stellar mass of the central galaxy. At fixed $\\Sigma_{*}$, the difference between the means of the $\\Sigma_{SFR}$ is smaller than $0.03$ dex, except above $\\log(\\Sigma_{*})=8.1$, but this range accounts for only $\\sim 10 \\% $ of the data and therefore has a minimal impact on our model fitting. \n\nIt is possible that the distribution of star formation has also changed. \\cite{Schaefer2019} showed that in dense environments, the outer parts of a galaxy can be quenched, leaving star formation in the inner regions unaffected. This transformation was nevertheless accompanied by a reduction in the total specific star formation rate (sSFR). To test for this, we perform a KS-test on the integrated specific star formation rates of the two subsamples. This yields $D = 0.17$ with $p=0.58$. Furthermore, the medians of $\\log(\\mathrm{sSFR\/yr^{-1}})$ for the two subsamples are $-10.12 \\pm 0.07$ and $-10.21 \\pm 0.04$, where the errors on the medians have been estimated using a bootstrap resampling. The difference of medians is within the error margin. \n\nThe similar distributions of $\\Sigma_{SFR}$ and sSFR disfavour the interpretation that the changing gas fraction due to starvation is responsible for the environmental differences in metallicity on kpc scales within our sample. This is not to say that starvation does not occur in dense environments; our sample selection simply favours the most star-forming galaxies, which are unlikely to have had their star formation rates reduced by environmental effects yet. 
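The two-sample comparison used above is straightforward to reproduce. A hedged sketch using {\\tt scipy} is given below; the array names are ours, and the inputs are the integrated $\\log(\\mathrm{sSFR})$ values of the two satellite subsamples.
\\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def compare_ssfr(log_ssfr_a, log_ssfr_b, n_boot=1000, seed=0):
    # Two-sample KS test on the integrated log sSFR distributions,
    # plus bootstrap errors on the two sample medians.
    rng = np.random.default_rng(seed)
    stat, pval = ks_2samp(log_ssfr_a, log_ssfr_b)

    def median_and_error(x):
        meds = [np.median(rng.choice(x, x.size)) for _ in range(n_boot)]
        return np.median(x), np.std(meds)

    return stat, pval, median_and_error(log_ssfr_a), median_and_error(log_ssfr_b)
\\end{verbatim}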
Environmental differences in metallicity may occur in satellite galaxies before the onset of environment quenching.\n\n\n\\subsection{Comparison to simulations}\nNumerical simulations of galaxy evolution are beginning to show that the gas being accreted onto galaxies cannot be assumed to be pristine in all environments \\citep{Oppenheimer2010,Gupta2018}. In the simulations, the origin of accreted gas is observed to be highly dependent on redshift, with cosmological accretion of low-metallicity gas dominating at high redshift. However, as time progresses, feedback from star formation and AGN activity expel gas from the interstellar media of galaxies, which enriches their local environment with material that subsequently falls onto their neighbours. In the FIRE simulations, \\cite{AnglesAlcazar2017} found that the exchange of gas between galaxies that is facilitated by galactic winds dominates the accretion budget by $z=0$. \n\n\\cite{Gupta2018} explored this effect using data from the IllustrisTNG simulations. They showed that the enrichment of the intergalactic medium and the associated accretion onto galaxies is dependent on both the halo mass and whether a galaxy is infalling into its host halo or whether it has been a satellite for some time. At $z<0.5$, they find that the metallicity of gas being accreted onto galaxies with $9<\\log(M_{*}\/M_{\\odot})<10$ that are infalling into clusters is approximately $0.35 \\, Z_{\\odot}$, which is $1.5$ - $2$ times more metal rich than for similar galaxies in the field. This is consistent with the metallicity difference that we have inferred between the satellites of low-mass and high-mass centrals, though the absolute abundances differ. We again note that the value of $Z_{0}$ returned when the model represented by Equation \\ref{gas_regulator} is fitted to the data is sensitive to the precise values of the yield, $y$, and the gas return fraction, $R$, which are not well constrained by observations. The choice of these values will change the estimate of $Z_{0}$, but the relative difference between subsamples will not be greatly affected.\n\n\n\\subsection{Other studies of the environmental dependence of metallicity}\nThe impact of environment on galaxy evolution, in particular star formation and metallicity, is subtle. For this reason there have been very few observational works that measure the impact of environment on the spatial patterns of chemical abundances in galaxies. It has only been recently that large enough samples of integral field spectroscopic data have become available to adequately measure these effects. \n\nIn a recent study, Lian et al. (\\emph{in prep.}) used MaNGA data to study the metallicity gradients of galaxies as a function of the local environmental overdensity. They find that the metallicity gradients in low-mass satellite galaxies are shallower in dense environments, with a higher metallicity in their outer parts than similar galaxies in the field. They also find that the star formation rate gradients in galaxies in dense environments are steeper and conclude that the most likely explanation for these observations is a variation in the gas accretion timescale in different environments. Superficially this would seem to contradict our results, but we argue that this apparent disagreement can be resolved by noting the differences between the samples of galaxies considered. Lian et al. 
place less stringent constraints on the number of star-forming spaxels than we do, meaning that their galaxies have lower specific star formation rates on average. They therefore study galaxies that are likely to have inhabited their host halos for a longer period of time, and are more affected by environment quenching processes. This point is made in Section \\ref{Starvation}, where we rule out starvation as the primary driver of the environmental effects discussed in this work. Additionally, we note that Lian et al. placed no constraints on the inclination of galaxies in their sample to the line of sight. This may explain the differences in the metallicity gradients to those reported in our work.\n\n\n\n\n\\section{Conclusions}\nWe have estimated local metallicities, gas fractions, escape velocities and star formation rate surface densities for a sample of nearly face-on star-forming galaxies observed by MaNGA. In this sample we have explored the impact of the environment on local scaling relations between these estimated quantities, with a particular focus on satellite galaxies. \nWe find\n\\begin{itemize}\n\\item{At fixed stellar mass, we find a small but global offset of $0.025 \\, \\mathrm{dex}$ (for O3N2) or $0.05 \\, \\mathrm{dex}$ (for N2S2H$\\alpha$) in the metallicities of galaxies between satellites and centrals. For our sample we find little evidence for changes in the metallicity gradient between satellites and centrals.}\n\\item{The disparity between the metallicity of satellites and centrals is also evident in the O\/H -- $\\Sigma_{*}$ and O\/H -- $\\mu$ local scaling relations. We find the greatest offset when we split our satellite sample by the stellar mass of the galaxy that is central to the respective halo. For satellite galaxies in the range $9<\\log(M_{*}\/M_{\\odot})<10$, the local scaling relations are $\\sim 0.1 \\, \\mathrm{dex}$ more oxygen rich for satellites of hosts more massive than $10^{10.5} \\, M_{\\odot}$ than for hosts less massive than $10^{10} \\, M_{\\odot}$.}\n\\item{The offset in metallicity for satellite galaxies is found to exist between different environments at constant stellar mass surface density, gas mass fraction and star formation rate surface density. From these we conclude that the observed differences cannot be explained by gas starvation occurring in satellites around more massive centrals. Interestingly, the impact of environment on the chemical enrichment of galaxies appears to precede the onset of the quenching of star formation in their disks. }\n\\item{Measured on kpc-scales, local metallicities and gas fractions are found to be quantitatively consistent with the gas regulator model of \\cite{Lilly2013}. We assume that the mass loading factor describing outflows in galaxies is a function of the local escape velocity. Within the framework of the gas regulator model the only explanation for the elevated metallicity is an increase in $Z_{0}$ for satellites of high-mass galaxies. We estimate that the oxygen abundance in the inflowing gas changes from $12+\\log(\\mathrm{O\/H}) = 7.54 \\pm 0.01$ to $7.86 \\pm 0.003$ using the \\cite{Pettini2004} O3N2 indicator, or $12+\\log(\\mathrm{O\/H}) = 7.27 \\pm 0.01 $ to $7.86 \\pm 0.003 $ for N2S2H$\\alpha$.}\n\n\n\\end{itemize}\n\nGiven these conclusions, we interpret the enhanced metallicity of the satellites of more massive centrals to be evidence for the exchange of enriched gas between galaxies. 
In this picture, which has been motivated by both observations \\citep{Peng2014} and simulations \\citep{Oppenheimer2010,AnglesAlcazar2017,Gupta2018}, feedback driven winds expel metal-rich gas from a massive star-forming central which is subsequently accreted onto nearby satellites. While our estimates for the metallicity of gas accreted onto satellite galaxies are a factor of $\\sim 3$ lower than the predictions of \\cite{Gupta2018}, we note that the values returned by our modeling are subject to inherent uncertainties in the nucleosynthetic yield, the gas return fraction from stellar evolution, and the absolute abundance scaling of the strong-line metallicity diagnostics. Notwithstanding these systematic effects, the inferred differential in the metallicity of gas accreted onto satellites in different environments is qualitatively in good agreement with the simulations.\n\nThe impact of environment on the gas phase metallicity distribution of galaxies is likely to be complicated and multifaceted. In addition to the accretion of enriched gas in dense environments that we have studied here, tidal interactions, mergers and ram pressure can influence the distribution of metals in a galaxy. A complete understanding of the effect of environment on gas phase metallicities must take these other processes into account. A more comprehensive analysis of the detailed environmental dependence of the chemical properties of galaxies will be made possible when the full MaNGA sample becomes available, or with future integral field spectroscopic surveys such as HECTOR \\citep{Bryant2016}.\n\n\n\n\\acknowledgements \nWe would like to thank the anonymous referee, whose constructive comments were extremely helpful in clarifying some aspects of this paper. \nALS, ZJP and CT acknowledge NSF CAREER Award AST-1554877. AJ acknowledges NSF Award 1616547. RM acknowledges ERC Advanced Grant 695671 \"QUENCH\" and support from the the Science and Technology Facilities Council (STFC).\n\nThis research made use of \\texttt{Astropy}, a community-developed core \\texttt{python} package for astronomy \\citep{Astropy2013,Astropy2018}; \\texttt{matplotlib} \\citep{Matplotlib}, an open-source \\texttt{python} plotting library; and \\texttt{LMFIT} \\citep{LMFIT}, an interface for non-linear optimization in \\texttt{python}.\nFunding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. 
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\\'{i}sica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) \/ University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\\\"{u}r Astrophysik Potsdam (AIP), Max-Planck-Institut f\\\"{u}r Astronomie (MPIA Heidelberg), Max-Planck-Institut f\\\"{u}r Astrophysik (MPA Garching), Max-Planck-Institut f\\\"{u}r Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\\'{o}rio Nacional \/ MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\\'{o}noma de M\\'{e}xico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.\n\n\n\n \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCharalambos presented the $q-$ deformed Vandermonde and Cauchy formulae. Moreover, the $q-$ deformed univariate discrete probability distributions were investigated.Their properties and limiting distributions were derived \\cite{CA1}.\n\nFurthermore, the $q-$ deformed multinomial coefficients was defined and their recurrence relations were deduced. Also, the $q-$ deformed multinomial and negative $q-$ deformed multinomial probability distributions of the first and second kind were presented \\cite{CA2}. \n\nThe same author extended the multivariate $q-$ deformed vandermonde and Cauchy formulae. Also, the multivariate $q-$ Pol\\'ya and inverse $q-$ Pol\\'ya were constructed \\cite{CA3}.\n\nLet $p$ and $q$ be two positive real numbers such that $ 00, \\forall n\\in\\mathbb{N},$ and $\\mathcal{R}(1,1)=0$ by definition. We denote by $\\mathbb{D}_{R}$ the bidisk \\begin{eqnarray*}\n\t\\mathbb{D}_{R}\n\n\n\t&=&\\left\\lbrace w=(w_1,w_2)\\in\\mathbb{C}^2: |w_j| 1$, it is possible to build a\nmicro tree decomposition $MS$ of $P$ in linear time such that\n$|MS| = O(\\ceil{n_P\/s})$ and $|V(M)| \\leq s$ for any $M \\in MS$\n\\end{lemma}\n\n\\subsection{Implementing the Algorithm} In this section we show\nhow to implement the $\\ensuremath{\\textsc{Down}}$ procedure using the micro tree\ndecomposition. First decompose $P$ according to\nLemma~\\ref{clustering} for a parameter $s$ to be chosen later.\nHence, each micro tree has at most $s$ nodes and $|MS| =\nO(\\ceil{n_P\/s})$. We represent the state $X$ compactly using a bit vector\nfor each micro tree. Specifically, for any micro tree $M$ we store\na bit vector $X_M = [b_{1}, \\ldots, b_{s}]$, such that $X_M[i] = 1$\niff the $i$th node in a preorder traversal of $M$ is in $X$. If\n$|V(M)| < s$ we leave the remaining values undefined. Later we\nchoose $s= \\Theta(\\log n_{T})$ such that each bit vector can be\nrepresented in a single word.\n\nNext we define a $\\ensuremath{\\textsc{Down}}_{M}$ procedure on each micro tree $M\\in\nMS$. 
Due to the overlap between micro trees, the $\\ensuremath{\\textsc{Down}}_{M}$\nprocedure takes a bit $b$ which will be used to propagate\ninformation between micro trees. For each micro tree $M \\in MS$,\nbit vector $X_M$, bit $b$, and $y\\in V(T)$ define:\n\\begin{relate}\n\\item[$\\ensuremath{\\textsc{Down}}_{M}(X_M, b, y)$:] Compute the state $X'_M := \\ensuremath{\\textsc{Child}}(\\{x \\in X_M \\mid \\ensuremath{\\mathrm{label}}(x) = \\ensuremath{\\mathrm{label}}(y)\\}) \\cup \\{x \\in X_M \\mid \\ensuremath{\\mathrm{label}}(x) \\neq \\ensuremath{\\mathrm{label}}(y)\\}$. If $b=0$, return $X_M'$, else return $X_M' \\cup \\{\\ensuremath{\\mathrm{root}}(M)\\}$.\n\\end{relate}\nLater we will show how to implement $\\ensuremath{\\textsc{Down}}_{M}$ in constant time\nfor $s = \\Theta(\\log n_{T})$. First we show how to use $\\ensuremath{\\textsc{Down}}_M$ to\nsimulate $\\ensuremath{\\textsc{Down}}$ on $P$. We define a recursive procedure $\\ensuremath{\\textsc{Down}}$\nwhich traverses the hierarchy of micro trees. For micro tree\n$M$, state $X$, bit $b$, and $y \\in V(T)$ define:\n\\begin{relate}\n\\item[$\\ensuremath{\\textsc{Down}}(X,M,b,y)$:] Let $M_1, \\ldots, M_k$ be the children of $M$.\n\\begin{enumerate}\n\\item Compute $X_M := \\ensuremath{\\textsc{Down}}_{M}(X_M, b, y)$.\n\\item For $i:=1$ to $k$ do:\n\\begin{enumerate}\n\\item Compute $\\ensuremath{\\textsc{Down}}(X, M_i, b_{i}, y)$, where $b_{i} = 1$ iff\n$\\ensuremath{\\mathrm{root}}(M_i) \\in X_M$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{relate}\nIntuitively, the $\\ensuremath{\\textsc{Down}}$ procedure works in a top-down fashion\nusing the $b$ bit to propagate the new state of the root of each micro\ntree. To solve the problem within our framework we initially\nconstruct the state representing $\\{\\ensuremath{\\mathrm{root}}(P)\\}$. Then, at each\nstep we call $\\ensuremath{\\textsc{Down}}(X, R_{j}, 0, y)$ on each root micro tree $R_{j}$.\nWe formally show that this is correct:\n\\begin{lemma}\nThe above algorithm correctly simulates the $\\ensuremath{\\textsc{Down}}$ procedure on\n$P$.\n\\end{lemma}\n\\begin{proof}\nLet $X$ be the state and let $X' :=\\ensuremath{\\textsc{Down}}(X, y)$. For\nsimplicity, assume that there is only one root micro tree $R$.\nSince the root micro trees can only overlap at $\\ensuremath{\\mathrm{root}}(P)$ it is\nstraightforward to generalize the result to any number of roots.\nWe show that if $X$ is represented by bit vectors at each micro\ntree then calling $\\ensuremath{\\textsc{Down}}(X, R, 0, y)$ correctly produces the new\nstate $X'$.\n\nIf $R$ is the only micro tree then only line 1 is executed. Since\n$b = 0$ this produces the correct state by definition of\n$\\ensuremath{\\textsc{Down}}_{M}$. Otherwise, consider a micro tree $M$ with children\n$M_{1}, \\ldots, M_{k}$ and assume that $b = 1$ iff $\\ensuremath{\\mathrm{root}}(M) \\in\nX'$. Line 1 computes and stores the new state returned by\n$\\ensuremath{\\textsc{Down}}_{M}$. If $b=0$ the correctness follows immediately. If\n$b=1$ observe that $\\ensuremath{\\textsc{Down}}_{M}$ first computes the new state and\nthen adds $\\ensuremath{\\mathrm{root}}(M)$. Hence, in both cases the state of $M$ is\ncorrectly computed. Line 2 recursively computes the new state of\nthe children of $M$.\n\\end{proof}\n\nIf each micro tree has size at most $s$ and $\\ensuremath{\\textsc{Down}}_{M}$ can be\ncomputed in constant time, it follows that the above algorithm\nimplements each $\\ensuremath{\\textsc{Down}}$ operation in $O(\\ceil{n_{P}\/s})$ time. 
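For concreteness, the following short \\texttt{Python} sketch illustrates one way to organize this recursive traversal; it is an illustration only, not the implementation analyzed in this paper. The helper names are assumptions of the sketch: \\texttt{X} maps each micro tree to its bitmask, \\texttt{down\\_M} stands for the constant-time procedure $\\ensuremath{\\textsc{Down}}_{M}$, \\texttt{children} gives the child micro trees, and \\texttt{root\\_bit\\_in\\_parent} gives the bit of a child's root inside its parent's bit vector.
\\begin{verbatim}
# Sketch of the recursive Down traversal over the micro tree hierarchy.
# X[M] is the bitmask of micro tree M (bit i <-> i-th node of M in preorder),
# children[M] lists the child micro trees of M, down_M implements Down_M,
# and root_bit_in_parent[C] masks the bit of root(C) inside X[parent of C].
def down(X, M, b, y, down_M, children, root_bit_in_parent):
    X[M] = down_M(M, X[M], b, y)              # step 1: update this micro tree
    for C in children[M]:                     # step 2: recurse on the children
        b_C = 1 if X[M] & root_bit_in_parent[C] else 0
        down(X, C, b_C, y, down_M, children, root_bit_in_parent)
\\end{verbatim}
The bit \\texttt{b\\_C} plays exactly the role of the bit $b$ above: it tells a child micro tree whether its root has just been inserted by its parent.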
In the following section we show\nhow to do this for $s = \\Theta(\\log n_{T})$, while maintaining\nlinear space.\n\n\\subsection{Representing Micro Trees} In this section we show\nhow to preprocess all micro trees $M \\in MS$ such that\n$\\ensuremath{\\textsc{Down}}_{M}$ can be computed in constant time. This preprocessing\nmay be viewed as an application of the ``Four Russians'' technique~\\cite{ADKF1970}. To\nachieve this in linear space we need the following auxiliary\nprocedures on micro trees. For each micro tree $M$, bit vector\n$X_M$, and $\\alpha \\in \\Sigma$ define:\n\\begin{relate}\n\\item[$\\ensuremath{\\textsc{Child}}_{M}(X_M)$:] Return the bit vector of nodes in $M$ that are children of nodes in $X_M$.\n\\item[$\\ensuremath{\\textsc{Eq}}_{M}(\\alpha)$:] Return the bit vector of nodes in $M$ labeled $\\alpha$.\n\\end{relate}\nBy definition it follows that:\n\\begin{equation*}\n\\begin{aligned}\n\\ensuremath{\\textsc{Down}}_{M}(X_M,b, y) &=\n\\begin{cases}\n\\ensuremath{\\textsc{Child}}_{M}(X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y)))\\; \\cup \\\\\n\\quad (X_M \\backslash (X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y)))) & \\text{if $b = 0$}, \\\\\n\\ensuremath{\\textsc{Child}}_{M}(X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y))) \\; \\cup \\\\\n\\quad (X_M \\backslash (X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y)))) \\cup \\{\\ensuremath{\\mathrm{root}}(M)\\}\n& \\text{if $b=1$}.\n\\end{cases} \\\\\n\\end{aligned}\n\\end{equation*}\nRecall that the bit vectors are represented in a single word. Hence,\ngiven $\\ensuremath{\\textsc{Child}}_{M}$ and $\\ensuremath{\\textsc{Eq}}_{M}$ we can compute $\\ensuremath{\\textsc{Down}}_M$ using\nstandard bit operations in constant time.\n\nNext we show how to efficiently implement the operations. For each\nmicro tree $M \\in MS$ we store the value $\\ensuremath{\\textsc{Eq}}_{M}(\\alpha)$ in a\nhash table indexed by $\\alpha$. Since the total number of\ndifferent characters in any $M\\in MS$ is at most $s$, the hash\ntable $\\ensuremath{\\textsc{Eq}}_{M}$ contains at most $s$ entries. Hence, the total\nnumber of entries in all hash tables is $O(n_{P})$. Using perfect\nhashing we can thus represent $\\ensuremath{\\textsc{Eq}}_{M}$ for all micro trees, $M\\in\nMS$, in $O(n_{P})$ space and $O(1)$ worst-case lookup time. The\npreprocessing time is $O(n_{P})$ w.h.p. To get a worst-case bound we\nuse the deterministic dictionary of Hagerup et al.\n\\cite{HMP2001} with $O(n_{P}\\log n_{P})$ worst-case preprocessing\ntime.\n\nNext consider implementing $\\ensuremath{\\textsc{Child}}_{M}$. Since this\nprocedure is independent of the labeling of $M$ it suffices to\nprecompute it for all \\emph{topologically} different rooted trees\nof size at most $s$. The total number of such trees is less than\n$2^{2s}$ and the number of different states in each tree is at\nmost $2^{s}$. Therefore $\\ensuremath{\\textsc{Child}}_{M}$ has to be computed for a\ntotal of $2^{2s}\\cdot 2^{s} = 2^{3s}$ different inputs. For any\ngiven tree and any given state, the value of $\\ensuremath{\\textsc{Child}}_{M}$ can be\ncomputed and encoded in $O(s)$ time. In total we can precompute\nall values of $\\ensuremath{\\textsc{Child}}_{M}$ in $O(s2^{3s})$ time. Choosing the\nlargest $s$ such that $3s + \\log s \\leq \\log n_{T}$ (hence $s =\n\\Theta(\\log n_{T})$) we can precompute\nall values of $\\ensuremath{\\textsc{Child}}_{M}$ in $O(s2^{3s}) = O(n_{T})$ time and space. 
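With the two tables in place, and purely for illustration, $\\ensuremath{\\textsc{Down}}_{M}$ itself reduces to a constant number of word operations. The sketch below is one possible realization under the same assumptions as the previous sketch; the table layout (\\texttt{eq\\_table} per micro tree, \\texttt{child\\_table} indexed by a topology code and a state) is illustrative and not prescribed by the construction.
\\begin{verbatim}
# Sketch of the word-parallel Down_M.  eq_table[M] maps a label to the
# bitmask Eq_M(label); child_table[shape[M]][S] is the precomputed bitmask
# Child_M(S) for the topology code shape[M] of M.  The root of M is the
# first node in preorder, so its bit is assumed to be bit 0.
ROOT_BIT = 1

def down_M(M, X_M, b, y_label, eq_table, child_table, shape):
    eq = eq_table[M].get(y_label, 0)          # Eq_M(label(y))
    matched = X_M & eq                        # nodes of X_M labelled label(y)
    new_state = child_table[shape[M]][matched] | (X_M & ~eq)
    return (new_state | ROOT_BIT) if b else new_state
\\end{verbatim}
Each call thus costs one hash lookup, one table access and a few word operations, in line with the constant-time bound claimed above.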
Each of\nthe inputs to $\\ensuremath{\\textsc{Child}}_{M}$ are encoded in a single word such that\nwe can look them up in constant time.\n\nFinally, note that we also need to report the leaves of a state efficiently since this is needed in line 1 in the $\\ensuremath{\\textsc{Visit}}$-procedure. To do this compute the state $L$ corresponding to all leaves in $P$. Clearly, the leaves of a state $X$ can be computed by performing a bitwise AND of each pair of bit vectors in $L$ and $X$. Computing $L$ uses $O(n_{P})$ time and the bitwise AND operation uses $O(\\ceil{n_{P}\/s})$ time.\n\nCombining the results, we decompose $P$, for $s$ as described\nabove, and compute all values of $\\ensuremath{\\textsc{Eq}}_{M}$ and $\\ensuremath{\\textsc{Child}}_{M}$. Then, we solve TPS using the heavy-path\ntraversal. Since $s = \\Theta(\\log n_{T})$, from Lemmas~\\ref{traversal} and \\ref{clustering} we have the following\ntheorem:\n\\begin{theorem}\\label{faster} For trees $P$ and $T$ the tree path subsequence problem can be solved in $O(\\frac{n_Pn_T}{\\log n_T} +n_T+ n_P\\log n_P)$ time and $O(n_P + n_T)$ space.\n\\end{theorem}\nCombining the results of Theorems~\\ref{simple} and \\ref{faster} proves Theorem~\\ref{main}.\n\n\\section{Acknowledgments}\nThe authors would like to thank Anna {\\\"O}stlin Pagh for many helpful comments.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMost of the problems that arise in different disciplines of science can be\nformulated by the equations in the for\n\\begin{equation}\nFx=0\\text{,} \\label{eqn1}\n\\end{equation\nwhere $F$ is some function. The equations given by (1) can easily be\nreformulated as the fixed point equations of type \n\\begin{equation}\nTx=x\\text{.} \\label{eqn2}\n\\end{equation\nwhere $T$ is a self-map of an ambient space $X$ and $x\\in X$. These\nequations are often classified as linear or nonlinear, depending on whether\nthe mappings used in the equation is linear with respect to the variables.\nOver the years, a considerable attention has been paid to solving such\nequations by using different techniques such as direct and iterative\nmethods. In case of linear equations, both direct and iterative methods are\nused to obtain solutions of the equations. But in case of nonlinear\nequations, due to various reasons, direct methods can be impractical or fail\nin solving equations, and thus iterative methods become a viable\nalternative. Nonlinear problems are of importance interest to\nmathematicians, physicists and engineers and many other scientists, simply\nbecause most systems are intrinsically nonlinear in nature. That is why,\nresearchers in various disciplines of sciences are often faced with the\nsolving such problems. It would be hard to fudge that the role of iterative\napproximation of fixed points have played in the recent progress of\nnonlinear science. Indeed, the phrase iterative approximation has been\nintroduced to describe repetition-based researches into nonlinear problems\nthat are inaccessible to analytic methods. 
For this reason, the iterative\napproximation of fixed points has become one of the major and basic tools in\nthe theory of equations, and as a result, numerous iterative methods have\nbeen introduced or improved and studied for many years in detail from\nvarious points of aspects by a wide audience of researchers, see, [1-14,\n16-42, 44, 45].\n\nIn this paper, we show that a Picard-S iteration method \\cite{Gursoy} can be\nused to approximate fixed point of weak-contraction mappings. Also, we show\nthat this iteration method is equivalent and converges faster than CR\niteration method \\cite{CR} for the aforementioned class of mappings.\nFurthermore, by providing an example, it is shown that the Picard-S\niteration method converges faster than CR iteration method and hence also\nfaster than all Picard \\cite{Picard}, Mann \\cite{Mann}, Ishikawa \\cit\n{Ishikawa}, Noor \\cite{Noor}, SP \\cite{SP}, S \\cite{S} and some other\niteration methods in the existing literature when applied to\nweak-contraction mappings. Finally, a data dependence result is proven for\nfixed point of weak-contraction mappings with the help of the Picard-S\niteration method.\n\nThroughout this paper the set of all positive integers and zero is shown by \n\\mathbb{N}\n$. Let $B$ be a Banach space, $D$ be a nonempty closed convex subset of $B$\nand $T$ a self-map of $D$. An element $x_{\\ast }$ of $D$ is called a fixed\npoint of $T$ if and only if $Tx_{\\ast }=x_{\\ast }$. The set of all fixed\npoint of $T$ denoted by $F_{T}$. Let $\\left\\{ a_{n}^{i}\\right\\}\n_{n=0}^{\\infty }$, $i\\in \\left\\{ 0,1,2\\right\\} $ be real sequences in $\\left[\n0,1\\right] $ satisfying certain control condition(s).\n\nRenowned Picard iteration method \\cite{Picard} is formulated as follo\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\np_{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \np_{n+1}=Tp_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn3}\n\\end{equation\nand generally used to approximate fixed points of contraction mappings\nsatisfying: for all $x$, $y\\in B$ there exists a $\\delta \\in \\left(\n0,1\\right) $ such tha\n\\begin{equation}\n\\left\\Vert Tx-Ty\\right\\Vert \\leq \\delta \\left\\Vert x-y\\right\\Vert \\text{.}\n\\label{eqn4}\n\\end{equation\nThe following iteration methods are known as Noor \\cite{Noor} and SP \\cit\n{SP} iterations, respectively\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\n\\omega _{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ } \\\\ \n\\omega _{n+1}=\\left( 1-a_{n}^{0}\\right) \\omega _{n}+a_{n}^{0}T\\varpi _{n\n\\text{, \\ \\ } \\\\ \n\\varpi _{n}=\\left( 1-a_{n}^{1}\\right) \\omega _{n}+a_{n}^{1}T\\rho _{n}\\text{, \n} \\\\ \n\\rho _{n}=\\left( 1-a_{n}^{2}\\right) \\omega _{n}+a_{n}^{2}T\\omega _{n}\\text{, \n}n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn5}\n\\end{equation\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nq_{0}\\in D\\text{,\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ } \\\\ \nq_{n+1}=\\left( 1-a_{n}^{0}\\right) r_{n}+a_{n}^{0}Tr_{n}\\text{, \\ \\ } \\\\ \nr_{n}=\\left( 1-a_{n}^{1}\\right) s_{n}+a_{n}^{1}Ts_{n}\\text{,} \\\\ \ns_{n}=\\left( 1-a_{n}^{2}\\right) q_{n}+a_{n}^{2}Tq_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. 
\\label{eqn6}\n\\end{equation}\n\n\\begin{remark}\n(i) If $a_{n}^{2}=0$ for each $n\\in \n\\mathbb{N}\n$ , then the Noor iteration method reduces to iterative method of Ishikawa \n\\cite{Ishikawa}.\n\n(ii) If $a_{n}^{2}=0$ for each $n\\in \n\\mathbb{N}\n$ , then the SP iteration method reduces to iterative method of Thianwan \n\\cite{iam}. \n\n(iii) When $a_{n}^{1}=$ $a_{n}^{2}=0$ for each $n\\in \n\\mathbb{N}\n$, then both Noor and SP iteration methods reduce to an iteration method due\nto Mann \\cite{Mann}. \n\\end{remark}\n\nRecently, G\\\"{u}rsoy and Karakaya \\cite{Gursoy} introduced a Picard-S\niterative scheme as follows: \n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nx_{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \nx_{n+1}=Ty_{n}\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ } \\\\ \ny_{n}=\\left( 1-a_{n}^{1}\\right) Tx_{n}+a_{n}^{1}Tz_{n}\\text{, \\ \\ \\ \\ \\ \\ }\n\\\\ \nz_{n}=\\left( 1-a_{n}^{2}\\right) x_{n}+a_{n}^{2}Tx_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn7}\n\\end{equation\nThe following definitions and lemmas will be needed in obtaining the main\nresults of this article.\n\n\\begin{definition}\n\\cite{Berinde} Let $\\left\\{ a_{n}\\right\\} _{n=0}^{\\infty }$ and $\\left\\{\nb_{n}\\right\\} _{n=0}^{\\infty }$ be two sequences of real numbers with limits \n$a$ and $b$, respectively. Assume that there exist\n\\begin{equation}\n\\underset{n\\rightarrow \\infty }{\\lim }\\frac{\\left\\vert a_{n}-a\\right\\vert }\n\\left\\vert b_{n}-b\\right\\vert }=l\\text{.} \\label{eqn8}\n\\end{equation}\n\n(i) If $l=0$, the we say that $\\left\\{ a_{n}\\right\\} _{n=0}^{\\infty }$\nconverges faster to $a$ than $\\left\\{ b_{n}\\right\\} _{n=0}^{\\infty }$ to $b$.\n\n(ii) If $00$ we have \n\\begin{equation}\n\\left\\Vert Tx-\\widetilde{T}x\\right\\Vert \\leq \\varepsilon . \\label{eqn12}\n\\end{equation}\n\\end{definition}\n\n\\begin{lemma}\n\\cite{Weng}Let $\\left\\{ \\beta _{n}\\right\\} _{n=0}^{\\infty }$ and $\\left\\{\n\\rho _{n}\\right\\} _{n=0}^{\\infty }$ be nonnegative real sequences satisfying\nthe following inequality\n\\begin{equation}\n\\beta _{n+1}\\leq \\left( 1-\\lambda _{n}\\right) \\beta _{n}+\\rho _{n}\\text{,}\n\\label{eqn13}\n\\end{equation\nwhere $\\lambda _{n}\\in \\left( 0,1\\right) $, for all $n\\geq n_{0}$, \n\\dsum\\nolimits_{n=1}^{\\infty }\\lambda _{n}=\\infty $, and $\\frac{\\rho _{n}}\n\\lambda _{n}}\\rightarrow 0$ as $n\\rightarrow \\infty $. Then \n\\lim_{n\\rightarrow \\infty }\\beta _{n}=0$.\n\\end{lemma}\n\n\\begin{lemma}\n\\cite{Data Is 2} Let $\\left\\{ \\beta _{n}\\right\\} _{n=0}^{\\infty }$ be a\nnonnegative sequence for which one assumes there exists $n_{0}\\in \n\\mathbb{N}\n$, such that for all $n\\geq n_{0}$ one has satisfied the inequality \n\\begin{equation}\n\\beta _{n+1}\\leq \\left( 1-\\mu _{n}\\right) \\beta _{n}+\\mu _{n}\\gamma _{n\n\\text{,} \\label{eqn15}\n\\end{equation\nwhere $\\mu _{n}\\in \\left( 0,1\\right) ,$ for all $n\\in \n\\mathbb{N}\n$, $\\sum\\limits_{n=0}^{\\infty }\\mu _{n}=\\infty $ and $\\gamma _{n}\\geq 0$, \n\\forall n\\in \n\\mathbb{N}\n$. Then the following inequality holds \n\\begin{equation}\n0\\leq \\lim \\sup_{n\\rightarrow \\infty }\\beta _{n}\\leq \\lim \\sup_{n\\rightarrow\n\\infty }\\gamma _{n}. 
\\label{eqn16}\n\\end{equation}\n\\end{lemma}\n\n\\section{Main Results}\n\n\\begin{theorem}\nLet $T:D\\rightarrow D$ be a weak-contraction map satisfying condition (11)\nwith $F_{T}\\neq \\emptyset $ and $\\left\\{ x_{n}\\right\\} _{n=0}^{\\infty }$ an\niterative sequence defined by (7) with real sequences $\\left\\{\na_{n}^{i}\\right\\} _{n=0}^{\\infty }$, $i\\in \\left\\{ 1,2\\right\\} $ in $\\left[\n0,1\\right] $ satisfying $\\sum_{k=0}^{\\infty }a_{k}^{1}a_{k}^{2}=\\infty $.\nThen $\\ \\left\\{ x_{n}\\right\\} _{n=0}^{\\infty }$ converges to a unique fixed\npoint $u^{\\ast }$of $T$.\n\\end{theorem}\n\n\\begin{proof}\nUniqueness of $u^{\\ast }$ comes from condition (11). Using Picard-S\niterative scheme (7) and condition (11), we obtai\n\\begin{eqnarray}\n\\left\\Vert z_{n}-u^{\\ast }\\right\\Vert &\\leq &\\left( 1-a_{n}^{2}\\right)\n\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert +a_{n}^{2}\\left\\Vert Tx_{n}-Tu^{\\ast\n}\\right\\Vert \\notag \\\\\n&\\leq &\\left( 1-a_{n}^{2}\\right) \\left\\Vert x_{n}-u^{\\ast }\\right\\Vert\n+a_{n}^{2}\\delta \\left\\Vert x_{n}-u^{\\ast }\\right\\Vert +a_{n}^{2}L\\left\\Vert\nu^{\\ast }-Tu^{\\ast }\\right\\Vert \\notag \\\\\n&=&\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-u^{\\ast }\\right\\Vert \\text{,} \\label{eqn17}\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert y_{n}-u^{\\ast }\\right\\Vert &\\leq &\\left( 1-a_{n}^{1}\\right)\n\\left\\Vert Tx_{n}-Tu^{\\ast }\\right\\Vert +a_{n}^{1}\\left\\Vert Tz_{n}-Tu^{\\ast\n}\\right\\Vert \\notag \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\delta \\left\\Vert x_{n}-u^{\\ast\n}\\right\\Vert +a_{n}^{1}\\delta \\left\\Vert z_{n}-u^{\\ast }\\right\\Vert \\text{,}\n\\label{eqn18}\n\\end{eqnarray\n\\begin{equation}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert \\leq \\delta \\left\\Vert y_{n}-u^{\\ast\n}\\right\\Vert \\text{.} \\label{eqn19}\n\\end{equation\nCombining (16), (17) and (18\n\\begin{equation}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert \\leq \\delta ^{2}\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}-u^{\\ast\n}\\right\\Vert \\text{.} \\label{eqn20}\n\\end{equation\nBy inductio\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert &\\leq &\\delta ^{2\\left( n+1\\right)\n}\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right)\n\\right] \\left\\Vert x_{0}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\leq &\\delta ^{2\\left( n+1\\right) }\\left\\Vert x_{0}-u^{\\ast }\\right\\Vert\n^{n+1}e^{-\\left( 1-\\delta \\right) \\dsum\\nolimits_{k=0}^{n}a_{k}^{1}a_{k}^{2}\n\\text{.} \\label{eqn21}\n\\end{eqnarray\nSince $\\sum_{k=0}^{\\infty }a_{k}^{1}a_{k}^{2}=\\infty $\n\\begin{equation}\ne^{-\\left( 1-\\delta \\right)\n\\dsum\\nolimits_{k=0}^{n}a_{k}^{1}a_{k}^{2}}\\rightarrow 0\\text{ as \nn\\rightarrow \\infty \\text{,} \\label{eqn22}\n\\end{equation\nwhich implies $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast\n}\\right\\Vert $.\n\\end{proof}\n\n\\begin{theorem}\nLet $T:D\\rightarrow D$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $\nbe as in Theorem 1 and $\\{q_{n}\\}_{n=0}^{\\infty }$, $\\{x_{n}\\}_{n=0}^{\\infty\n}$ two iterative sequences defined by SP (6) and Picard-S (7) iteration\nmethods with real sequences $\\left\\{ a_{n}^{i}\\right\\} _{n=0}^{\\infty }$, \ni\\in \\left\\{ 0,1,2\\right\\} $ in $\\left[ 0,1\\right] $ satisfying \n\\sum_{k=0}^{n}a_{k}^{1}a_{k}^{2}=\\infty $. 
Then the following are equivalent:\n\n(i) $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert =0$;\n\n(ii) $\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert =0$.\n\n\\begin{proof}\n(i)$\\Rightarrow $(ii): It follows from (6), (7), and condition (11) tha\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-q_{n+1}\\right\\Vert &=&\\left\\Vert \\left(\n1-a_{n}^{0}\\right) \\left( Ty_{n}-r_{n}\\right) +a_{n}^{0}\\left(\nTy_{n}-Tr_{n}\\right) \\right\\Vert \\label{eqn23} \\\\\n&\\leq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert Ty_{n}-r_{n}\\right\\Vert\n+a_{n}^{0}\\left\\Vert Ty_{n}-Tr_{n}\\right\\Vert \\notag \\\\\n&\\leq &\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\Vert\ny_{n}-r_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert y_{n}-r_{n}\\right\\Vert &=&\\left\\Vert \\left( 1-a_{n}^{1}\\right)\n\\left( Tx_{n}-s_{n}\\right) +a_{n}^{1}\\left( Tz_{n}-Ts_{n}\\right) \\right\\Vert \n\\label{eqn24} \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\left\\Vert Tx_{n}-s_{n}\\right\\Vert\n+a_{n}^{1}\\left\\Vert Tz_{n}-Ts_{n}\\right\\Vert \\notag \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\left\\Vert Tx_{n}-s_{n}\\right\\Vert\n+a_{n}^{1}\\delta \\left\\Vert z_{n}-s_{n}\\right\\Vert +a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert Tx_{n}-s_{n}\\right\\Vert &=&\\left\\Vert \\left( 1-a_{n}^{2}\\right)\n\\left( Tx_{n}-q_{n}\\right) +a_{n}^{2}\\left( Tx_{n}-Tq_{n}\\right) \\right\\Vert \n\\label{eqn25} \\\\\n&\\leq &\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-q_{n}\\right\\Vert +\\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n\\left\\Vert x_{n}-Tx_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert z_{n}-s_{n}\\right\\Vert &\\leq &\\left( 1-a_{n}^{2}\\right)\n\\left\\Vert x_{n}-q_{n}\\right\\Vert +a_{n}^{2}\\left\\Vert\nTx_{n}-Tq_{n}\\right\\Vert \\label{eqn26} \\\\\n&\\leq &\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-q_{n}\\right\\Vert +a_{n}^{2}L\\left\\Vert x_{n}-Tx_{n}\\right\\Vert \\text{.}\n\\notag\n\\end{eqnarray\nCombining (22), (23), (24), and (25\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-q_{n+1}\\right\\Vert &\\leq &\\left[ 1-a_{n}^{0}\\left(\n1-\\delta \\right) \\right] \\left[ 1-a_{n}^{1}\\left( 1-\\delta \\right) \\right]\n\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-q_{n}\\right\\Vert \\label{eqn27} \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\{ \\left(\n1-a_{n}^{1}\\right) \\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n+a_{n}^{1}a_{n}^{2}\\delta L\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \n\\notag \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{.} \\notag\n\\end{eqnarray\nIt follows from the facts $\\delta \\in \\left( 0,1\\right) $ and $a_{n}^{i}\\in\n\\left[ 0,1\\right] $, $\\forall n\\in \n\\mathbb{N}\n$, $i\\in \\left\\{ 0,1,2\\right\\} $ tha\n\\begin{equation}\n\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left[ 1-a_{n}^{1}\\left(\n1-\\delta \\right) \\right] \\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right]\n<1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\text{.} \\label{eqn28}\n\\end{equation\nHence, inequality (26) 
become\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-q_{n+1}\\right\\Vert &\\leq &\\left[ 1-a_{n}^{1}a_{n}^{2\n\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}-q_{n}\\right\\Vert \n\\label{eqn29} \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\{ \\left(\n1-a_{n}^{1}\\right) \\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n+a_{n}^{1}a_{n}^{2}\\delta L\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \n\\notag \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{.} \\notag\n\\end{eqnarray\nDenote tha\n\\begin{eqnarray}\n\\beta _{n} &:&=\\left\\Vert x_{n}-q_{n}\\right\\Vert \\text{, \\ \\ } \\label{eqn30}\n\\\\\n\\lambda _{n} &:&=a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\in \\left(\n0,1\\right) \\text{, \\ } \\notag \\\\\n\\rho _{n} &:&=\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\{\n\\left( 1-a_{n}^{1}\\right) \\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n+a_{n}^{1}a_{n}^{2}\\delta L\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \n\\notag \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{.} \\notag\n\\end{eqnarray\nSince $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert =0$\nand $Tu^{\\ast }=u^{\\ast }\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert z_{n}-Tz_{n}\\right\\Vert =0\\text{,}\n\\label{eqn31}\n\\end{equation\nwhich implies $\\frac{\\rho _{n}}{\\lambda _{n}}\\rightarrow 0$ as $n\\rightarrow\n\\infty $. Therefore, inequality (28) perform all assumptions in Lemma 1 and\nthus we obtain $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-q_{n}\\right\\Vert\n=0$. 
Sinc\n\\begin{equation}\n\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert \\leq \\left\\Vert x_{n}-q_{n}\\right\\Vert\n+\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert \\rightarrow 0\\text{ as }n\\rightarrow\n\\infty \\text{,} \\label{eqn32}\n\\end{equation\n$\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert =0$.\n\n(ii)$\\Rightarrow $(i): It follows from (6), (7), and condition (11) tha\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-x_{n+1}\\right\\Vert &=&\\left\\Vert\nr_{n}-Ty_{n}+a_{n}^{0}\\left( Tr_{n}-r_{n}\\right) \\right\\Vert \\label{eqn33}\n\\\\\n&\\leq &\\delta \\left\\Vert r_{n}-y_{n}\\right\\Vert +\\left( 1+a_{n}^{0}+L\\right)\n\\left\\Vert r_{n}-Tr_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert r_{n}-y_{n}\\right\\Vert &\\leq &\\left( 1-a_{n}^{1}\\right)\n\\left\\Vert s_{n}-Tx_{n}\\right\\Vert +a_{n}^{1}\\left\\Vert\nTs_{n}-Tz_{n}\\right\\Vert \\label{eqn34} \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\left\\Vert s_{n}-Tx_{n}\\right\\Vert\n+a_{n}^{1}\\delta \\left\\Vert s_{n}-z_{n}\\right\\Vert +a_{n}^{1}L\\left\\Vert\ns_{n}-Ts_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert s_{n}-Tx_{n}\\right\\Vert &\\leq &\\left\\Vert\nTs_{n}-Tx_{n}\\right\\Vert +\\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\label{eqn35}\n\\\\\n&\\leq &\\delta \\left\\Vert s_{n}-x_{n}\\right\\Vert +\\left( 1+L\\right)\n\\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\notag \\\\\n&\\leq &\\delta \\left\\Vert q_{n}-x_{n}\\right\\Vert +\\delta a_{n}^{2}\\left\\Vert\nTq_{n}-q_{n}\\right\\Vert +\\left( 1+L\\right) \\left\\Vert\ns_{n}-Ts_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert s_{n}-z_{n}\\right\\Vert &\\leq &\\left( 1-a_{n}^{2}\\right)\n\\left\\Vert q_{n}-x_{n}\\right\\Vert +a_{n}^{2}\\left\\Vert\nTq_{n}-Tx_{n}\\right\\Vert \\label{eqn36} \\\\\n&\\leq &\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nq_{n}-x_{n}\\right\\Vert +a_{n}^{2}L\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\text{.}\n\\notag\n\\end{eqnarray\nCombining (32), (33), (34), and (35\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-x_{n+1}\\right\\Vert &\\leq &\\delta ^{2}\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nq_{n}-x_{n}\\right\\Vert \\label{eqn37} \\\\\n&&+\\delta ^{2}a_{n}^{2}\\left[ 1-a_{n}^{1}\\left( 1-L\\right) \\right]\n\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1+a_{n}^{0}+L\\right) \\left\\Vert r_{n}-Tr_{n}\\right\\Vert +\\delta\n\\left( 1-a_{n}^{1}+L\\right) \\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\text{.} \n\\notag\n\\end{eqnarray\nSince $\\delta \\in \\left( 0,1\\right) \n\\begin{equation}\n\\delta ^{2}\\left[ 1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right]\n<1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\text{.} \\label{eqn38}\n\\end{equation\nHence, inequality (36) become\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-x_{n+1}\\right\\Vert &\\leq &\\left[ 1-a_{n}^{1}a_{n}^{2\n\\left( 1-\\delta \\right) \\right] \\left\\Vert q_{n}-x_{n}\\right\\Vert \n\\label{eqn39} \\\\\n&&+\\delta ^{2}a_{n}^{2}\\left[ 1-a_{n}^{1}\\left( 1-L\\right) \\right]\n\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1+a_{n}^{0}+L\\right) \\left\\Vert r_{n}-Tr_{n}\\right\\Vert +\\delta\n\\left( 1-a_{n}^{1}+L\\right) \\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\text{.} \n\\notag\n\\end{eqnarray\nDenote tha\n\\begin{eqnarray}\n\\beta _{n} &:&=\\left\\Vert q_{n}-x_{n}\\right\\Vert \\text{, \\ \\ } \\label{eqn40}\n\\\\\n\\lambda _{n} &:&=a_{n}^{1}a_{n}^{2}\\left( 1-\\delta 
\\right) \\in \\left(\n0,1\\right) \\text{, \\ } \\notag \\\\\n\\rho _{n} &:&=\\delta ^{2}a_{n}^{2}\\left[ 1-a_{n}^{1}\\left( 1-L\\right) \\right]\n\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1+a_{n}^{0}+L\\right) \\left\\Vert r_{n}-Tr_{n}\\right\\Vert +\\delta\n\\left( 1-a_{n}^{1}+L\\right) \\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\text{.} \n\\notag\n\\end{eqnarray\nSince $\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert =0$\nand $Tu^{\\ast }=u^{\\ast }\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-Tq_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert r_{n}-Tr_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert s_{n}-Ts_{n}\\right\\Vert =0\\text{,}\n\\label{eqn41}\n\\end{equation\nwhich implies $\\frac{\\rho _{n}}{\\lambda _{n}}\\rightarrow 0$ as $n\\rightarrow\n\\infty $. Therefore, inequality (38) perform all assumptions in Lemma 1 and\nthus we obtain $\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-x_{n}\\right\\Vert\n=0$. Sinc\n\\begin{equation}\n\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert \\leq \\left\\Vert q_{n}-x_{n}\\right\\Vert\n+\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert \\rightarrow 0\\text{ as }n\\rightarrow\n\\infty \\text{,} \\label{eqn42}\n\\end{equation\n$\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert =0$.\n\\end{proof}\n\\end{theorem}\n\nTaking R. Chugh et al.'s result (\\cite{CR}, Corollary 3.2) into account,\nTheorem 2 leads to the following corollary under weaker assumption:\n\n\\begin{corollary}\nLet $T:D\\rightarrow D$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $\nbe as in Theorem 1. Then the followings are equivalent:\n\n1)The Picard iteration method (3) converges to $u^{\\ast }$,\n\n2) The Mann iteration method \\cite{Mann} converges to $u^{\\ast }$,\n\n3) The Ishikawa iteration method \\cite{Ishikawa} converges to $u^{\\ast }$,\n\n4) The Noor iteration method (5) converges to $u^{\\ast }$,\n\n5) S-iteration method \\cite{S} converges to $u^{\\ast }$,\n\n6) The SP-iteration method (6) converges to $u^{\\ast }$,\n\n7) CR-iteration method \\cite{CR} converges to $u^{\\ast }$,\n\n8) The Picard-S iteration method (7) converges to $u^{\\ast }$.\n\\end{corollary}\n\n\\begin{theorem}\nLet $T:D\\rightarrow D$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $\nbe as in Theorem 1. 
Suppose that $\\left\\{ \\omega _{n}\\right\\} _{n=0}^{\\infty\n}$, $\\left\\{ q_{n}\\right\\} _{n=0}^{\\infty }$ and $\\left\\{ x_{n}\\right\\}\n_{n=0}^{\\infty }$ are iterative sequences, respectively, defined by Noor\n(5), SP (6) and Picard-S (7) iterative schemes with real sequences $\\left\\{\na_{n}^{i}\\right\\} _{n=0}^{\\infty }\\subset \\left[ 0,1\\right] $, $i\\in \\left\\{\n0\\text{,}1\\text{,}2\\right\\} $ satisfying\n\n(i) $0\\leq a_{n}^{i}<\\frac{1}{1+\\delta }$,\n\n(ii) $\\lim_{n\\rightarrow \\infty }a_{n}^{i}=0$.\n\nThen the iterative sequence defined by (7) converges faster than the\niterative sequences defined by (5) and (6) to a unique fixed point of $T$,\nprovided that the initial point is the same for all iterations.\n\\end{theorem}\n\n\\begin{proof}\nFrom inequality (20), we hav\n\\begin{equation}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert \\leq \\delta ^{2\\left( n+1\\right)\n}\\left\\Vert x_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\dprod\\nolimits_{k=0}^{n}\\left[\n1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] \\label{eqn43}\n\\end{equation\nUsing (6) we obtai\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-u^{\\ast }\\right\\Vert &=&\\left\\Vert \\left(\n1-a_{n}^{0}\\right) r_{n}+a_{n}^{0}Tr_{n}-u^{\\ast }\\right\\Vert \\label{eqn44}\n\\\\\n&\\geq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert r_{n}-u^{\\ast }\\right\\Vert\n-a_{n}^{0}\\left\\Vert Tr_{n}-Tu^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left\\Vert\nr_{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left\\{ \\left(\n1-a_{n}^{1}\\right) \\left\\Vert s_{n}-u^{\\ast }\\right\\Vert -a_{n}^{1}\\delta\n\\left\\Vert s_{n}-u^{\\ast }\\right\\Vert \\right\\} \\notag \\\\\n&=&\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left[\n1-a_{n}^{1}\\left( 1+\\delta \\right) \\right] \\left\\Vert s_{n}-u^{\\ast\n}\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left[\n1-a_{n}^{1}\\left( 1+\\delta \\right) \\right] \\left\\{ \\left( 1-a_{n}^{2}\\right)\n\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert -a_{n}^{2}\\delta \\left\\Vert\nq_{n}-u^{\\ast }\\right\\Vert \\right\\} \\notag \\\\\n&=&\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left[\n1-a_{n}^{1}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{n}^{2}\\left( 1+\\delta\n\\right) \\right] \\left\\Vert q_{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\cdots \\notag \\\\\n&\\geq &\\left\\Vert q_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\prod\\limits_{k=0}^{n}\n\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] \n\\text{.} \\notag\n\\end{eqnarray\nUsing now (42) and (43\n\\begin{equation}\n\\frac{\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert }{\\left\\Vert q_{n+1}-u^{\\ast\n}\\right\\Vert }\\leq \\frac{\\delta ^{2\\left( n+1\\right) }\\left\\Vert\nx_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\dprod\\nolimits_{k=0}^{n}\\left[\n1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] }{\\left\\Vert\nq_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\prod\\limits_{k=0}^{n}\\left[\n1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left( 1+\\delta\n\\right) \\right] \\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] }\\text{.}\n\\label{eqn45}\n\\end{equation\nDefine \n\\begin{equation}\n\\theta _{n}=\\frac{\\delta ^{2\\left( n+1\\right) }\\dprod\\nolimits_{k=0}^{n\n\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 
1-\\delta \\right) \\right] }\n\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right]\n\\left[ 1-a_{k}^{1}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{k}^{2}\\left(\n1+\\delta \\right) \\right] }\\text{.} \\label{eqn46}\n\\end{equation\nBy the assumtio\n\\begin{eqnarray}\n&&\\lim_{n\\rightarrow \\infty }\\frac{\\theta _{n+1}}{\\theta _{n}} \\label{eqn47}\n\\\\\n&=&\\lim_{n\\rightarrow \\infty }\\frac{\\frac{\\delta ^{2\\left( n+2\\right)\n}\\dprod\\nolimits_{k=0}^{n+1}\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta\n\\right) \\right] }{\\dprod\\nolimits_{k=0}^{n+1}\\left[ 1-a_{k}^{0}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left( 1+\\delta \\right) \\right]\n\\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] }}{\\frac{\\delta ^{2\\left(\nn+1\\right) }\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{1}a_{k}^{2}\\left(\n1-\\delta \\right) \\right] }{\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left( 1+\\delta \\right) \\right]\n\\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] }} \\notag \\\\\n&=&\\lim_{n\\rightarrow \\infty }\\frac{\\delta ^{2}\\left[\n1-a_{n+1}^{1}a_{n+1}^{2}\\left( 1-\\delta \\right) \\right] }{\\left[\n1-a_{n+1}^{0}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{n+1}^{1}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{n+1}^{2}\\left( 1+\\delta \\right) \\right] \n} \\notag \\\\\n&=&\\delta ^{2}<1\\text{.} \\notag\n\\end{eqnarray\nIt thus follows from ratio test that $\\sum\\limits_{n=0}^{\\infty }\\theta\n_{n}<\\infty $. Hence, we have $\\lim_{n\\rightarrow \\infty }\\theta _{n}=0$\nwhich implies that the iterative sequence defined by (7) converges faster\nthan the iterative sequence defined by SP iteration method (6).\n\nUsing Noor iteration method (5), we ge\n\\begin{eqnarray}\n\\left\\Vert \\omega _{n+1}-u^{\\ast }\\right\\Vert &=&\\left\\Vert \\left(\n1-a_{n}^{0}\\right) \\omega _{n}+a_{n}^{0}T\\varpi _{n}-u^{\\ast }\\right\\Vert \n\\label{eqn48} \\\\\n&\\geq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert\n-a_{n}^{0}\\left\\Vert T\\varpi _{n}-Tu^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert\n-a_{n}^{0}\\delta \\left\\Vert \\varpi _{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}-a_{n}^{0}\\delta \\left( 1-a_{n}^{1}\\right) \\right]\n\\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert -a_{n}^{0}a_{n}^{1}\\delta\n^{2}\\left\\Vert \\rho _{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left\\{ 1-a_{n}^{0}-a_{n}^{0}\\delta \\left( 1-a_{n}^{1}\\right)\n-a_{n}^{0}a_{n}^{1}\\delta ^{2}\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right)\n\\right] \\right\\} \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left\\{ 1-a_{n}^{0}-a_{n}^{0}\\delta \\left[ 1-a_{n}^{1}\\left( 1-\\delta\n\\right) \\right] \\right\\} \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert \\notag\n\\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left\\Vert \\omega\n_{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\cdots \\notag \\\\\n&\\geq &\\left\\Vert \\omega _{0}-u^{\\ast }\\right\\Vert\n^{n+1}\\prod\\limits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right]\n\\text{.} \\notag\n\\end{eqnarray\nIt follows by (42) and (47) tha\n\\begin{equation}\n\\frac{\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert }{\\left\\Vert \\omega\n_{n+1}-x_{\\ast }\\right\\Vert }\\leq \\frac{\\delta ^{2\\left( n+1\\right)\n}\\left\\Vert 
x_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\dprod\\nolimits_{k=0}^{n}\\left[\n1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] }{\\left\\Vert \\omega\n_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\prod\\limits_{k=0}^{n}\\left[\n1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] }\\text{.} \\label{eqn49}\n\\end{equation\nDefine \n\\begin{equation}\n\\theta _{n}=\\frac{\\delta ^{2\\left( n+1\\right) }\\dprod\\nolimits_{k=0}^{n\n\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] }\n\\prod\\limits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] \n\\text{.} \\label{eqn50}\n\\end{equation\nBy the assumptio\n\\begin{eqnarray}\n\\lim_{n\\rightarrow \\infty }\\frac{\\theta _{n+1}}{\\theta _{n}}\n&=&\\lim_{n\\rightarrow \\infty }\\frac{\\delta ^{2}\\left[\n1-a_{n+1}^{1}a_{n+1}^{2}\\left( 1-\\delta \\right) \\right] }{\\left[\n1-a_{n+1}^{0}\\left( 1+\\delta \\right) \\right] } \\label{eqn51} \\\\\n&=&\\delta ^{2}<1\\text{.} \\notag\n\\end{eqnarray\nIt thus follows from ratio test that $\\sum\\limits_{n=0}^{\\infty }\\theta\n_{n}<\\infty $. Hence, we have $\\lim_{n\\rightarrow \\infty }\\theta _{n}=0$\nwhich implies that the iterative sequence defined by (7) converges faster\nthan the iterative sequence defined by Noor iteration method (5).\n\\end{proof}\n\nBy use of the following example due to \\cite{QR}, it was shown in (\\cite{CR\n, Example 4.1) that CR iterative method \\cite{CR} is faster than all of \\\nPicard (3), S \\cite{S}, Noor (5) and SP (6) iterative methods for a\nparticular class of \\ operators which is included in the class of\nweak-contraction mappings satisfying (11). In the following, for the sake of\nconsistent comparison, we will use the same example as that of \\ (\\cite{CR},\nExample 4.1) in order to compare the rates of convergence between the\nPicard-S iterative scheme (7) and the CR iteration method \\cite{CR} for\nweak-contraction mappings. In the following example, for convenience, we use\nthe notations $\\left( PS_{n}\\right) $ and $\\left( CR_{n}\\right) $ for the\niterative sequences associated to Picard-S (7) and CR \\cite{CR} iteration\nmethods, respectively.\n\n\\begin{example}\n\\cite{QR} Define a mapping $T:\\left[ 0,1\\right] \\rightarrow \\left[ 0,1\\right]\n$ as $Tx=\\frac{x}{2}$. Let $a_{n}^{0}=a_{n}^{1}=a_{n}^{2}=0$, for \nn=1,2,...,24$ and $a_{n}^{0}=a_{n}^{1}=a_{n}^{2}=\\frac{4}{\\sqrt{n}}$, for\nall $n\\geq 25$.\n\nIt can be seen easily that the mapping $T$ satisfies condition (11) with the\nunique fixed point $0\\in F_{T}$. Furthermore, it is easy to see that Example\n1 satisfies all the conditions of Theorem 1. Indeed, let $x_{0}\\neq 0$ be an\ninitial point for the iterative sequences $\\left( PS_{n}\\right) $ and \n\\left( CR_{n}\\right) $. 
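Purely as a numerical illustration, and not as part of the formal argument, the following \\texttt{Python} sketch generates the Picard-S iterates directly from (7) for $Tx=\\frac{x}{2}$ with $a_{n}^{1}=a_{n}^{2}=\\frac{4}{\\sqrt{n}}$ for $n\\geq 25$, while the $\\left( CR_{n}\\right) $ iterates are generated from the per-step factor displayed below rather than from a reimplementation of the CR scheme; the printed ratios decrease towards zero, in line with the bound obtained at the end of this example.
\\begin{verbatim}
# Numerical illustration only: Picard-S iterates from (7) versus CR
# iterates built from the per-step factor derived below, for Tx = x/2,
# a_n^1 = a_n^2 = 4/sqrt(n) and x_0 = 1.  The loop stops well before
# floating-point underflow becomes an issue.
import math

def T(x):
    return x / 2.0

ps = cr = 1.0                        # common initial point x_0 = 1
for n in range(25, 401):
    a = 4.0 / math.sqrt(n)
    z = (1 - a) * ps + a * T(ps)     # z_n in (7)
    y = (1 - a) * T(ps) + a * T(z)   # y_n in (7)
    ps = T(y)                        # x_{n+1} = T y_n
    cr *= 0.5 - 1.0 / math.sqrt(n) - 4.0 / n + 8.0 / (n * math.sqrt(n))
    if n in (50, 100, 200, 400):
        print(n, abs(ps) / abs(cr))
\\end{verbatim}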
Utilizing Picard-S (7) and CR \\cite{CR} iteration\nmethods we obtain \n\\begin{eqnarray}\nPS_{n} &=&\\frac{1}{2}\\left( \\frac{1}{2}-\\frac{4}{n}\\right) x_{n} \\notag \\\\\n&=&\\cdots \\notag \\\\\n&=&\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{4}-\\frac{2}{k}\\right) x_{0}\\text{\n} \\label{eqn52}\n\\end{eqnarray\n\\begin{eqnarray}\nCR_{n} &=&\\left( \\frac{1}{2}-\\frac{1}{\\sqrt{n}}-\\frac{4}{n}+\\frac{8}{n\\sqrt{\n}}\\right) x_{n} \\notag \\\\\n&=&\\cdots \\notag \\\\\n&=&\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{2}-\\frac{1}{\\sqrt{k}}-\\frac{4}{k}\n\\frac{8}{k\\sqrt{k}}\\right) x_{0}\\text{.} \\label{eqn53}\n\\end{eqnarray\nIt follows from (51) and (52) tha\n\\begin{equation*}\n\\frac{\\left\\vert PS_{n}-0\\right\\vert }{\\left\\vert CR_{n}-0\\right\\vert }\n\\frac{\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{4}-\\frac{2}{k}\\right) x_{0}}\n\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{2}-\\frac{1}{\\sqrt{k}}-\\frac{4}{k}\n\\frac{8}{k\\sqrt{k}}\\right) x_{0}}\n\\end{equation*\n\\begin{equation*}\n\\text{ \\ \\ \\ \\ \\ \\ \\ }=\\dprod\\limits_{k=25}^{n}\\frac{\\frac{1}{4}-\\frac{2}{k\n}{\\frac{1}{2}-\\frac{1}{\\sqrt{k}}-\\frac{4}{k}+\\frac{8}{k\\sqrt{k}}}\n\\end{equation*\n\\begin{equation*}\n\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }=\\dprod\\limits_{k=25}^{n}\\frac\n\\left( k-8\\right) \\sqrt{k}}{2\\left( k\\sqrt{k}-2k-8\\sqrt{k}+16\\right) }\n\\end{equation*\n\\begin{eqnarray}\n\\text{ \\ \\ \\ \\ \\ \\ } &=&\\dprod\\limits_{k=25}^{n}\\frac{\\left( k-8\\right) \n\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) \\left( k-8\\right) } \\notag \\\\\n&=&\\dprod\\limits_{k=25}^{n}\\frac{\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) }\\text{\n} \\label{eqn54}\n\\end{eqnarray\nFor all $k\\geq 25$, we hav\n\\begin{eqnarray}\n\\frac{\\left( k-2\\right) \\left( \\sqrt{k}-4\\right) }{4} &>&1 \\notag \\\\\n&\\Rightarrow &\\left( k-2\\right) \\left( \\sqrt{k}-4\\right) >4 \\notag \\\\\n&\\Rightarrow &k\\left( \\sqrt{k}-4\\right) >2\\left( \\sqrt{k}-2\\right) \\notag\n\\\\\n&\\Rightarrow &\\frac{\\sqrt{k}-4}{2\\left( \\sqrt{k}-2\\right) }>\\frac{1}{k} \n\\notag \\\\\n&\\Rightarrow &\\frac{\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) }<1-\\frac{1}{k}\\text\n,} \\label{eqn55}\n\\end{eqnarray\nwhich yield\n\\begin{equation}\n\\frac{\\left\\vert PS_{n}-0\\right\\vert }{\\left\\vert CR_{n}-0\\right\\vert \n=\\dprod\\limits_{k=25}^{n}\\frac{\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) \n<\\dprod\\limits_{k=25}^{n}\\left( 1-\\frac{1}{k}\\right) =\\frac{24}{n}\\text{.}\n\\label{eqn56}\n\\end{equation\nTherefore, we hav\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\frac{\\left\\vert PS_{n}-0\\right\\vert }{\\left\\vert\nCR_{n}-0\\right\\vert }=0\\text{,} \\label{eqn57}\n\\end{equation\nwhich implies that the Picard-S iterative scheme (7) is faster than the CR\niteration method \\cite{CR}$.$\n\\end{example}\n\nHaving regard to R. Chugh et al.'s result (\\cite{CR}, Example 4.1), L.B.\nCiric et al.'s results \\cite{Ciric} and Example 1 above, we conclude that\nPicard-S iteration method is faster than all Picard (3), Mann \\cite{Mann},\nIshikawa \\cite{Ishikawa}, S \\cite{S}, Noor (5) and SP (6) iterative methods.\n\nWe are now able to establish the following data dependence result.\n\n\\begin{theorem}\nLet $T$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $ be as in\nTheorem 1 and $\\widetilde{T}$ an approximate operator of $T$. 
Let $\\left\\{\nx_{n}\\right\\} _{n=0}^{\\infty }$ be an iterative sequence generated by (7)\nfor $T$ and define an iterative sequence $\\left\\{ \\widetilde{x}_{n}\\right\\}\n_{n=0}^{\\infty }$ as follow\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\n\\widetilde{x}_{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \n\\widetilde{x}_{n+1}=\\widetilde{T}\\widetilde{y}_{n}\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \n\\widetilde{y}_{n}=\\left( 1-a_{n}^{1}\\right) \\widetilde{T}\\widetilde{x\n_{n}+a_{n}^{1}\\widetilde{T}\\widetilde{z}_{n}\\text{, \\ \\ \\ \\ \\ \\ } \\\\ \n\\widetilde{z}_{n}=\\left( 1-a_{n}^{2}\\right) \\widetilde{x}_{n}+a_{n}^{2\n\\widetilde{T}\\widetilde{x}_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn58}\n\\end{equation\nwhere $\\left\\{ a_{n}^{i}\\right\\} _{n=0}^{\\infty }$, $i\\in \\left\\{\n1,2\\right\\} $ be real sequences in $\\left[ 0,1\\right] $ satisfying (i) \n\\frac{1}{2}\\leq a_{n}^{1}a_{n}^{2}$ for all $n\\in \n\\mathbb{N}\n$, and (ii) $\\sum\\limits_{n=0}^{\\infty }a_{n}^{1}a_{n}^{2}=\\infty $. If \n\\widetilde{T}\\widetilde{u}^{\\ast }=\\widetilde{u}^{\\ast }$ such that \n\\widetilde{x}_{n}\\rightarrow \\widetilde{u}^{\\ast }$ as $n\\rightarrow \\infty \n, then we hav\n\\begin{equation}\n\\left\\Vert u^{\\ast }-\\widetilde{u}^{\\ast }\\right\\Vert \\leq \\frac\n5\\varepsilon }{1-\\delta }\\text{,} \\label{eqn59}\n\\end{equation\nwhere $\\varepsilon >0$ is a fixed number.\n\\end{theorem}\n\n\\begin{proof}\nIt follows from (7), (11), (12), and (57) tha\n\\begin{eqnarray}\n\\left\\Vert z_{n}-\\widetilde{z}_{n}\\right\\Vert &\\leq &\\left(\n1-a_{n}^{2}\\right) \\left\\Vert x_{n}-\\widetilde{x}_{n}\\right\\Vert\n+a_{n}^{2}\\left\\Vert Tx_{n}-\\widetilde{T}\\widetilde{x}_{n}\\right\\Vert \\notag\n\\\\\n&\\leq &\\left[ 1-a_{n}^{2}+a_{n}^{2}\\delta \\right] \\left\\Vert x_{n}\n\\widetilde{x}_{n}\\right\\Vert +a_{n}^{2}L\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n+a_{n}^{2}\\varepsilon \\text{,} \\label{eqn60}\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert y_{n}-\\widetilde{y}_{n}\\right\\Vert &\\leq &\\left(\n1-a_{n}^{1}\\right) \\delta \\left\\Vert x_{n}-\\widetilde{x}_{n}\\right\\Vert\n+a_{n}^{1}\\delta \\left\\Vert z_{n}-\\widetilde{z}_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1-a_{n}^{1}\\right) L\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n+a_{n}^{1}L\\left\\Vert z_{n}-Tz_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1-a_{n}^{1}\\right) \\varepsilon +a_{n}^{1}\\varepsilon \\text{,}\n\\label{eqn61}\n\\end{eqnarray\n\\begin{equation}\n\\left\\Vert x_{n+1}-\\widetilde{x}_{n+1}\\right\\Vert \\leq \\delta \\left\\Vert\ny_{n}-\\widetilde{y}_{n}\\right\\Vert +L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n+\\varepsilon \\text{.} \\label{eqn62}\n\\end{equation\nFrom the relations (59), (60), and (61\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-\\widetilde{x}_{n+1}\\right\\Vert &\\leq &\\delta ^{2}\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}\n\\widetilde{x}_{n}\\right\\Vert \\notag \\\\\n&&+\\left\\{ a_{n}^{1}a_{n}^{2}\\delta ^{2}L+\\left( 1-a_{n}^{1}\\right) \\delta\nL\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \\notag \\\\\n&&+L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert +a_{n}^{1}\\delta L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert \\notag \\\\\n&&+a_{n}^{1}a_{n}^{2}\\delta ^{2}\\varepsilon +\\left( 1-a_{n}^{1}\\right)\n\\delta \\varepsilon 
+a_{n}^{1}\\delta \\varepsilon +\\varepsilon \\text{.}\n\\label{eqn63}\n\\end{eqnarray\nSince $a_{n}^{1}$, $a_{n}^{2}\\in \\left[ 0,1\\right] $ $\\ $and $\\frac{1}{2\n\\leq a_{n}^{1}a_{n}^{2}$ for all $n\\in \n\\mathbb{N}\n\n\\begin{equation}\n1-a_{n}^{1}a_{n}^{2}\\leq a_{n}^{1}a_{n}^{2}\\text{,} \\label{eqn64}\n\\end{equation\n\\begin{equation}\n1-a_{n}^{1}\\leq 1-a_{n}^{1}a_{n}^{2}\\leq a_{n}^{1}a_{n}^{2}\\text{,}\n\\label{eqn65}\n\\end{equation\n\\begin{equation}\n1\\leq 2a_{n}^{1}a_{n}^{2}\\text{.} \\label{eqn66}\n\\end{equation\nUse of the facts $\\delta $, $\\delta ^{2}\\in \\left( 0,1\\right) $, (63), (64),\nand (65) in (62) yields \n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-\\widetilde{x}_{n+1}\\right\\Vert &\\leq &\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}\n\\widetilde{x}_{n}\\right\\Vert \\notag \\\\\n&&+a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\left\\{ \\frac{L\\delta \\left(\n1+\\delta \\right) \\left\\Vert x_{n}-Tx_{n}\\right\\Vert }{1-\\delta }\\right. \n\\notag \\\\\n&&\\left. +\\frac{2L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert +2\\delta L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +5\\varepsilon }{1-\\delta }\\right\\} \\text{.}\n\\label{eqn67}\n\\end{eqnarray\nDefin\n\\begin{eqnarray}\n\\beta _{n} &:&=\\left\\Vert x_{n}-\\widetilde{x}_{n}\\right\\Vert \\text{,}\n\\label{eqn68} \\\\\n\\mu _{n} &:&=a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\in \\left( 0,1\\right) \n\\text{,} \\notag \\\\\n\\gamma _{n} &:&=\\frac{L\\delta \\left( 1+\\delta \\right) \\left\\Vert\nx_{n}-Tx_{n}\\right\\Vert +2L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert +2\\delta\nL\\left\\Vert z_{n}-Tz_{n}\\right\\Vert +5\\varepsilon }{1-\\delta }\\geq 0\\text{.}\n\\notag\n\\end{eqnarray\nHence, the inequality (66) perform all assumptions in Lemma 2 and thus an\napplication of Lemma 2 to (66) yields \n\\begin{eqnarray}\n0 &\\leq &\\lim \\sup_{n\\rightarrow \\infty }\\left\\Vert x_{n}-\\widetilde{x\n_{n}\\right\\Vert \\label{eqn69} \\\\\n&\\leq &\\lim \\sup_{n\\rightarrow \\infty }\\frac{L\\delta \\left( 1+\\delta \\right)\n\\left\\Vert x_{n}-Tx_{n}\\right\\Vert +2L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n+2\\delta L\\left\\Vert z_{n}-Tz_{n}\\right\\Vert +5\\varepsilon }{1-\\delta }. \n\\notag\n\\end{eqnarray\nWe know from Theorem 1 that $\\lim_{n\\rightarrow \\infty }x_{n}=u^{\\ast }$ and\nsince $Tu^{\\ast }=u^{\\ast }\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert z_{n}-Tz_{n}\\right\\Vert =0\\text{.}\n\\label{eqn70}\n\\end{equation\nTherefore the inequality (68) becomes \n\\begin{equation}\n\\left\\Vert u^{\\ast }-\\widetilde{u}^{\\ast }\\right\\Vert \\leq \\frac\n5\\varepsilon }{1-\\delta }. \\label{eqn71}\n\\end{equation}\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}