diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbsso" "b/data_all_eng_slimpj/shuffled/split2/finalzzbsso" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbsso" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\\par With the fast development of deep learning techniques, face recognition systems (FRSs) have become a popular technique for identifying and verifying people due to the ease of capturing biometrics from the face. In our daily lives, one of the most relevant applications of FRS is the Automatic Border Control system, which can quickly verify the identity of a person with his electronic machine-readable travel document (eMRTD) \\cite{icao20159303} by comparing the face image of the traveler with a reference in the database. Although high-accuracy FRS can effectively distinguish an individual from others, it is vulnerable to adversarial attacks that conceal the real identity. Recent research found that attacks based on morphed faces \\cite{ferrara2014magic,scherhag2017vulnerability} pose a serious security risk in various applications. \n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=.75\\linewidth]{image\/fingerprint.eps}\n\\caption{Few-shot learning for morphing attack fingerprinting (MAF), a multiclass extension of MAD. Each class (morphing attack model) of the training set contains a few examples. After training, the model can classify unseen test samples for each class.}\n\\label{fig:MAF}\n\\end{figure*}\n\n\\par Morphing attacks were first introduced in 2014 \\cite{ferrara2014magic}. The morphed face is combined by two or more bona fide faces, and it was shown that commercial face recognition software tools are highly vulnerable to such attacks. In a further study \\cite{ferrara2016effects}, the authors showed that the images of morphed faces are realistic enough to fool human examiners. With the emergence of face morphing generation techniques \\cite{gimp,damer2018morgan,zhang2020mipgan,karras2019style} and numerous easy-to-use face morphing softwares (e.g., MorphThing \\cite{morphthing}, 3Dthis Face Morph \\cite{3dmorph}, Face Swap Online \\cite{faceswap}, Abrosoft FantaMorph \\cite{fanta}, FaceMorpher \\cite{morpher}), there is an imminent need to protect FRS security by detecting morphing attacks \\cite{scherhag2019face}. \n\nSome morphing attack detection (MAD) approaches have been developed since 2018 (for a recent review, see \\cite{venkatesh2021face}). They can be categorized into two types: single image-based (S-MAD) and differential image-based (D-MAD) \\cite{raja2020morphing}. The deep face representation for D-MAD has been studied in \\cite{scherhag2020deep}; existing S-MAD methods can be further classified into two subtypes \\cite{venkatesh2020face}: model-based (using handcraft characteristics) and deep learning-based. Noise-based photo-response non-uniformity (PRNU) methods \\cite{debiasi2018prnu,debiasi2018prnu2,scherhag2019detection,zhang2018face} represent the former subtype due to its popularity and outstanding performance. Originally proposed for camera identification, PRNU turns out to be useful for detecting the liveness of face photos. For the latter subtype, Noiseprint \\cite{cozzolino2018noiseprint} used a CNN to learn the important features, with the objective of improving detection performance and supporting fingerprinting applications. 
\n\n\\par Despite rapid progress, existing MAD methods are often constructed on a small training dataset and a single modality, which makes them lack good generalization properties \\cite{raja2020morphing,venkatesh2020face}. The performance of existing MAD methods might be satisfactory for predefined morphing attack models, but it degrades rapidly when they are deployed in the real world and face newly evolved attacks. Although it is possible to alleviate this problem by fine-tuning an existing MAD model, the cost of collecting labeled data for every new morphing attack is often formidable. Furthermore, we argue that MAD alone is not sufficient to address the increased security risk facing FRS. A more aggressive countermeasure than MAD is to formulate the problem of morphing attack fingerprinting (MAF); that is, we aim at a multiclass classification of morphing attack models, as shown in Fig. \\ref{fig:MAF}.\n\n\\par Based on the above observations, we propose to formulate MAF as a few-shot learning problem in this paper. Conventional few-shot learning (FSL) \\cite{snell2017prototypical} learns from a few examples of each class and predicts the class labels of new test samples. Similarly, we train the detector using data from both predefined models and new attack models (only a few samples are required) to predict unseen test samples. This task is named the few-shot MAD (FS-MAD) problem. Unlike existing MAD research, few-shot MAF (FS-MAF) aims at learning general discriminative features, which can be generalized from predefined to new attack models. The problem of few-shot MAF is closely related to camera identification (ID) \\cite{lukas2006digital}, camera model fingerprinting \\cite{cozzolino2018noiseprint}, and GAN fingerprinting (a.k.a. model attribution \\cite{yu2019attributing}) in the literature. The main contributions of this paper are summarized below.\n\n$\\bullet$ Problem formulation of few-shot learning for MAD\/MAF. We challenge the widely accepted assumptions of the MAD community, including those behind NIST's FRVT MORPH competition. The generalization property of MAD\/MAF methods will be as important as the optimization of recognition accuracy. \n\n$\\bullet$ Feature-level fusion for MAD applications. Although both PRNU and Noiseprint have shown promising performance in camera identification applications, their complementary nature has not been demonstrated in the open literature. We believe that this work is the first to combine them through feature-level fusion and to study the optimal fusion strategy.\n\n$\\bullet$ Design of a fusion-based FSL method with adaptive posterior learning (APL) for MAD\/MAF. By adaptively combining the most surprising observations encountered by PRNU and Noiseprint, we achieve a good generalization property while optimizing the performance of FS-MAD\/FS-MAF at the system level. \n\n$\\bullet$ Construction of a large-scale benchmark dataset to support MAD\/MAF research. More than 20,000 images with varying spatial resolution have been collected from various sources. Extensive experimental results demonstrate the superior generalization performance of FS-MAD and FS-MAF over all other competing methods. \n\n\n\\section{Related Work}\n\\label{related}\n\n\\subsection{Morphing Attack Detection (MAD)}\n\\noindent \\textbf{Model-based S-MAD}. Residual noise feature-based methods are designed to analyze pixel discontinuity, which may be greatly affected by the morphing process. 
Generally, noise patterns are extracted by subtracting a denoised version of the image from the given image using different models, such as the deep multiscale context aggregate network (MS-CAN) \\cite{venkatesh2020detecting}. The most popular are sensor noise patterns such as PRNU. Recently, both PRNU-based \\cite{zhang2018face,debiasi2018prnu,debiasi2018prnu2,scherhag2019detection} and scale-space ensemble approaches \\cite{raja2020morphing,raja2017transferable} have been studied. \n\n\n\\noindent \\textbf{Learning-based S-MAD}. Along with rapid advances in deep learning, many methods have considered the extraction of deep learning features for detection. The use of convolutional neural networks (CNNs) has yielded promising results \\cite{8897214}. Most works are based on pre-trained networks and transfer learning. Commonly adopted deep models include AlexNet \\cite{krizhevsky2012imagenet}, VGG16 \\cite{simonyan2014very}, VGG19 \\cite{simonyan2014very,raja2017transferable}, GoogleNet \\cite{szegedy2015going}, ResNet \\cite{he2016deep}, etc. In addition, several self-designed models have also been proposed. More recently, a deep residual color noise pattern was proposed for MAD in \\cite{venkatesh2019morphed}, and an attention-based deep neural network (DNN) \\cite{aghdaie2021attention} was studied, focusing on the salient regions of interest (ROI) that provide the most spatial support for the morph detector decision function.\n\n\n\\noindent\\textbf{Learning-based D-MAD}. Existing D-MAD methods mainly focus on feature differences and demorphing. For feature difference-based methods, features of the suspected image and the live image are subtracted and further classified. Texture information, 3D information, gradient information, landmark points, and deep feature information (ArcFace \\cite{scherhag2020deep}, VGG19 \\cite{seibold2020accurate}) are the most popular features used. The authors in \\cite{scherhag2018detecting} computed distance-based and angle-based features of landmark points for analysis. In \\cite{singh2019robust}, a robust method using diffuse reflectance in a deep decomposed 3D shape was proposed. Fusion methods were commonly adopted by concatenating hand-crafted Local Binary Pattern Histogram (LBPH) and transferable deep CNN features \\cite{damer2019multi}, or by concatenating feature vectors extracted from texture descriptors, keypoint extractors, gradient estimators, and deep neural networks \\cite{scherhag2018towards}. More recently, a discriminative D-MAD method in the wavelet subband domain was developed to discern the disparity between a real and a morphed image.\n\n\n\\subsection{Few-Shot Learning (FSL)}\nFew-shot learning addresses the generalization challenge of deep neural networks, i.e., how can a model quickly generalize after seeing only a few examples from each class? Early approaches include meta-learning models \\cite{ravi2016optimization} and deep metric learning techniques \\cite{snell2017prototypical}. 
More recent advances have explored new directions such as the relation network \\cite{sung2018learning}, meta-transfer learning \\cite{sun2019meta}, adaptive posterior learning (APL) \\cite{ramalho2019adaptive}, and cluster-based object seeker with shared object concentrator (COSOC) \\cite{luo2021rectifying}.\n\n\n\\subsection{Camera and Deepfake Fingerprinting}\n\\par PRNU, as a model-based device fingerprint, has been used to perform multiple digital forensic tasks, such as device identification \\cite{cozzolino2020combining}, device linking \\cite{salazar2021evaluation}, forgery localization \\cite{lin2020prnu}, detection of digital forgeries \\cite{lugstein2021prnu}. It can find any type of forgery, irrespective of its nature, since the lack of PRNU is seen as a possible clue of manipulation. Furthermore, PRNU-based MAD methods \\cite{debiasi2018prnu,debiasi2018prnu2,scherhag2019detection,zhang2018face} also confirm the usefulness of the sensor fingerprint in MAD.\nIn recent years, PRNU has been applied successfully in MAD \\cite{debiasi2018prnu2,debiasi2018prnu,scherhag2019detection}. The method in \\cite{debiasi2018prnu2} shows that region-based PRNU spectral analysis reliably detects morphed face images, while it fails if image post-processing is applied to generated morphs. Based on previous work, a PRNU variance analysis was performed in \\cite{debiasi2018prnu}. It focused on local variations of face images, which can be useful as a reliable indicator for image morphing. The work in \\cite{scherhag2019detection} proposed an improved version of the scheme based on the previous PRNU variance analysis in image blocks. Another work \\cite{marra2019gans} showed that each GAN model leaves a specific fingerprint in the generated images, just as the PRNU traces left by different cameras in real-world photos.\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/pipeline.eps}\n\\vspace{-0.75cm}\n\\caption{An overview of the proposed system (FBC-APL) for few-shot MAF (FS-MAF). It consists of factorized bilinear coding (FBC) and adaptive posterior learning (APL) modules. The output contains the probability that the input image will be classified into one of the known morphing models.}\n\\label{fig:pipeline}\n\\end{figure*}\n\n\n\\section{Methodology}\n\\label{funda}\n\nMorphing attack fingerprinting (MAF) refers to the multiclass generalization of the existing binary MAD problem. In addition to detecting the presence of morphing attacks, we aim at finer-granularity classification about the specific model generating the face morph. It is hypothesized that different attack models inevitably leave fingerprints in morphed images (conceptually similar to the sensor noise fingerprint left by different camera models \\cite{lukas2006digital}).\nFig. \\ref{fig:pipeline} shows the overall system consisting of two stages: feature fusion through factorized bilinear coding (FBC) and few-shot learning (FSL) for MAF. We will first elaborate on fusion-based MAD in detail and then discuss the extension to few-shot MAF.\n\n\n\n\\subsection{Fusion-based Single-Image MAD}\n\\par Noise is often embedded in the image data during acquisition or manipulation. The uniqueness of the noise pattern is determined by the physical source or an artificial algorithm, which can be characterized as a statistical property to reveal the source of the noise \\cite{popescu2004statistical}. 
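\nTo make the residual-noise idea behind these patterns concrete (anticipating Eq. \\eqref{eq1} below), a minimal sketch is given here; the Gaussian filter merely stands in for the wavelet-based denoiser of \\cite{lukas2006digital}, and all names and parameters are illustrative rather than part of our implementation.\n
\\begin{verbatim}\n
# Minimal sketch: residual-noise extraction W = I - F(I).\n
# A Gaussian filter stands in for the wavelet-based denoiser;\n
# any denoising function F can be plugged in.\n
import numpy as np\n
from scipy.ndimage import gaussian_filter\n
\n
def noise_residual(image, sigma=1.0):\n
    # image: 2D grayscale array in [0, 1]\n
    return image - gaussian_filter(image, sigma=sigma)\n
\n
# Averaging residuals over many images from the same source suppresses\n
# scene content and leaves the (PRNU-like) sensor pattern.\n
def average_residual(images):\n
    return np.mean([noise_residual(img) for img in images], axis=0)\n
\\end{verbatim}\n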
The noise of the sensor pattern was first used for the MAD task by performing a facial quantification statistics analysis, which confirmed its effectiveness \\cite{zhang2018face}. Here, we consider two types of sensor noise patterns: Photo Response Non-Uniformity (PRNU) \\cite{fridrich2009digital} and Noiseprint \\cite{cozzolino2018noiseprint}.\n\n\\noindent \\textbf{Photo Response Non-Uniformity (PRNU)}. PRNU originates from slight variations between individual pixels during photoelectric conversion in digital image sensors \\cite{lukas2006digital}. Different image sensors embed this weak signal into acquired images as a unique signature. Although the weak signal itself is mostly imperceptible to the human eye, its uniqueness can be characterized by statistical techniques and exploited by sophisticated fingerprinting methods such as PRNU \\cite{fridrich2009digital}. \nThis systemic and individual pattern, which plays the role of a sensor fingerprint, has proven robust to various innocent image processing operations such as JPEG compression. Although PRNU is stochastic in nature, it is a relatively stable component of the sensor over its lifetime. \n\nPRNU has been widely studied in camera identification because it is not related to image content and is present in every image acquired by the same camera. Most recently, PRNU has been proposed as a promising tool for detecting morphed face images \\cite{debiasi2018prnu,debiasi2018prnu2}.\nThe spatial feature of PRNU can be extracted using the approach presented by Fridrich \\cite{fridrich2009digital}. For each image $I$, the residual noise ${W}_{I}$ is estimated as described in Equation \\eqref{eq1}:\n\\begin{equation}\n\\label{eq1}\n\\vspace{-0.1in}\n{W}_{I} = I - F(I) \n\\end{equation}\nwhere $F$ is a denoising function that filters the noise from the sensor pattern. The clever design of the mapping function $F$ (e.g., wavelet-based filter \\cite{lukas2006digital}) makes PRNU an effective tool for various forensic applications.\n\n\n\\noindent \\textbf{Noiseprint}. Unlike model-based PRNU, data-driven or learning-based methods tackle the problem of camera identification by assuming the availability of training data. Instead of mathematically constructing unique signatures, Noiseprint \\cite{cozzolino2018noiseprint} attempts to learn the embedded noise pattern from the training data. A popular learning methodology adopted by Noiseprint is to construct a Siamese network \\cite{bertinetto2016fully}. The Siamese network is trained with pairs of image patches that come from the same or different cameras in an unsupervised manner. Similarly to PRNU, Noiseprint has shown clear traces of camera fingerprints. It should be noted that Noiseprint has performed better than PRNU when cropped image patches become smaller, implying the benefit of exploiting spatial diversity \\cite{cozzolino2018noiseprint}.\n\n\\par To the best of our knowledge, Noiseprint has not been proposed for MAD in the open literature. Existing deep learning-based S-MADs often use pre-trained networks such as VGG-face \\cite{raja2020morphing}. Our empirical study shows that morphing-related image manipulation leaves evident traces in Noiseprint, suggesting the feasibility of Noiseprint-based MAD. Moreover, morphed faces are often manipulated across the face, whose spatial diversity can be exploited by cropping image patches using Noiseprint. To justify this claim, Fig. 
\\ref{fig:featurefig} (d) presents the Noiseprint comparison between bona fide and morphed faces averaged over 1,000 examples. Visual inspection clearly shows that the areas around the eyes and nose have more significant (bright) traces than the bona fide faces. In contrast, Fig. \\ref{fig:featurefig} (c) shows the comparison of the extracted PRNU patterns with the same experimental setting. Similar visual differences between bona fide and morphed faces can be observed; more importantly, PRNU and Noiseprint demonstrate complementary patterns (low vs. high frequency) begging for fusion.\n\n\\noindent \\textbf{Feature Fusion Strategy}. Fusion methods are usually based on multiple feature representations or classification models. Taking advantage of diversity, the strategy of combining classifiers \\cite{kittler1998combining} has shown improved recognition performance compared to single-mode approaches. Recent work has shown that fusion methods based on Dempster-Shafer theory can improve the performance of face morphing detectors \\cite{makrushin2019dempster}. However, previous work \\cite{makrushin2019dempster} only considered ensemble models of the scale space and pre-trained CNN models. For the first time, we propose to combine PRNU and Noiseprint using a recently developed similarity-based fusion method, called factorized bilinear coding (FBC) \\cite{gao2020revisiting}.\n\nFBC is a sparse coding formulation that generates a compact and discriminative representation with substantially fewer parameters by learning a dictionary $\\boldsymbol{B}$ to capture the structure of the entire data space. It can preserve as much information as possible and activate as few dictionary atoms as possible. Let $\\boldsymbol{x}_i$, $\\boldsymbol{y}_j$ be the two features extracted from PRNU and Noiseprint, respectively. The key idea behind FBC is to encode the extracted features based on sparse coding and to learn a dictionary $\\boldsymbol{B}$ with $k$ atoms by matrix factorization. Specifically, the sparsity FBC opts to encode the two input features $(\\boldsymbol{x}_i, \\boldsymbol{y}_j)$ in the FBC code $\\boldsymbol{c}_v$ by solving the following optimization problem:\n\n\\begin{equation}\n\\underset{{{\\boldsymbol{c}}_{v}}}{\\mathop{\\min }}\\,\\bigg|\\bigg|{{\\boldsymbol{x}}_{i}}\\boldsymbol{y}_{j}^{\\top}-\\sum\\limits_{l=1}^{k}{c_{v}^{l}}{{\\boldsymbol{U}}_{l}}\\boldsymbol{V}_{l}^{\\top}\\bigg|{{\\bigg|}^{2}}+\\lambda||{{\\boldsymbol{c}}_{v}}|{{|}_{1}}\n\\end{equation}\nwhere $\\lambda$ is a trade-off parameter between the reconstruction error and the sparsity. The dictionary atom $b_l$ of $\\boldsymbol{B}$ is factorized into $\\boldsymbol{U}_{l}\\boldsymbol{V}_{l}^{\\top}$ where \n $\\boldsymbol{U}_{l}$ and $\\boldsymbol{V}_{l}^{\\top}$ are low-rank matrices. The $l_1$ norm $|| \\cdot ||_1$ is used to impose the sparsity constraint on $\\boldsymbol{c}_{v}$. In essence, the bilinear feature $\\boldsymbol{x}_{i}\\boldsymbol{y}_{j}^{\\top}$ is reconstructed by $\\sum\\limits_{l=1}^{k}{c_{v}^{l}} \\boldsymbol{U}_{l}\\boldsymbol{V}_{l}^{\\top}$\nwith $\\boldsymbol{c}_{v}$ being the FBC code and $c_v^l$ representing the $l$-th element of $\\boldsymbol{c}_{v}$.\n\nThis optimization can be solved using well-studied methods such as LASSO \\cite{tibshirani1996regression}. 
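\nTo make this coding step concrete, the sketch below computes one FBC code $\\boldsymbol{c}_v$ for a single feature pair by vectorizing the bilinear term and calling an off-the-shelf LASSO solver; the dimensions are toy values, the dictionary factors are random placeholders rather than learned ones, and scikit-learn rescales the quadratic term so that its $\\alpha$ plays the role of $\\lambda$ only up to a constant factor.\n
\\begin{verbatim}\n
# Minimal sketch of one FBC coding step: given a feature pair (x, y) and\n
# factorized dictionary atoms U_l V_l^T, solve\n
#   min_c || x y^T - sum_l c_l U_l V_l^T ||^2 + lam * ||c||_1\n
# by vectorizing the bilinear term and using an l1-regularized solver.\n
import numpy as np\n
from sklearn.linear_model import Lasso\n
\n
d1, d2, rank, k = 64, 64, 8, 32           # toy sizes; real features are larger\n
rng = np.random.default_rng(0)\n
U = rng.standard_normal((k, d1, rank))    # learned in practice; random here\n
V = rng.standard_normal((k, d2, rank))\n
\n
def fbc_code(x, y, lam=0.01):\n
    target = np.outer(x, y).ravel()       # vec(x y^T)\n
    D = np.stack([(U[l] @ V[l].T).ravel() for l in range(k)], axis=1)\n
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)\n
    solver.fit(D, target)\n
    return solver.coef_                   # sparse FBC code c_v\n
\n
x = rng.standard_normal(d1)               # e.g., a PRNU feature vector\n
y = rng.standard_normal(d2)               # e.g., a Noiseprint feature vector\n
c_v = fbc_code(x, y)\n
\\end{verbatim}\n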
With two groups of features $\\{\\boldsymbol{x}_i\\}_{i=1}^m$ and $\\{\\boldsymbol{y}_j\\}_{j=1}^n$ at our disposal, we first calculate all FBC codes $\\{\\boldsymbol{c}_v\\}_{v=1}^N$ and then fuse them by the operation $\\max$ to obtain the global representation $\\boldsymbol{z}$: \n\\begin{equation}\n \\boldsymbol{z}=\\max\\left\\{\\boldsymbol{c}_{v}\\right\\}_{v=1}^{N}.\n \\label{eq:3}\n\\end{equation}\nThe entire FBC module is shown in Fig. \\ref{fig:fbc}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/fbc.eps}\n\\vspace{-0.65cm}\n\\caption{The architecture of the FBC module to combine PRNU and Noiseprint. $\\tilde{\\boldsymbol{U}}$ and $\\tilde{\\boldsymbol{V}}$ replace $\\boldsymbol{U}$ and $\\boldsymbol{V}$ to avoid numerically unstable matrix inversion operations; $\\boldsymbol{P}$ is a fixed binary matrix.}\n\\label{fig:fbc}\n\\end{figure}\n\n\n\n\\subsection{Few-shot learning for Morphing Attack Fingerprinting}\n\\par Based on the FBC-fused feature $\\boldsymbol{z}$, we construct a few-shot learning module as follows. Inspired by recent work on adaptive posterior learning (APL) \\cite{ramalho2019adaptive}, we have redesigned the FSL module to adaptively accept feature vectors of any size (e.g., the FBC-fused feature) as input. This newly designed module consists of three parts: an encoder, a decoder, and an external memory store. The encoder generates a compact representation for the incoming query data; the memory stores representations previously produced by the encoder; the decoder generates a probability distribution over targets by analyzing the query representation and the pairwise data returned from the memory block. Next, we elaborate on the design of these three components.\n\n\\noindent\\textbf{Encoder}. The encoder converts input data of any size into a compact, low-dimensional embedding. It is implemented as a convolutional network composed of a first convolution that maps the input to 64 feature channels, followed by 15 convolutional blocks. Each block consists of a batch normalization step, followed by a ReLU activation and a convolutional layer with kernel size 3. In every group of three blocks, one convolution uses a stride of 2 to down-sample the feature map. All layers have 64 feature channels. Finally, the feature map is flattened to a 1D vector and passed through Layer Normalization, generating a 64-dimensional embedding as the encoded representation.\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/MAD_APL_module2.eps}\n\\vspace{-0.4cm}\n\\caption{(a) APL training procedure over iterations. We train the APL module on a sequence of episodes ($x_t$, $y_t$), where $x_t$ is the FBC feature and $y_t$ is the true label. At first, the memory is empty; at each iteration, a batch of samples is fed to the module, and a prediction is made. The cross-entropy loss L($\\hat{y}_t$, $y_t$) is calculated and a gradient update step is performed to minimize the loss on that batch alone. The loss is also fed to the memory controller so that the network can decide whether to write to memory. (b) and (c) show the behavior of the accuracy and memory size in a 9-class training scenario. APL stops writing to memory after storing about 7 examples per class.}\n\\label{fig:fscnn}\n\\end{figure*}\n\n\n\\noindent\\textbf{Memory}. The external memory store is a database of past experiences, organized as key-value data. Each row represents the information for one data point. 
Each row consists of an embedding (the encoded representation) and the corresponding true label. The memory store is managed by a controller that decides which embeddings are written into the memory while trying to minimize the number of stored embeddings. For the writing process, a surprise metric is defined: the higher the probability that the model assigns to the true class, the less surprised it is. If the predicted probability of the correct class is smaller than the probability assigned by a uniform prediction, the embedding is written into memory. During the querying process, the memory is searched for the k nearest neighbors of the query embedding produced by the encoder. The distance metric used to calculate the proximity between points is an open choice, and here we use two types (Euclidean distance and cosine distance). The full-row data of each neighbor and the query embedding are concatenated and fed to the decoder. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.8\\linewidth]{image\/db_sample.eps}\n\\caption{Face samples in five merged datasets. (a) FERET-Morphs (bona fide faces come from FERET \\cite{feret}), (b) FRGC-Morphs (bona fide faces come from FRGC V2.0 \\cite{frgc}), (c) FRLL-Morphs (bona fide faces come from Face Research Lab London Set (FRLL) \\cite{amslraw}), (d) CelebA-Morphs (bona fide faces come from CelebA \\cite{liu2015deep}), and (e) Doppelg\u00e4nger Morphs (bona fide faces come from the Web collection).}\n\\label{fig:sample}\n\\end{figure}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.95\\linewidth]{image\/self_sample.eps}\n\\caption{Some sample pairs of bona-fide face images from the Doppelg\u00e4nger dataset (note that these look-alike pairs do not have biological connections).}\n\\label{fig:self}\n\\end{figure*}\n\n\n\\noindent\\textbf{Decoder}. The decoder takes the concatenation of the query embedding, the recalled neighbor embeddings from memory, their labels, and their distances as input. The architecture is a self-attention-based relational feedforward module. It processes each neighbor individually by comparing it with the query, and then performs a cross-element comparison with a self-attention module before reducing the activations with an attention vector calculated from the neighbor distances. The self-attention blocks are repeated five times in a residual manner. The resulting tensors are called activation tensors. In addition, the distances between the neighbors and the query are passed through a softmax layer to generate an attention vector, which is summed with the activation tensor on the first axis to obtain the final logits for classification. The self-attention block comprises a multihead attention layer, a multihead dot product attention (MHDPA) layer \\cite{santoro2018relational} for cross-element comparison, and a nonlinear multilayer perceptron (MLP) layer to process each element individually. \n\n\n\n\\noindent\\textbf{Training}. During APL training, as shown in Fig. \\ref{fig:fscnn} (a), the query data (that is, the FBC-fused feature vector $\\boldsymbol{z}$) are passed through the encoder to generate an embedding, and this representation is used to query the external memory store. At first, the memory is empty; at each training episode, a batch of examples is fed to the model, and a prediction is made. The cross-entropy loss is also fed to the memory controller, which decides whether to write to memory. 
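\nTo make the surprise-based write rule and the neighbor lookup concrete, the following sketch implements an APL-style external memory with plain NumPy; it follows the description above (write only when the predicted probability of the true class falls below that of a uniform guess) and is an illustration rather than the exact implementation used in our experiments.\n
\\begin{verbatim}\n
# Minimal sketch of the APL external memory: surprise-gated writes and\n
# k-nearest-neighbor queries over stored (embedding, label) rows.\n
import numpy as np\n
\n
class EpisodicMemory:\n
    def __init__(self, n_classes):\n
        self.n_classes = n_classes\n
        self.keys = []      # stored embeddings\n
        self.labels = []    # corresponding true labels\n
\n
    def maybe_write(self, embedding, true_label, predicted_probs):\n
        # Surprise rule: write only if the probability assigned to the\n
        # true class is below a uniform prediction (1 / n_classes).\n
        if predicted_probs[true_label] < 1.0 / self.n_classes:\n
            self.keys.append(np.asarray(embedding))\n
            self.labels.append(int(true_label))\n
\n
    def query(self, embedding, k=5, use_cosine=False):\n
        # Return the k nearest stored rows: embeddings, labels, distances.\n
        keys = np.stack(self.keys)\n
        if use_cosine:\n
            keys_n = keys / np.linalg.norm(keys, axis=1, keepdims=True)\n
            q_n = embedding / np.linalg.norm(embedding)\n
            dists = 1.0 - keys_n @ q_n\n
        else:\n
            dists = np.linalg.norm(keys - embedding, axis=1)\n
        idx = np.argsort(dists)[:k]\n
        return keys[idx], np.asarray(self.labels)[idx], dists[idx]\n
\\end{verbatim}\n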
After the query is searched in memory, the returned memory contents, as well as the query, are fed to the decoder for classification. Figs. \\ref{fig:fscnn} (b) and (c) show the behavior (accuracy and memory size) of APL during a single episode. The accuracy of APL increases as it sees more samples and saturates at some point, indicating that the additional inputs do not surprise the module anymore. In the case of the 9-class classification scenario, we have observed that about 7 examples per class are sufficient to reach performance saturation.\n\n\\noindent\\textbf{Morphing Attack Fingerprinting}. Both PRNU \\cite{lukas2006digital} and Noiseprint \\cite{cozzolino2018noiseprint} were originally proposed for the identification of camera models, which is known to be a fingerprint in image forensics. The duality between image generation in the cyber and physical worlds inspires us to extend the existing problem formulation of binary MAD \\cite{debiasi2018prnu,debiasi2018prnu2,scherhag2019detection,zhang2018face} into multiclass fingerprinting. Different camera models (e.g., Sony vs. Nikon) are analogous to varying face morphing methods (e.g., LMA \\cite{damer2018morgan} vs. StyleGAN2 \\cite{karras2020analyzing}); therefore, it is desirable to go beyond MAD by exploring the feasibility of distinguishing one morphing attack from another. Fortunately, the system shown in Fig. \\ref{fig:pipeline} easily lends itself to generalization from binary to multiclass classification by resetting the hyperparameters, like the number of classes, the data path for each class, etc. To learn a discriminative FBC feature for fingerprinting, multiclass labeled data for training and testing should be prepared to be fed to the FBC module for retraining. When the FBC feature is available, it will be fed to the APL module for multiclass classification.\n\n\\section{Experiments}\n\\label{result}\n\n\n\\begin{table}[t]\n\\begin{center}\n\\small\n\\caption{The hybrid face morphing benchmark database consists of five image sources and 3-6 different morphing methods for each.}\n\\label{tab:dbinfo}\n\\vspace{-0.3cm}\n\\begin{threeparttable}\n\\begin{tabular}{ l |l c c}\n\\hline\\noalign{\\smallskip}\nDatabase & Subset & \\#Images & Resolution \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\\multirow{4}{*}{FERET-Morphs} &\tbona fide \\cite{feret} &\t576 & 512x768\\\\\n\t& FaceMorpher \\cite{sarkar2020vulnerability} &\t529 & 512x768\\\\\n\t& OpenCV \\cite{sarkar2020vulnerability} &529 & 512x768\\\\\n\t& StyleGAN2 \\cite{sarkar2020vulnerability} & 529 & 1024x1024 \\\\\n\\hline\n\\multirow{4}{*}{FRGC-Morphs} & bona fide \\cite{frgc} & \t964 & 1704x2272\\\\\n\t& FaceMorpher \\cite{sarkar2020vulnerability} &\t964 & 512x768\\\\\n\t& OpenCV \\cite{sarkar2020vulnerability} & 964 & 512x768\\\\\n\t& StyleGAN2 \\cite{sarkar2020vulnerability} & 964 & 1024x1024\\\\\n\\hline\n\\multirow{7}{*}{FRLL-Morphs} & bona fide \\cite{amslraw} & 102+1932 & 413x531\\\\\n\t& AMSL \\cite{neubert2018extended} & 2175 & 413x531 \\\\\n\t& FaceMorpher \\cite{sarkar2020vulnerability} &\t1222 & 431x513 \\\\\n\t& OpenCV \\cite{sarkar2020vulnerability}& 1221 & 431x513 \\\\\n\t& LMA &768 & 413x531\\\\\n\t& WebMorph \\cite{sarkar2020vulnerability} & 1221 & 413x531\\\\\n\t& StyleGAN2 \\cite{sarkar2020vulnerability} & 1222 & 1024x1024 \\\\\n\\hline\n\\multirow{4}{*}{CelebA-Morphs*} & bona fide \\cite{liu2015deep} & 2989 & 128x128 \\\\\n\t& MorGAN \\cite{damer2018morgan}& 1000 & 64x64\\\\\n\t& CIEMorGAN \\cite{damer2019realistic} & 1000 & 128x128 
\\\\\n\t& LMA \\cite{damer2018morgan} & 1000 & 128x128 \\\\\n\\hline\n\\multirow{4}{*}{Doppelg\u00e4nger} & bona fide & 306 & 1024x1024 \\\\\n\t& FaceMorpher &\t150 & 1024x1024 \\\\\n\t& OpenCV &\t153 & 1024x1024\\\\\n\t& StyleGAN2\t& 153 & 1024x1024 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\small\n\\item * means only the cropped faces from raw images are used; no facial cropping is used for other datasets. The raw number of bona fide images in FRLL-Morphs is 102. Based on the raw faces, data augmentation is implemented to obtain extra 1932 images. \n\\end{tablenotes}\n\\end{threeparttable}\n\\vspace{-0.2in}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Large-scale Morphing Benchmark Dataset}\n\\noindent \\textbf{Benchmark Dataset Description.} To simulate the amount and distribution of data in real-world applications, we have combined five datasets to build a large-scale evaluation benchmark for detecting and fingerprinting few-shot morphing attacks. It contains four publicly available datasets, namely, FERET-Morphs \\cite{feret,sarkar2020vulnerability}, FRGC-Morphs \\cite{frgc,sarkar2020vulnerability}, FRLL-Morphs \\cite{amslraw,neubert2018extended,sarkar2020vulnerability}, and CelebA-Morphs \\cite{liu2015deep,damer2018morgan,damer2019realistic}. We also generated a new dataset with high-resolution faces collected from the Web, named Doppelg\u00e4nger Morphs, which contains morphing attacks from three algorithms and satisfies the so-called Doppelg\u00e4nger constraint \\cite{rottcher2020finding} (that is, look-alike faces without biological connections, refer to Fig. \\ref{fig:self}). A total of more than 20,000 images (6,869 bona fide faces and 15,764 morphed faces) have been collected, as shown in Table \\ref{tab:dbinfo}. Eight morphing algorithms are involved, including five landmark-based methods, OpenCV \\cite{opencv}, FaceMorpher \\cite{facemorpher}, LMA \\cite{damer2018morgan}, WebMorph \\cite{webmorph}, and AMSL \\cite{neubert2018extended}, and three adversarial generative networks based, including MorGAN \\cite{damer2018morgan}, CIEMorGAN \\cite{damer2019realistic}, and StyleGAN2 \\cite{karras2020analyzing}. Fig. \\ref{fig:sample} provides some cropped face samples with real faces and morphed faces from different morphing algorithms in these five datasets. To the best of our knowledge, this is one of the largest and most diverse face morphing benchmarks that can be used for MAD and MAF evaluations. \n\n\n\\noindent \\textbf{Evaluation Protocols.}\nBased on the large-scale dataset collected for few-shot MAD and MAF benchmarks, we have designed the evaluation protocols for each task as follows:\n\n$\\bullet$ Protocol FS-MAD (few-shot MAD). This protocol is designed for the few-shot binary classification (bona fide\/morphed). Training data comes from predefined types and a few (1 or 5) samples per new type. The test data come from new types. Here, the predefined types in our experiment contain five types of morphing results generated by FaceMorpher \\cite{facemorpher}, OpenCV \\cite{opencv}, WebMorph \\cite{webmorph}, StyleGAN2 \\cite{karras2020analyzing}, and AMSL \\cite{neubert2018extended}, and their corresponding bona fide faces. Faces of these types are from the FERET-Morphs, FRGC-Morphs, FRLL-Morphs, and Doppelg\u00e4nger-Morphs datasets. The morphing faces generated by LMA \\cite{damer2018morgan}, MorGAN \\cite{damer2018morgan}, and CIEMorGAN \\cite{damer2019realistic}, and their corresponding bona fide faces, are treated as new types. 
Faces of these types are from the CelebA-Morphs dataset.\n \n\n$\\bullet$ Protocol FS-MAF (few-shot MAF). This protocol is designed for multiclass fingerprint classification on the hybrid large-scale benchmark and for five separate morph datasets. Each morphing type and bona fide type are treated as different categories, namely FERET-Morphs, FRGC-Morphs, CelebA-Morphs, and Doppelg\u00e4nger datasets all with 4 classes, FRLL-Morphs with 7 classes, and the hybrid with 9 classes. For each data set, the data are split according to the rule of 8: 2. Training data consist of 1 and 5 images per class for 1 shot and 5-shot learning, respectively. The testing data contains non-overlapping data with the training in each dataset. To reduce the bias of the imbalanced distribution of the data, a similar number of faces is maintained for each class in each test set. \n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{Traditional MAD performance (Accuracy-\\%) comparison of different feature-level fusion methods. NP - Noiseprint; CN - Concatenation; CC - Convex Compression; $\\bot$ - spatial; $\\square$ - spectral.}\n\\vspace{-0.3cm}\n\\label{tab:toyexp}\n\\begin{tabular}{ l c c c c c c }\n\\hline\\noalign{\\smallskip}\n{Feature} & CN & Sum & Max & CC & FBC (ours) \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nPRNU $\\bot$+PRNU $\\square$ & 83.78 & 84.23 & 83.78 & 84.23 & 84.42 \\\\\nNP $\\bot$ + NP $\\square$ & 89.19 & 89.64 & 89.64 & 89.64 & 96.40\\\\\nPRNU $\\bot$ + NP $\\square$ & 89.19 & 89.19 & 89.64 & 89.19 & 89.59\\\\\nPRNU $\\square$ + NP $\\bot$ & 83.78 & 84.23 & 83.78 & 85.59 & 86.04\\\\\nPRNU $\\square$. + NP $\\square$ & 86.94 & 85.59 & 85.59 & 86.94 & 84.68 \\\\\nPRNU $\\bot$ + NP $\\bot$ & \\textbf{91.44} & \\textbf{91.89} & \\textbf{91.89} & \\textbf{94.59} & \\textbf{96.85}\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{Performance (\\%) comparison of few-shot MAD. Accu. - Accuracy.}\n\\vspace{-0.3cm}\n\\label{tab:madfs}\n\\resizebox{.95\\linewidth}{!}{\n\\begin{tabular}{ l |c c c |c c c}\n\\hline\\noalign{\\smallskip}\n & \\multicolumn{3}{c}{1-shot} & \\multicolumn{3}{|c}{5-shot} \\\\\nMethod & Accu. & D-EER & ACER & Accu. &\tD-EER &\tACER \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nXception \\cite{chollet2017xception} & 66.50 & 32.50 & 33.50 & 73.25 & 27.00 & 26.75 \\\\\nMobileNetV2 \\cite{sandler2018mobilenetv2} & 67.00 & 36.50 & 33.00 & 71.25 & 29.00 & 28.75 \\\\\nNasNetMobile \\cite{zoph2018learning} & 59.00 & 40.50 & 41.00 & 66.25 & 35.00 & 33.75 \\\\\nDenseNet121 \\cite{huang2017densely} &68.25 & 31.50 & 31.75 & 73.50 & 24.50 & 26.50 \\\\\nArcFace \\cite{deng2019arcface} & 58.00 & 41.00 & 42.00 & 62.25 &\t37.50 & 37.75 \\\\\n\\hline\nRaghavendra. et al. 
\\cite{raghavendra2017face} & 49.25 & 48.00 & 50.75 & 46.75 & 47.50 & 53.25 \\\\\nMB-LBP \\cite{scherhag2020face} & 61.00 & 38.50 & 39.00 & 69.25 & 31.00 & 30.75 \\\\\nFS-SPN \\cite{zhang2018face} & 51.50 & 45.00 & 48.50 & 58.25 & 43.50 & 41.75 \\\\\t\nPipeline Footprint \\cite{neubert2018reducing} & 54.25 & 44.50 &45.75 &\t60.25 &\t38.50 &\t39.75 \\\\\nPRNU Analysis \\cite{debiasi2018prnu} & 56.50 & 57.00 & 43.50 & 64.25 & 66.70 & 35.75 \\\\\nInception-MAD \\cite{damer2022privacy} & 62.00 & 34.50 & 38.00 & 67.75 & 32.50 & 32.25 \\\\\nMixFaceNet-MAD \\cite{damer2022privacy} & 76.10 & 27.50 & 28.00 & 82.16 & 24.50 & 24.25 \\\\\nNoiseprint-SVM \\cite{cozzolino2018noiseprint} & 53.75 & 50.50 & 46.25 & 61.25 & 38.50 & 38.75 \\\\\n\\hline\nMeta-Baseline \\cite{chen2021meta} & 60.45 & - & - & 71.38 & - & - \\\\\nCOSOC \\cite{luo2021rectifying} & 66.89 & -&-& 74.54 &-&- \\\\\n\\hline\n\\textbf{FBC-APL} & \\textbf{99.25} & \\textbf{1.50} & \\textbf{0.75} & \\textbf{99.75} & \\textbf{0.50} & \\textbf{0.25} \\\\\n\\noalign{\\smallskip} \\hline\n\\end{tabular}}\n\\end{center}\n\\end{table}\n\n\n\\subsection{Experimental Settings}\n\\noindent \\textbf{Data Preprocessing}. Dlib face detector \\cite{king2009dlib} is used to detect and crop the face region. The cropped face is normalized according to the coordinates of the eye and resized to a fixed size of $270\\times270$ pixels. The feature extraction of PRNU and Noiseprint is performed on the processed faces, respectively. The resulting vector dimension for each type of feature is 72,900 ($270\\times270$).\n\n\\noindent \\textbf{Performance Metrics}. Following previous MAD studies \\cite{raja2020morphing,scherhag2020deep}, we report performance using four metrics, including: (1) Accuracy; (2) D-EER; (3) ACER; (4) Confusion Matrix. Detection Equal-Error-Rate(D-EER) is the error rate for which both BPCER and APCER are identical. Average Classification Error Rate (ACER) is calculated by the mean of the APCER and BPCER values. Attack Presentation Classification Error Rate (APCER) reports the proportion of morph attack samples incorrectly classified as bona fide presentation, and the Bona Fide Presentation Classification Error Rate (BPCER) refers to the proportion of bona fide samples incorrectly classified as morphed samples. Both APCER and BPCER are commonly used in previous studies of MAD \\cite{raja2020morphing,scherhag2020deep}.\n\n\n\\subsection{Comparison of Feature Extraction and Fusion Strategies}\nFirst, we show the visual comparison of extracted features by different methods.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{image\/avg_feature.eps}\n\\caption{Average of (a) MB-LBP, (b) FS-SPN, (c) PRNU and (b) Noiseprint features over 1000 randomly selected face images. Left: bona fide; right: morphed faces.}\n\\label{fig:featurefig}\n\\end{figure}\n\n\\par We first compare different feature-level fusion strategies to combine PRNU and Noiseprint patterns, including element-wise operation (sum\/max), convex compression (CC) \\cite{norouzi2013zero}, vector concatenation, and our factorized bilinear coding (FBC) method \\cite{gao2020revisiting}. We consider the features in both the spatial and the spectral domains. The PRNU and Noiseprint features extracted from the images are treated as spatial features. The spectral features are obtained by applying the discrete Fourier transform to the spatial features. Any two types of feature are fused to perform traditional MAD tasks on a subset of the test data. 
Therefore, six different fusion features are generated. For concatenation, the final dimension of the feature is 145,800. For sum, max, and CC, it is 72,900. The fusion feature of FBC is as compact as 2,048 dimensions. All generated features are fed into the SVM with a linear kernel for binary classification. As shown in Table \\ref{tab:toyexp}, the fusion of spatial features of PRNU and Noiseprint performs best for the six features, which can be attributed to the fact that the two patterns in the spatial domain contain more discriminative features (as shown in Fig.~\\ref{fig:featurefig}). Furthermore, our FBC-based fusion achieves the highest accuracy among the five fusion strategies.\n\n\n\\begin{table*}[!t]\n\\small\n\\caption{Accuracy(\\%) of 1-shot MAF classification on single and hybrid datasets.}\n\\vspace{-0.3cm}\n\\label{tab:maf1}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{ l |c c c c c c}\n\\hline\n\\multirow{2}{*}{Method} & FERET-Morphs & FRGC-Morphs & FRLL-Morphs & CelebA-Morphs & Doppelg\u00e4nger & Hybrid \\\\\n& 4-class & 4-class & 7-class & 4-class & 4-class & 9-class \\\\\n\\hline\nXception \\cite{chollet2017xception} & 29.47&\t25.26&\t17.68&\t16.67&\t21.05&\t15.11\\\\\nMobileNetV2 \\cite{sandler2018mobilenetv2} & 31.58&\t33.68&\t31.30&\t55.19&\t25.26&\t17.33\\\\\nNasNetMobile \\cite{zoph2018learning} & 32.63&\t27.37&\t22.61&\t19.26&\t23.16&\t12.88\\\\\nDenseNet121 \\cite{huang2017densely} & 46.32&\t26.32&\t22.03&\t47.04&\t23.16&\t19.33\\\\\nArcFace \\cite{deng2019arcface} & 29.33&\t39.64&\t26.12&\t28.33&\t18.03&\t15.22\\\\\n\\hline\nRaghavendra. et al. \\cite{raghavendra2017face} & 38.95&\t43.16&\t29.28&\t89.63&\t31.58&\t11.11 \\\\ \nMB-LBP \\cite{scherhag2020face} & 33.95 &\t33.42&\t34.59&\t34.50 &\t21.31&\t14.89 \\\\\nFS-SPN \\cite{zhang2018face} & 25.41&\t31.22&\t23.71&\t61.50 &\t32.79&\t29.44 \\\\\t\nPipeline Footprint \\cite{neubert2018reducing} & 26.32&\t29.47&\t29.28&\t25.93&\t25.26&\t21.89 \\\\\nPRNU Analysis \\cite{debiasi2018prnu} & 34.74 & 26.32 & 11.01 & 37.04 &\t25.26 &\t18.56 \\\\\nInception-MAD \\cite{damer2022privacy} & 23.16 &\t30.53 &\t20.00\t& 44.81 & 29.47 & 21.78 \\\\\nMixFaceNet-MAD \\cite{damer2022privacy} & 36.84 & 37.89 & 35.94 & 57.04 & 49.47 & 33.56 \\\\\nNoiseprint-SVM \\cite{cozzolino2018noiseprint} & 50.53 & 43.16 & 22.61 & 84.44 & 31.58 & 22.00 \\\\\n\\hline\nMeta-Baseline \\cite{chen2021meta} & 51.05 & 51.44 & 34.77 & 61.43 & 33.43 & 53.46 \\\\\nCOSOC \\cite{luo2021rectifying} & 54.58 & 64.37 & 35.22 & 63.19 & 34.30 & 59.55 \\\\\n\\hline\nFBC & 96.93 & 98.83 & 94.06 & 99.50 & 56.67 & 96.11 \\\\\nFBC-all & 98.11 & 99.48 & 98.42 & 100 & 84.17 & 96.78 \\\\\n\\textbf{FBC-APL} & \\textbf{98.82} & \\textbf{99.61} & \\textbf{98.24} & \\textbf{99.67} & \\textbf{91.67} & \\textbf{98.11} \\\\\n\\hline\n\\end{tabular}}\n\\end{table*}\n\n\n\\begin{table*}[!t]\n\\small\n\\caption{Accuracy(\\%) of 5-shot MAF classification on single and hybrid datasets.}\n\\vspace{-0.3cm}\n\\label{tab:maf5}\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{ l |c c c c c c}\n\\hline\n\\multirow{2}{*}{Method} & FERET-Morphs & FRGC-Morphs & FRLL-Morphs & CelebA-Morphs & Doppelg\u00e4nger & Hybrid \\\\\n& 4-class & 4-class & 7-class & 4-class & 4-class & 9-class \\\\\n\\hline\nXception \\cite{chollet2017xception} & 46.32& 43.16&\t31.01&\t73.70&\t28.42&\t43.67\\\\\nMobileNetV2 \\cite{sandler2018mobilenetv2} & 55.79 & 53.68 &\t40.00 & 89.26 & 26.32 & 54.56 \\\\\nNasNetMobile \\cite{zoph2018learning} & 48.42 & 40.00 & 24.35 &\t67.41&\t27.37&\t37.33\\\\\nDenseNet121 
\\cite{huang2017densely} & 54.74 & 55.79 &\t36.23&\t89.26&\t25.26\t&53.33\\\\\nArcFace \\cite{deng2019arcface} & 44.34 & 50.91 & 33.81 & 39.67 & 20.49 & 29.11 \\\\\n\\hline\nRaghavendra. et al. \\cite{raghavendra2017face} & 45.26 & 61.05 & 31.59 & 42.96 &\t28.42 &\t11.11 \\\\\nMB-LBP \\cite{scherhag2020face} & 69.28 & 74.87 & 42.67 & 63.00\t& 26.23 & 42.11 \\\\\nFS-SPN \\cite{zhang2018face} & 41.34 & 41.97 & 26.91 & 82.67 & 27.04 & 43.89 \\\\\t\nPipeline Footprint \\cite{neubert2018reducing} & 45.26 & 61.05 & 31.59 &\t42.96 & 28.42 & 37.78 \\\\\nPRNU Analysis \\cite{debiasi2018prnu} & 53.68 & 32.63 & 29.86 & 78.15 & 26.32 & 39.22 \\\\\nInception-MAD \\cite{damer2022privacy} & 50.53 &\t51.58 &\t37.39 &\t82.59 &\t29.47 & 44.00 \\\\\nMixFaceNet-MAD \\cite{damer2022privacy} & 63.16\t& 63.68 & 53.48 & 82.59 & 33.68 & 51.00 \\\\\nNoiseprint-SVM \\cite{cozzolino2018noiseprint} & 69.47 & 69.47 & 57.39 & 87.41 & 37.89 & 51.89 \\\\\n\\hline\nMeta-Baseline \\cite{chen2021meta} & 60.60 & 64.72 & 50.74 & 81.42 & 36.80 & 61.98 \\\\\nCOSOC \\cite{luo2021rectifying} & 65.98 & 75.04 & 54.90 & 89.60 & 41.81 & 72.62 \\\\\n\\hline\nFBC & 97.64 & 99.09 & 96.94 & 99.50 & 65.83 & 96.22 \\\\\nFBC-all & 98.11 & 99.48 & 98.42 & 100 & 84.17 & 96.78 \\\\\n\\textbf{FBC-APL} & \\textbf{98.82} & \\textbf{99.61} & \\textbf{98.24} & \\textbf{99.67} & \\textbf{96.67} & \\textbf{98.22} \\\\\n\\hline\n\\end{tabular}\n}\n\\end{table*}\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{image\/confusion_matrix.eps}\n\\vspace{-0.75cm}\n\\caption{Confusion matrix of few-shot MAF classification on hybrid dataset.}\n\\label{fig:confus}\n\\end{figure*}\n\n\n\\subsection{Few-shot Learning for MAD}\n\n\n\nWe extend the traditional MAD problem to a few-shot learning problem. First, the PRNU and Noiseprint features are extracted, respectively. Then an FBC module (VGG-16 \\cite{simonyan2014very} as the backbone) is trained as a binary classifier for feature fusion, taking PRNU and Noiseprint features from the entire training set (all images of predefined types) as input. Based on the pre-trained FBC module, 2,048-dimensional fusion representations are generated and then fed to the APL module for binary few-shot learning using the cross-entropy loss. Here, the Euclidean distance is used to query the top five nearest neighbors of the memory component. The APL output is a tuple of the probability distribution for each class. The results in terms of accuracy, D-EER, and ACER are shown in Table \\ref{tab:madfs}. Two methods based on FSL \\cite{luo2021rectifying,chen2021meta}, two methods based on face recognition (FR) \\cite{schroff2015facenet,deng2019arcface}, several popular deep models pre-trained \\cite{chollet2017xception,sandler2018mobilenetv2,zoph2018learning,huang2017densely} on ImageNet \\cite{deng2009imagenet}, and eight current MAD methods \\cite{raghavendra2017face,scherhag2020face,zhang2018face,neubert2018reducing,debiasi2018prnu,damer2022privacy,cozzolino2018noiseprint}, are adopted for comparison. Due to the effective fusion of two complementary patterns (i.e., PRNU and Noiseprint) and the APL module, our proposed FBC-APL clearly outperforms other competing methods by a large margin.\n\n\n\n\n\n\\subsection{Few-shot Learning for MAF}\nUnlike the few-shot MAD problem, in MAF, the FBC module uses ResNet50 \\cite{he2016deep} as the backbone and is pre-trained as a nine-class classifier using all the training data (about 80\\%) of the collected database. 
The FBC fusion feature obtained from the training samples is then fed to the APL module for multiclass few-shot learning. A cosine similarity score is adopted to compute the similarity between queries and the data stored in memory to find the three nearest neighbors. From Tables \\ref{tab:maf1} and \\ref{tab:maf5}, one can see that our FBC-APL has achieved outstanding performance, and some results are even better than the FBC-all method, which uses FBC features from all training data to fit SVM for classification. To better illustrate the effectiveness of the proposed FBC-FSL method, we have compared the confusion matrix for nine different classes (including bona fide and eight different morphing models), as shown in Fig. \\ref{fig:confus}.\n\n\n\n\\subsection{Discussions and Limitations}\nWhy did the proposed method outperform other competing methods by a large margin? We believe there are three contributing reasons. First, PRNU and Noiseprint feature maps as shown in Fig. \\ref{fig:featurefig} have shown better discriminative capability than others; meanwhile, their complementary property makes fusion an efficient strategy for improving the accuracy. Second, we have specifically taken the few-shot constraints into the design (i.e., the adoption of APL module) while other competing approaches often assume numerous training samples. Third, from binary MAD to multi-class MAF, our FBC fusion strategy is more effective on distinguishing different classes as shown in Fig. \\ref{fig:confus}. Note that we have achieved unanimously better results than other methods across six different datasets, as shown in Table \\ref{tab:maf5}, which justifies the good generalization property of our approach.\n\nThe overall pipeline in Fig. \\ref{fig:pipeline} can be further optimized by end-to-end training. In our current implementation, the three steps are separated, that is, the extraction of PRNU and Noiseprint features, FBC-based fusion, and APL-based FSL. From the perspective of network design, end-to-end training could further improve the performance of the FBC-APL model. Moreover, there are still smaller and more challenging datasets for morphing attacks in the public domain. Validation of the generalization property for the FBC-APL model remains to be completed, especially when novel face morphing attacks (e.g., adversarial morphing attack \\cite{wang2020amora}, transformer-based, and 3D reconstruction-based face morphing) are invented. Finally, we have not considered the so-called post-morphing process \\cite{damer2021pw} where the print and scan operations are performed when issuing a passport or identity document.\n\n\n\n\\section{Conclusion and Future Work}\n\\label{con}\n\\par Face morphing attacks pose a serious security threat to FRS. In this work, we proposed a few-shot learning framework for the detection of non-reference morphing attacks and fingerprinting problems based on factorized bilinear coding of two types of camera fingerprint feature, PRNU and Noiseprint. Additionally, a large-scale database is collected that contains five types of face dataset and eight different morphing methods to evaluate the proposed few-shot MAD and fingerprinting problem. The results show outstanding performance of the proposed fusion-based few-shot MAF framework on our newly collected large-scale morphing dataset. \nWe note that face-morphing attack and defense research is likely to coevolve in the future. 
Future work on the attack side will include the invention of more powerful morphing attacks, such as GANformer-based \\cite{hudson2021generative} and diffusion model-based \\cite{dhariwal2021diffusion}. Consequently, defense models that include MAD and MAF could focus on the study of the feasibility of detecting novel attacks and morphed face images from printed and scanned image data. In practical applications, optimizing differential morphing attack detection with live trusted capture is also an interesting new research direction.\n\n\n\\section*{Acknowledgments}\nThis work was partially supported by the NSF Center for Identification (CITeR) awards 20s14l and 21s3li.\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgements}\nThe IceCube collaboration gratefully acknowledges the support from the following agencies and institutions: USA {\\textendash} U.S. National Science Foundation-Office of Polar Programs,\nU.S. National Science Foundation-Physics Division,\nU.S. National Science Foundation-EPSCoR,\nWisconsin Alumni Research Foundation,\nCenter for High Throughput Computing (CHTC) at the University of Wisconsin{\\textendash}Madison,\nOpen Science Grid (OSG),\nExtreme Science and Engineering Discovery Environment (XSEDE),\nFrontera computing project at the Texas Advanced Computing Center,\nU.S. Department of Energy-National Energy Research Scientific Computing Center,\nParticle astrophysics research computing center at the University of Maryland,\nInstitute for Cyber-Enabled Research at Michigan State University,\nand Astroparticle physics computational facility at Marquette University;\nBelgium {\\textendash} Funds for Scientific Research (FRS-FNRS and FWO),\nFWO Odysseus and Big Science programmes,\nand Belgian Federal Science Policy Office (Belspo);\nGermany {\\textendash} Bundesministerium f{\\\"u}r Bildung und Forschung (BMBF),\nDeutsche Forschungsgemeinschaft (DFG),\nHelmholtz Alliance for Astroparticle Physics (HAP),\nInitiative and Networking Fund of the Helmholtz Association,\nDeutsches Elektronen Synchrotron (DESY),\nand High Performance Computing cluster of the RWTH Aachen;\nSweden {\\textendash} Swedish Research Council,\nSwedish Polar Research Secretariat,\nSwedish National Infrastructure for Computing (SNIC),\nand Knut and Alice Wallenberg Foundation;\nAustralia {\\textendash} Australian Research Council;\nCanada {\\textendash} Natural Sciences and Engineering Research Council of Canada,\nCalcul Qu{\\'e}bec, Compute Ontario, Canada Foundation for Innovation, WestGrid, and Compute Canada;\nDenmark {\\textendash} Villum Fonden and Carlsberg Foundation;\nNew Zealand {\\textendash} Marsden Fund;\nJapan {\\textendash} Japan Society for Promotion of Science (JSPS)\nand Institute for Global Prominent Research (IGPR) of Chiba University;\nKorea {\\textendash} National Research Foundation of Korea (NRF);\nSwitzerland {\\textendash} Swiss National Science Foundation (SNSF);\nUnited Kingdom {\\textendash} Department of Physics, University of Oxford.\n\\section{Investigation of the significance of TXS 0506+056}\n\\label{sec:TXS_significance_investigation}\nThe significance of TXS 0506+056 found by this multi-flare algorithm is smaller than the (single-flare) time-dependent significance that was determined in \\cite{IceCube:2018cha}. 
The goal of this Appendix is to show that the decrease of significance is only due to the different event selection of the sample used in this analysis, and not due to the different likelihood algorithms. It is mainly related to 2 cascade events that are rejected in the new event selection, presented in~\\citep{Aartsen:2019fau}. This was discussed also in IceCube~\\citep{Abbasi:2021bvk}. As a matter of fact, the new selection was focused on muon tracks for achieving best angular resolutions for the point-source search.\n\nThe differences between this analysis and the one described in \\cite{IceCube:2018cha} are mainly of three types. These are investigated using the analysis described in this letter and the one presented in \\cite{IceCube:2018cha} to find out how much each of them contributes to the change in significance of TXS 0506+056. The results are summarized in Table~\\ref{tab:TXS_comparisons}.\n\n\\paragraph{\\textbf{Different datasets:}}\n As mentioned also in Section \\ref{sec:detector}, the event selections used to produce the dataset analyzed in \\cite{IceCube:2018cha} and the one analyzed in this work (from~\\cite{Aartsen:2019fau}) are different. According to the internal IceCube nomenclature, the two datasets are referred to as \\MA{{\\tt PSTracks v2}} and \\MA{{\\tt PSTracks v3}}, respectively. In some cases the different event selection results in the reconstruction of slightly different energy and local angles. An extensive and detailed description of the two datasets can be found in~\\cite{Abbasi:2021bvk}.\n \n The significance of TXS 0506+056 is estimated on \\MA{{\\tt PSTracks v2}} and \\MA{{\\tt PSTracks v3}} by applying the multi-flare algorithm to the years 2012-2015 (containing only one of the two flares detected by this analysis). We observe the same drop in significance (from $4.0~\\sigma$ in \\MA{{\\tt PSTracks v2}} to $2.6~\\sigma$ in \\MA{{\\tt PSTracks v3}}) described in~\\cite{Abbasi:2021bvk}. The significance observed for \\MA{{\\tt PSTracks v3}} increases to $3.4~\\sigma$ if the two high-energy events, present in \\MA{{\\tt PSTracks v2}} but absent in \\MA{{\\tt PSTracks v3}}, are added by hand to the dataset. It is worth noticing also that the pre-trial significance observed for \\MA{{\\tt PSTracks v2}} with the multi-flare algorithm is not different to the pre-trial significance reported in \\citep{IceCube:2018cha}, which was obtained with a single-flare algorithm.\n \n \n \n \n \n \n\n \n \n \n \n \n \n \n \n \n \n \\paragraph{\\textbf{Different algorithms:}}\n The multi-flare algorithm has been developed for this analysis and applied for the first time in this work. \n This is a crucial difference between this work and the one presented in \\cite{IceCube:2018cha}, since a multi-flare likelihood could in principle consist of more fit parameters than a single-flare likelihood. The increased parameter space of the fit may thus degrade the sensitivity. This degradation was avoided by requiring a pre-selection of candidate flares with $\\mathrm{TS}\\ge2$ (see Section \\ref{sec:analysis} and Appendix~\\ref{sec:multi-flare_algorithm}).\n \n Other minor improvements between the two analyses concern:\n \n a Gaussian integral factor, included in the marginalization term to correct for boundary effects;\n the time PDF normalization, set to 1 across each IceCube sample by considering only up times of the detector (in \\cite{IceCube:2018cha} it was set to 1 in an infinite range, regardless the up times). 
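\nAs an illustration of the second improvement, the sketch below renormalizes a Gaussian flare-time profile over a set of detector up-time intervals instead of over an infinite range; the interval values are placeholders and the Gaussian shape is only one possible flare profile, so this is a schematic example and not the analysis code itself.\n
\\begin{verbatim}\n
# Minimal sketch: normalize a Gaussian flare-time PDF to unit integral\n
# over detector up-times only, rather than over an infinite time range.\n
from scipy.stats import norm\n
\n
def time_pdf(t, t0, sigma, uptimes):\n
    # uptimes: list of (start, stop) MJD pairs when the detector was live.\n
    norm_const = sum(norm.cdf(stop, t0, sigma) - norm.cdf(start, t0, sigma)\n
                     for start, stop in uptimes)\n
    live = any(start <= t <= stop for start, stop in uptimes)\n
    return norm.pdf(t, t0, sigma) / norm_const if live else 0.0\n
\n
# Placeholder up-time windows (MJD), for illustration only.\n
uptimes = [(56062.0, 56200.0), (56210.0, 56414.0)]\n
print(time_pdf(56150.0, t0=56100.0, sigma=30.0, uptimes=uptimes))\n
\\end{verbatim}\n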
The results, shown in Table~\\ref{tab:TXS_comparisons} for the single- and multi-flare algorithm applied to the 2012-2015 data, suggest that the multi-flare algorithm is not responsible for the drop of the significance, when applied to the same dataset. \n\n \n \\paragraph{\\textbf{Different strategies for combining independent samples:}} \n The third and last potential source of change in significance is due to the different strategies adopted to combine the IceCube samples.\n Since the 10-year data sample of IceCube concerns different IceCube detector configurations, triggers and event cuts, this analysis is based on the maximization of the joint likelihood defined as the product of the likelihoods of each IceCube sample (see Section \\ref{sec:analysis}). The strategy adopted in~\\cite{IceCube:2018cha}, instead, consists in maximizing the likelihood of each IceCube sample, picking up the most significant p-value and reporting it as post-trial after correcting for the look-elsewhere effect. Such a correction is made by penalizing the most significant $p$-value by the ratio of the livetime of the sample with the most significant $p$-value to the total time. To investigate this difference, the single-flare algorithm is applied to \\MA{{\\tt PSTracks v3}}. To reproduce the analysis in~\\citep{IceCube:2018cha}, the TS is maximized only across the 3 years between 2012-2015 (containing the most significant flare) and the $p$-value is penalized by the ratio of 10 years to 3 years, adopting the same logic described in \\cite{IceCube:2018cha}. In the analysis presented in this letter, the whole 10-year data are analyzed with a single joint likelihood (as described in Section \\ref{sec:analysis} but without the multiple flare feature), and the same penalization of the $p$-value is not needed in this case. 
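\nFor clarity, the livetime penalization used in the separate-likelihood strategy can be sketched as follows; this is only an illustrative approximation of the trial correction described above, not the exact prescription of \\cite{IceCube:2018cha}.\n\n\\begin{verbatim}\ndef penalized_p_value(p_best, livetime_best_sample, livetime_total):\n    # Penalize the most significant per-sample p-value by the ratio of the\n    # total livetime to the livetime of the sample in which it was found\n    # (a simple trial-factor correction, capped at 1).\n    return min(1.0, p_best * livetime_total / livetime_best_sample)\n\\end{verbatim}\n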
As seen in Table~\\ref{tab:TXS_comparisons}, it can be stated that the results obtained in the two cases are comparable and that the strategy adopted to combine the different samples is not responsible for a substantial change in significance.\n\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{>{\\centering\\arraybackslash}m{5cm} >{\\centering\\arraybackslash}m{3.5cm} >{\\centering\\arraybackslash}m{3.5cm}}\n \\multicolumn{3}{c}{TXS 0506+056 change in significance}\\\\\n \\hline\n \\hline\n \\multirow{3}{*}{\\parbox{4.2cm}{\\centering Different datasets (multi-flare, 2012-2015 only)}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering \\MA{{\\tt PSTracks v2}}\\\\(\\cite{IceCube:2018cha})}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering \\MA{{\\tt PSTracks v3}}\\\\(This work)}}\\\\\n & & \\\\\n & $4.0~\\sigma$ & $2.6~\\sigma$ \\\\[3pt]\n \\hline\n \\multirow{6}{*}{\\parbox{3cm}{\\centering Different algorithms (2012-2015 only)}} &\\multirow{2}{*}{\\parbox{3.5cm}{\\centering Single-flare\\\\(\\cite{IceCube:2018cha})}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering Multi-flare\\\\(This work)}}\\\\\n & & \\\\\n & \\multicolumn{2}{c}{\\MA{{\\tt PSTracks v2}}}\\\\\n & $4.0~\\sigma$ & $4.0~\\sigma$ \\\\\n & \\multicolumn{2}{c}{\\MA{{\\tt PSTracks v3}}}\\\\\n & $2.7~\\sigma$ & $2.6~\\sigma$ \\\\\n \\hline\n \\multirow{3}{*}{\\parbox{5cm}{\\centering Strategy of sample combination (single-flare, \\MA{{\\tt PSTracks v3}})}} & \\multirow{2}{*}{\\parbox{3.5cm}{Separate likelihoods\\\\(\\centering\\cite{IceCube:2018cha})}} & \\multirow{2}{*}{\\parbox{3.5cm}{\\centering Joint likelihood\\\\(This work)}}\\\\\n & & \\\\\n & $2.2~\\sigma$ (post-trial) & $2.3~\\sigma$ \\\\\n \\hline\n \\hline\n\\end{tabular}\n\\caption{Results of the comparison between the significance obtained for TXS 0506+056 when using an analysis with features similar to the one in \\cite{IceCube:2018cha} and the one presented in this paper. When testing the impact of different datasets, the years 2012-2015 are analyzed with the multi-flare algorithm. \nWhen testing the impact of a different strategy in the combination of the samples, the single-flare algorithm is used on the dataset \\MA{{\\tt PSTracks v3}}: in one case only the IceCube sample containing the known flare is analyzed and the p-value penalized, adopting the same logic as in~\\cite{IceCube:2018cha}; in the other case all the 10-year samples are combined in a joint likelihood, as described in Section~\\ref{sec:analysis}, and no penalization is needed.}\n\\label{tab:TXS_comparisons}\n\\end{table}\n\n\n\n\n\\section{\\textit{A posteriori} comparisons with the time-integrated analysis}\n\\label{app:variab}\n\nThe results of these time-dependent analyses, despite unveiling new features of the source catalog, partly overlap with the results of the time-integrated search~\\citep{Aartsen:2019fau}. In fact, the time-dependent and time-integrated analyses are based on similar likelihood functions, sharing the same space and energy PDFs, but the time-dependent analysis distinguishes itself by adding a time PDF. This time-dependent analysis was planned together with the time-integrated analysis, and it was not developed based on the time-integrated unblinded results. Nonetheless, one might wonder how the results of the time-dependent analysis can be interpreted in the light of the prior knowledge of the time-integrated results. To address such a question, two tests are proposed in this Appendix. 
A first test estimates the time variability of the four most significant sources of the time-integrated analysis. A second test estimates the probability of obtaining the observed pre-trial significance of $3.8~\\sigma$ from a time-dependent binomial test (see Section~\\ref{sec:results}) on the source catalog, in the assumption that the neutrino excess observed by the time-integrated analysis~\\citep{Aartsen:2019fau} does not have any time structure. Both tests exploit the same approach, based on producing pseudo-realizations of the data by randomizing the time of the events and, unlike for the standard time-dependent analysis, keeping fixed the associated equatorial coordinates.\n\n\\paragraph{\\bf Time-variability test:}\nThis test aims at quantifying the time variability of the highly-significant events detected from the directions of NGC 1068, TXS 0506+056, PKS 1424+240 and GB6 J1542+6129 and at testing the compatibility of their arrival time with a flat distribution.\nThis test is sensitive only to the time information of the events and is unavoidably less sensitive than the time-dependent search described in Section~\\ref{sec:analysis} (referred to as standard time-dependent analysis), which is sensitive to energy, space and time information. Moreover, the significance of the likelihood method using the three variables at the same time is not equivalent to the product of likelihood methods that use one variable at a time. \n\nThe null (or background) hypothesis of the time-variability test assumes that the time-integrated signal-like events (i.e. the events with the highest time-integrated signal-over-background ratio, that mostly contribute to the significance around each source direction) are not clustered in time. Pseudo-realizations of the data for this null hypothesis (also called background samples which allow to count trials) are obtained similarly to the standard time-dependent analysis: events in a declination band around the location of the tested sources are selected and assigned a new time taken from a real up time of the detector. This procedure destroys any time correlation among events. However, while the standard analysis keeps the local coordinates of an event (azimuth and zenith) fixed and recalculates the right ascension using the new randomized time, the time-variability test freezes the equatorial coordinates of the events at the measured values, and randomizes the azimuth (notably the zenith angle, corresponding to an equatorial coordinate, at the South Pole does not depend on the time). This method guarantees that the same time-integrated signal-like events from the direction of a given source are present in the background sample with randomized times. On the other hand, this method flattens out the sub-daily modulation of the event rate in local coordinates due to the increased reconstruction efficiency along azimuth directions where more strings are aligned. As described in Section~\\ref{sec:analysis}, in the standard analysis this sub-daily modulation of the event rate is taken into account by using a correction in local coordinates to the background PDF. The azimuth dependency of the reconstruction efficiency is averaged out for flares longer than $\\sigma_T=0.2$ days as a consequence of the Earth rotation, while it might induce a change up to 5\\% in the TS for flares as short as $\\sigma_T = 10^{-3}$ days. 
Given that the variability observed for the four most significant time-integrated sources was beyond a flare duration of $\\sigma_T\\gg 1$ day, a lower limit $\\sigma_T^{min} = 0.2$ days is used for this time-variability test.\n\nWhereas for the standard analysis signal samples are produced by injecting Monte Carlo events on top of the background events, for the time-variability test $n_s$ events among real signal-like events are selected in the data and their times are sampled from a Gaussian distribution. The real signal-like events, potentially usable for signal injection in the time-variability test, are randomly chosen among the $2\\hat{N}_s^{t-int}$ events with the highest time-integrated signal-over-background ratio, where $\\hat{N}_s^{t-int}$ is the best-fit number of signal-like events reported by the time-integrated analysis~\\citep{Aartsen:2019fau}.\n\nThe likelihood in Eq.~\\ref{eq:10-year-likelihood} is maximized on the background and signal samples of the time-variability test and the corresponding TS distributions (for illustration at the location of NGC 1068) are shown in Fig.~\\ref{fig:ts_comparison}, for comparison with the same distributions for the standard analysis. For both analyses the separation of the signal and background TS is better for shorter flares (left plots) than longer ones (right plots). A notable feature concerns the background TS distributions in blue. For the standard analysis the TS distribution has a characteristic spike in the first bin populated by under-fluctuations set to zero. On the other hand, the TS distribution for the time-variability test is on average shifted towards larger values of TS, showing a more signal-like behavior. This is a consequence of preserving the same time-integrated space and energy variables of signal-like events in the background sample with the method described above. \nIt is to be noted that the time-integrated analysis in~\\cite{Aartsen:2019fau} fits a spectral index of NGC 1068 of 3.16, while the best-fit spectral index for the time-dependent analysis is harder, namely 2.8 (see Tab.~\\ref{tab:PS_results1}). As a consequence of preserving the spatial and energy information of the events, the background and signal samples of the time-variability test (used to make the distributions in the last row in Fig.~\\ref{fig:time-variability_comparison}) have a varying spectral index centered around 2.8. Notably, about 89\\% of the spectral indices of the 100,000 generated background samples are contained between $\\gamma^f=2$ and of $\\gamma^f=3$. Hence, these values of the spectral indices are used for the signal injection in the standard analysis when comparing the TS distributions of the standard analysis with the same distributions of the time-variability test in Fig.~\\ref{fig:time-variability_comparison}. In general, for harder spectral indices and the same flare duration $\\sigma_T^f$, the time-variability test characterizes the difference between background and signal less powerfully than the standard analysis. In fact, in the time-variability test the coordinates of the events are frozen to the true values, hence the differences between the spatial and energy PDFs of signal and background are not exploited, unlike for the standard analysis. 
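\nTo make the difference between the two randomization schemes used above explicit, a minimal sketch is given below. The uniform sampling of times from up-time intervals and the simple sidereal-angle conversion are illustrative assumptions (as are all names); this is not the actual analysis code.\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef sample_uptimes(n, uptimes):\n    # Draw n event times uniformly from a list of (start, stop) up-time\n    # intervals in MJD, weighting each interval by its length.\n    lengths = np.array([b - a for a, b in uptimes])\n    idx = rng.choice(len(uptimes), size=n, p=lengths / lengths.sum())\n    starts = np.array([uptimes[i][0] for i in idx])\n    return starts + rng.random(n) * lengths[idx]\n\ndef scramble_standard(azimuth, zenith, uptimes):\n    # Standard analysis: new times, local coordinates kept fixed, right\n    # ascension recomputed from the azimuth and the new (sidereal) time.\n    t = sample_uptimes(len(azimuth), uptimes)\n    sidereal = 2.0 * np.pi * ((t * 1.00273790935) % 1.0)  # rough approximation\n    ra = (azimuth + sidereal) % (2.0 * np.pi)\n    dec = zenith - np.pi / 2.0  # at the South Pole dec is fixed by the zenith\n    return t, ra, dec\n\ndef scramble_time_variability(ra, dec, uptimes):\n    # Time-variability test: only the times are randomized; the measured\n    # equatorial coordinates are frozen, preserving the time-integrated excess.\n    return sample_uptimes(len(ra), uptimes), ra, dec\n\\end{verbatim}\n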
\n\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=.95\\linewidth]{figures\/TSdistributions.png}\n\t\\caption{Comparison of the TS distributions for signals of different intensity $n_s$ and for the background between the standard analysis (first and second row) and the time-variability test (third row) at the location of NGC 1068. The left plots are made for a flare duration of $\\sigma_T=1$~d, the right plots for 100~d. Spectral indices of $\\gamma^f=2$ (first row) and $\\gamma^f=3$ (second row) are used for the injected signal in the standard analysis.}\n\t\\label{fig:ts_comparison}\n\\end{figure}\n\n \nTo complete the comparison between the standard time-dependent analysis and the time-variability test, the sensitivity and $5~\\sigma$ DP at the location of NGC 1068 are shown for the two analyses in Fig.~\\ref{fig:time-variability_comparison}. The times of signal events are sampled from a Gaussian distribution with fixed mean $t_0=58000$ MJD and variable width $\\sigma_T$. This plot can be understood by observing the TS distributions in Figure~\\ref{fig:ts_comparison}.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=.6\\linewidth]{figures\/sens_DP_time-variability_VS_std-ana.png}\n\t\\caption{Comparison of the sensitivity (dashed lines) and $5~\\sigma$ DP (solid lines) of the standard analysis (blue and orange, respectively for $\\gamma=2$ and $\\gamma=3$) and the time-variability test (green lines) in terms of the time-integrated flux per flare $F_0^f$ described in equation~\\ref{eq:time-integrated_flux}. These curves are produced at the location of NGC 1068 under the hypothesis of a single signal flare. Notice that the reconstructed value of the spectral index for NGC 1068 in \\cite{Aartsen:2019fau} is 3.16.}\n\t\\label{fig:time-variability_comparison}\n\\end{figure}\n\nFor each of the four aforementioned sources, the likelihood in Eq.~\\ref{eq:10-year-likelihood} is maximized on the real data and an observed TS is reported. A pre-trial p-value for the time-variability test is then evaluated by counting the fraction of generated background samples with TS larger than the observed TS. The post-trial p-value of each source is obtained by applying a Sidak correction (\\cite{Abdi2007}) to the pre-trial p-values with penalty factor 4 (the number of sources). The results of this test are shown in Table~\\ref{tab:time_var_analysis}. None of the four sources shows a significant time variability for the signal-like neutrino events. \n\n\\begin{table}\n\t\\centering\n\t\\begin{tabular}{>{\\centering\\arraybackslash}m{2.8cm} >{\\centering\\arraybackslash}m{2.8cm} >{\\centering\\arraybackslash}m{2.8cm} }\n\t\t\\multicolumn{3}{c}{Time-variability results}\\\\\n\t\t\\hline\n\t\t\\hline\n\t\tSource & Pre-trial p-value & post-trial p-value\\\\[3pt] \\hline\n\t\t\\vspace{3pt}\n\t\tNGC 1068 & 0.13 & 0.43 \\\\[3pt]\n\t\tTXS 0506+056 & 0.24 & 0.67\\\\ [3pt]\n\t\tPKS 1424+240 & 0.33 & 0.80 \\\\[3pt]\n\t\tGB6 J1542+6129 & 0.029 & 0.11 \\\\[3pt]\n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Results of the time-variability test applied to the four most significant sources of the time-integrated analysis of Ref.~\\cite{Aartsen:2019fau}. The table shows the p-values before (pre-trial) and after (post-trial) the Sidak correction with penalty factor 4. 
As described in this Appendix, the time-variability test only assesses the time distribution of the recorded events, by comparing with simulated samples in which the event directions and energies remain fixed as recorded, but times are randomized according to a uniform distribution.}\n\t\\label{tab:time_var_analysis}\n\\end{table}\n\nIt is worth noticing the case of M87: this source was an under-fluctuation for the time-integrated analysis, with no signal-like events identified in \\citep{Aartsen:2019fau}, but it shows up as the most significant source of the catalog when a time-dependent analysis is performed. Although a time-variability test is not made in this case, with $\\hat{n}_s=3$ signal-like neutrino events in a time scale of $\\hat{\\sigma}_T=2.0$ minutes, almost the entire significance of this source is expected to come from the time variability of the detected events.\n\n\\paragraph{\\bf Posterior time-dependent binomial test:} The second test determines the \\textit{a posteriori} probability that the time-dependent binomial test (see Section~\\ref{sec:analysis} referred to as \"standard\" in this Appendix) produces a pre-trial significance as high or higher than the observed value of $3.8~\\sigma$, in the assumption that the time-integrated neutrino excess is steady in time (background hypothesis). To do so, the same binomial test described in Section~\\ref{sec:analysis} is repeated on the list of p-values of the Northern sources. The per-source p-values are computed in the same way, by comparing the TS of each source with a distribution of TS from fully-scrambled (randomized times and recalculated right ascensions) background samples at the respective declination. As a matter of fact, the binomial p-value of the data for this test (referred to as \"posterior binomial test\") is the same as reported in Section~\\ref{sec:results} ($3.8\\sigma$). Nevertheless, the difference between the standard and the posterior binomial test is in the realization of the background samples used to translate the binomial p-value into a post-trial p-value.\n\nIn the posterior binomial test, background pseudo-realizations of the data for all the Northern sources of the catalog are obtained in the same way as described for the time-variability test: the times of the events are randomized while the equatorial coordinates are fixed at the recorded values, such that the spatial correlations among the events are preserved and the time-integrated information is effectively incorporated in the background hypothesis. For each pseudo-realization of the Northern catalog, the likelihood in Eq.~\\ref{eq:10-year-likelihood} is maximized at the location of each source and the corresponding TS is converted into a pre-trial p-value as described in Section~\\ref{sec:analysis}, by comparison with a distribution of TS from fully-scrambled background samples at the same declination. The lower limit on the flare duration $\\sigma_T^{min}$ is removed in this test to allow a proper comparison with the standard time-dependent binomial test. As a consequence, the azimuth-dependent correction to the background spatial PDF is neglected. However, this is a minor correction that has an impact at most of 5\\% only for time scales of the flares as short as $\\sim10^{-3}$ days.\n\nOnce a pre-trial p-value is computed for all the sources in a particular pseudo-realization of the Northern catalog, the binomial test is performed on this set of p-values, resulting in a background binomial p-value $P_{bin}$. 
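\nFor reference, the binomial scan over the sorted per-source p-values can be sketched as follows (an illustrative snippet, not the analysis software; the conversion to a post-trial probability always relies on the pseudo-realizations described in the text).\n\n\\begin{verbatim}\nfrom scipy.stats import binom\n\ndef binomial_scan(p_values):\n    # For each k, compute the probability of finding at least k local\n    # p-values as small as the k-th smallest one among the N sources,\n    # and return the smallest (pre-trial) binomial p-value.\n    p_sorted = sorted(p_values)\n    n = len(p_sorted)\n    return min(binom.sf(k - 1, n, p_k)  # P(X >= k), X ~ Binomial(n, p_k)\n               for k, p_k in enumerate(p_sorted, start=1))\n\\end{verbatim}\n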
This method is then repeated on many pseudo-realizations of the Northern catalog to produce the distribution of background binomial p-values for the posterior binomial test shown in blue in Fig.~\\ref{fig:binomial_p-value_distr}. These p-values are the typical binomial p-values that the binomial test produces if the neutrino events of the data (including the time-integrated excess) have no time structure. For comparison, the orange histogram in Fig.~\\ref{fig:binomial_p-value_distr} is the distribution of background binomial p-values for the standard binomial test, used in Section~\\ref{sec:results} to calculate the post-trial binomial p-value in the assumption that the time-integrated information is also randomized. Note that when the time-integrated information is preserved, the overall distribution is shifted towards higher values of $-\\log_{10}(P_{bin})$ as a consequence of including the additional information about the time-integrated excess in the background samples.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=.8\\linewidth]{figures\/binomial_test_comparison.png}\n\t\\caption{Background distribution of the binomial p-value $P_{bin}$ for the posterior (blue) and standard (orange) binomial test. For the posterior binomial test, the background sample is produced by randomizing the time of the events while keeping fixed the equatorial coordinates; for the standard analysis, the equatorial coordinates are recalculated (assuming fixed local coordinates) after the time is randomized.}\n\t\\label{fig:binomial_p-value_distr}\n\\end{figure}\n\nFinally, the probability that the time-dependent binomial test produces a more significant result than the one observed in the real data ($3.8~\\sigma$ pre-trial), given the prior knowledge about the time-integrated excess and under the assumption that the neutrino events do not have any time correlation, is estimated by counting the fraction of background binomial p-values of the posterior binomial test that are more significant than the observed result. Such estimation leads to a probability of $0.9\\%$.\n\n\\section{Multi-flare algorithm}\n\\label{sec:multi-flare_algorithm}\n\n\nThe multi-flare algorithm aims at determining the number of flares to fit in the data. This is done by evaluating the TS of clusters of events with the highest signal-over-background ratio of the spatial and energy PDFs and selecting as candidate flares those that pass a given TS threshold. On the one hand, a high value of TS threshold is required to suppress background fluctuations (fake flares), on the other hand a low value is desired to avoid the rejection of signal flares of low intensity.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.80\\linewidth]{figures\/background_reco_flares_VS_declinations.png} \n\t\\caption{Fraction of trials in which, under the null hypothesis, 1 single flare (blue line) or more than 1 flare (orange line) are reconstructed as a function of the declination if a TS threshold of 2 is applied to select the candidate flares.}\n\t\\label{fig:bkg_reco_flares}\n\\end{figure}\n\nThis multi-flare algorithm selects as candidate flares the cluster of events with the highest TS and all additional clusters of events passing a TS threshold of 2. The choice of this threshold ensures a high efficiency in rejecting fake flares, with a frequency of multiple flare reconstruction under the null hypothesis of $\\lesssim 0.1\\%$ as shown in Fig.~\\ref{fig:bkg_reco_flares}. 
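\nThe selection step can be summarized by the following sketch (hypothetical data structures; the actual implementation evaluates the TS of clusters of events with the highest signal-over-background ratio as described above).\n\n\\begin{verbatim}\ndef select_candidate_flares(clusters, ts_threshold=2.0):\n    # clusters: list of (ts, flare_parameters) pairs for the evaluated\n    # clusters of events. Keep the cluster with the highest TS plus every\n    # additional cluster whose TS passes the threshold (TS >= 2 here).\n    if not clusters:\n        return []\n    ranked = sorted(clusters, key=lambda c: c[0], reverse=True)\n    return [ranked[0]] + [c for c in ranked[1:] if c[0] >= ts_threshold]\n\\end{verbatim}\n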
The resulting high rejection efficiency makes it possible to preserve a sensitivity and a DP comparable to those of the single-flare algorithm, as shown in Fig.~\\ref{fig:single_VS_multi_sensDP} at the declination of TXS 0506+056. Note additionally that if only one candidate flare is selected, the multi-flare algorithm is completely equivalent to the single-flare algorithm. \n\n\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.49\\linewidth]{figures\/sensitivity_singleflare_VS_multiflare.png}\n\t\\includegraphics[width=.49\\linewidth]{figures\/5sigmaDP_singleflare_VS_multiflare.png}\n\t\\caption{Sensitivity (left) and discovery potential (right) of the single-flare (orange lines) and multi-flare (blue lines) algorithm as a function of the flare duration $\\sigma_T$. Sensitivity and discovery potential are evaluated at the declination of TXS 0506+056 under the hypothesis of a single signal flare with a spectrum $E^{-2}$ (solid lines) and $E^{-3}$ (dashed lines). The bottom plots show the ratio of the multi-flare to single-flare curves above.}\n\t\\label{fig:single_VS_multi_sensDP}\n\\end{figure}\n\nTo quantify the goodness of the multi-flare reconstruction, two quantities are introduced: the multi-flare efficiency, defined as the fraction of trials in which all the signal flares are identified (no matter if additional fake flares are also reconstructed), and the multi-flare purity, defined as the fraction of trials in which no fake flares are reconstructed (no matter if all the signal flares are identified). The former is an indicator of how often the algorithm is able to identify \\textit{all} the signal flares injected in the data; the latter is an indicator of how well the algorithm is able to reject fake flares. Note that a partially reconstructed flare is considered as a fake flare in the estimation of efficiency and purity. These two quantities are shown in Fig.~\\ref{fig:efficiency_and_purity} under the hypothesis of two flares of equal intensity as a function of the time-integrated flux of each flare, for spectra $E^{-2}$ and $E^{-3}$ and for some values of $\\sigma_T$. The efficiency asymptotically reaches the value of 1: if the signal is strong enough, the algorithm is always able to identify it. However, the flux required to reach such an asymptotic \\textit{plateau} depends on the parameters of the flare (spectral index $\\gamma$ and flare duration $\\sigma_T$), and notably in extreme cases (soft spectra, long flare duration) the convergence is very slow, as a consequence of the high TS threshold. Nevertheless, note that in such extreme cases the flare intensity is mostly below the sensitivity level. The purity also tends to an asymptotic \\textit{plateau} at $\\gtrsim 95\\%$, with a rapidity that depends on the flare parameters.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_efficiency_gamma2.png} \n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_purity_gamma2.png}\n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_efficiency_gamma3.png} \n\t\\includegraphics[width=.49\\linewidth]{figures\/working_point_purity_gamma3.png}\n\t\\caption{Efficiency (left plots) and purity (right plots) under the hypothesis of two flares of equal intensity as a function of the time-integrated flux of each flare. 
Efficiency and purity are calculated for a spectrum $E^{-2}$ (top plots, declination of TXS 0506+056) and $E^{-3}$ (bottom plots, declination of NGC 1068) and for some values of $\\sigma_T$ (see legend). Efficiency is defined as the fraction of trials in which \\textit{all} the injected flares are correctly reconstructed (no matter if additional fake flares are also reconstructed); purity is defined as the fraction of trials in which no fake flares are reconstructed. Note that a partially reconstructed flare is considered as a fake flare when calculating the efficiency and purity.}\n\t\\label{fig:efficiency_and_purity}\n\\end{figure}\n\n\n\\section{Sensitivity, discovery potential and upper limits}\n\\label{sec:sens_DP_upLims}\n\nThe sensitivity and discovery potential (DP) are evaluated by injecting a fake signal in the dataset and looking at the signal-like TS distributions. The sensitivity is defined as the signal flux required such that the resulting TS is greater than the background median in 90\\% of the trials; the $5~\\sigma$ DP is defined as the signal flux required such that the resulting TS is greater than the $5~\\sigma$ threshold of the background TS distribution in 50\\% of the trials. The sensitivity and $5~\\sigma$ discovery potential (DP) of the multi-flare analysis as a function of the declination are shown in Fig.~\\ref{fig:sensDP} for a single (left) and a double (right) signal flare hypothesis. In the latter case, the intensity and spectral shape of the two flares are the same.\n\nThe sensitivity and $5~\\sigma$ DP are expressed in terms of a time-integrated flux:\n\\begin{equation}\n \\label{eq:time-integrated_flux}\n F = \\int_{T_{live}}E^2\\Phi(E,t)dt=\\sum_{f=\\mathrm{flares}}F_0^f\\left(\\frac{E}{\\mathrm{TeV}}\\right)^{2-\\gamma^f},\n\\end{equation}\nwhere $F_0^f$ is the time-integrated flux normalization factor of the $f$-th flare, independent of the flare duration $\\sigma_T^f$ and carrying the units of an energy divided by an area, and $\\Phi(E,t)$ is the overall flux, defined as the sum of the flux of all the flares from a single direction:\n\\begin{equation}\n \\label{eq:flux_definition}\n \\Phi(E,t)=\\sum_{f=\\mathrm{flares}}\\frac{F_0^f}{\\sqrt{2\\pi}\\sigma_T^f}\\left(\\frac{E}{\\mathrm{TeV}}\\right)^{-\\gamma^f}G^f(t|t_0^f,\\sigma_T^f).\n\\end{equation}\nIn Eq. \\ref{eq:flux_definition}, $G^f(t|t_0^f,\\sigma_T^f)=\\exp\\left[-\\frac{1}{2}\\left(\\frac{t-t_0^f}{\\sigma_T^f}\\right)^2\\right]$ is the Gaussian time profile of the $f$-th flare.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=.49\\linewidth]{figures\/timeInt_sensitivity_singleflare.png} \n\t\\includegraphics[width=.49\\linewidth]{figures\/timeInt_sensitivity_doubleflare.png} \n\t\\caption{Sensitivity (dashed lines) and $5~\\sigma$ DP (solid lines) of the multi-flare analysis vs declination, expressed in terms of the flux normalization factor per flare $F_0^f$ defined in Eq.~\\ref{eq:time-integrated_flux}, under the hypothesis of a single (left plot) and a double (right plot) signal flare. \n\n\tThe assumed energy dependence of the flares has a spectral index of $\\gamma^f = 2$ and $\\gamma^f = 3$ (see labels), and a flare duration of $\\sigma_T^f = 1$~day (blue lines) and $\\sigma_T^f = 100$~days (orange lines). 
The double-flare\n\tassumes two identical and well separated flares.}\n\t\\label{fig:sensDP}\n\\end{figure}\n\nSensitivities and DPs are shown in Fig.~\\ref{fig:sensDP} for two different hypotheses of the spectral index of the flares ($\\gamma^f=2$ and $\\gamma^f=3$) and two different flare durations ($\\sigma_T^f=1$ day and $\\sigma_T^f=100$ days). In the double-flare case, two identical and well separated flares are assumed, with the same spectral index $\\gamma^f$, flare duration $\\sigma_T^f$ and time-integrated flux normalization per flare $F_0^f$.\n\nThe 90\\% confidence level (CL) upper limits on the flux of each source of the catalog are defined as the flux required to produce a TS distribution that exceeds the unblinded TS of the respective source in 90\\% of the trials. These upper limits are expressed in terms of a time-integrated flux by mean of the factor $F_{90\\%}$, defined as:\n\n\\begin{equation}\n \\label{eq:time-int_flux_upLims}\n F = F_{90\\%}\\sum_{f=\\mathrm{flares}}\\left(\\frac{E}{\\mathrm{TeV}}\\right)^{2-\\gamma^f}.\n\\end{equation}\n\nIn the case of TXS 0506+056, the only observed multi-flare source of the catalog, the upper limits are evaluated assuming the same flare intensity for the two flares. As a matter of fact, only one global factor $F_{90\\%}$ appears in Eq.~\\ref{eq:time-int_flux_upLims}.\n\nThe upper limits of the not under-fluctuating sources of the catalog, together with the coordinates, maximum-likelihood parameters and pre-trial p-values, are reported in Table~\\ref{tab:PS_results1}. To calculate these upper limits, a spectral index $\\gamma^f=2$ in Eq. \\ref{eq:time-int_flux_upLims} is assumed for all the flares, whereas the flare time $t_0^f$ and duration $\\sigma_T^f$ are taken as the maximum-likelihood parameters. Only one flare is injected for each source, except for TXS 0506+056 for which two flares are injected, according to the maximum-likelihood results.\n\n\\setlength\\LTleft{-3cm}\n\\begin{center}\n\t\\begin{longtable}{>{\\centering\\arraybackslash}m{3.1cm} >{\\centering\\arraybackslash}m{1.0cm} >{\\centering\\arraybackslash}m{1.0cm} >{\\centering\\arraybackslash}m{1.1cm} >{\\centering\\arraybackslash}m{1.0cm} >{\\centering\\arraybackslash}m{2.3cm} >{\\centering\\arraybackslash}m{2.2cm} >{\\centering\\arraybackslash}m{1.5cm} >{\\centering\\arraybackslash}m{1.9cm}}\n\t\\hline\n\t\t\\multicolumn{9}{|c|}{catalog results}\\\\ \\hline\n\t\tSource & R.A. & $\\delta$ & $\\hat{n}_s$ & $\\hat{\\gamma}$ & $\\hat{t}_0$ & $\\hat{\\sigma}_T$ & $-\\log_{10}(p_{loc})$ & $F_{90\\%}\\times10^{4}$\\\\\n\t\t & [ deg ] & [ deg ] & & & [ MJD ] & [ days ] & & [ TeV cm$^{-2}$ ] \\\\ [3pt]\n\t\t \\midrule\n\t\t\\endfirsthead\n\t\t\\midrule\n\t\tSource & R.A. 
& $\\delta$ & $\\hat{n}_s$ & $\\hat{\\gamma}$ & $\\hat{t}_0$ & $\\hat{\\sigma}_T$ & $-\\log_{10}(p_{loc})$ & $F_{90\\%}\\times10^{4}$\\\\\n\t\t & [ deg ] & [ deg ] & & & [ MJD ] & [ days ] & & [ TeV cm$^{-2}$ ] \\\\[3pt]\n\t\t\\midrule\n\t\t\\endhead\n\t\tS5 0716+71 & 110.49 & 71.34 & -- & -- & -- & -- & -- & --\\\\\n\t\tS4 1749+70 & 267.15 & 70.10 & -- & -- & -- & -- & -- & --\\\\\n\t\tM82 & 148.95 & 69.67 & 27.8 &4.0 & 57395.8 & 200.0 & 1.51 & 5.7\\\\\n\t\t1ES 1959+650 & 300.01 & 65.15 & 3.9 &3.3 & 55028.4 & $1.8\\times10^{-1}$ & 2.21 & 3.8\\\\\n\t\t\\textbf{GB6 J1542+6129} & \\textbf{235.75} & \\textbf{61.50} & $\\mathbf{23.7^{+9.7}_{-7.9}}$ & $\\mathbf{2.7^{+0.5}_{-0.3}}$ & $\\mathbf{57740^{+80}_{-60}}$ & $\\mathbf{147^{+110}_{-25}}$ & \\textbf{2.67} & \\textbf{5.3}\\\\\n\t\tPG 1246+586 & 192.08 & 58.34 & -- & -- & -- & -- & -- & --\\\\\n\t\tTXS 1902+556 & 285.80 & 55.68 & 3.2 &4.0 & 54862.5 & 6.0 & 0.46 & 3.6\\\\ \n\t\t4C +55.17 & 149.42 & 55.38 & 11.2 &3.6 & 58303.7 & 59.7 & 1.00 & 2.5\\\\ \n\t\tS4 1250+53 & 193.31 & 53.02 & 6.1 &2.2 & 55062.9 & 35.9 & 0.74 & 3.7\\\\ \n\t\t1ES 0806+524 & 122.46 & 52.31 & 6.5 &3.1 & 55248.3 & 43.3 & 0.39 & 2.8\\\\ \n\t\t1H 1013+498 & 153.77 & 49.43 & 3.1 &2.2 & 58053.6 & $2.7\\times10^{-1}$ & 0.41 & 1.2\\\\ \n\t\tB3 1343+451 & 206.40 & 44.88 & 4.0 &2.7 & 57856.5 & $2.8\\times10^{-1}$ & 0.49 & 1.2\\\\ \n\t\tMG4 J200112+4352 & 300.30 & 43.89 & 11.6 &2.0 & 56776.2 & 105.9 & 1.00 & 2.6\\\\\n\t\t3C 66A & 35.67 & 43.04 & -- & -- & -- & -- & -- & --\\\\\n\t\tS4 0814+42 & 124.56 & 42.38 & 3.4 &2.6 & 56301.3 & 3.1 & 0.47 & 1.3\\\\ \n\t\tBL Lac & 330.69 & 42.28 & 3.8 &4.0 & 54637.6 & 5.6 & 0.48 & 2.5\\\\\n\t\t2HWC J2031+415 & 307.93 & 41.51 & 18.8 & 3.4 & 58056.8 & 114.0 & 0.93 & 2.4\\\\\n\t\tNGC 1275 & 49.96 & 41.51 & -- & -- & -- & -- & -- & --\\\\\n\t\tB3 0609+413 & 93.22 & 41.37 & 8.7 &1.7 & 56736.2 & 163.7 & 0.90 & 2.5\\\\ \n\t\tM31 & 10.82 & 41.24 & 8.6 &2.3 & 57900.7 & 16.4 & 1.29 & 2.1\\\\\n\t\tTXS 2241+406 & 341.06 & 40.96 & 3.8 &2.9 & 55334.5 & $2.5\\times10^{-1}$ & 0.55 & 1.7\\\\\n\t\tGamma Cygni & 305.56 & 40.26 & 5.8 &1.5 & 57336.8 & 13.0 & 0.95 & 1.8\\\\\n\t\tMkn 501 & 253.47 & 39.76 & -- & -- & -- & -- & -- & --\\\\\n\t\tB3 0133+388 & 24.14 & 39.10 & -- & -- & -- & -- & -- & --\\\\\n\t\tMkn 421 & 166.12 & 38.21 & 2.9 &2.1 & 54875.0 & $7.6\\times10^{-1}$ & 1.23 & 2.8\\\\\n\t\t4C +38.41 & 248.82 & 38.14 & 6.2 &2.1 & 56751.6 & 9.0 & 0.60 & 1.5\\\\ \n\t\tMG2 J201534+3710 & 303.92 & 37.19 & 3.9 &1.3 & 57326.7 & 129.4 & 0.45 & 1.8\\\\ \n\t\tMGRO J2019+37 & 304.85 & 36.80 & 4.2 &1.4 & 57330.9 & 135.0 & 0.40 & 1.7\\\\\n\t\tB2 0218+357 & 35.28 & 35.94 & -- & -- & -- & -- & -- & --\\\\\n\t\tB2 2114+33 & 319.06 & 33.66 & -- & -- & -- & -- & -- & --\\\\\n\t\tB2 1520+31 & 230.55 & 31.74 & 5.0 &2.4 & 55999.0 & 2.7 & 0.66 & 1.2\\\\\n\t\tNGC 598 & 23.52 & 30.62 & 4.9 &1.8 & 56520.7 & 33.0 & 0.75 & 1.7\\\\\n\t\tPG 1218+304 & 185.34 & 30.17 & 2.0 &2.4 & 54647.8 & $2.1\\times10^{-2}$ & 1.12 & 2.1\\\\ \n\t\tB2 1215+30 & 184.48 & 30.12 & 2.0 &2.4 & 54647.8 & $2.1\\times10^{-2}$ & 1.21 & 2.2\\\\\n\t\tTon 599 & 179.88 & 29.24 & 2.0 &1.7 & 55024.2 & $3.0\\times10^{-3}$ & 0.45 & 1.2\\\\\n\t\tMG2 J043337+2905 & 68.41 & 29.10 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +28.07 & 39.48 & 28.80 & -- & -- & -- & -- & -- & --\\\\\n\t\tW Comae & 185.38 & 28.24 & 3.1 &3.4 & 55682.4 & 1.5 & 0.49 & 1.2\\\\\n\t\tTXS 0141+268 & 26.15 & 27.09 & -- & -- & -- & -- & -- & --\\\\\n\t\tON 246 & 187.56 & 25.30 & -- & -- & -- & -- & -- & --\\\\\n\t\t1ES 0647+250 & 102.70 & 25.06 & -- & -- & -- 
& -- & -- & --\\\\\n\t\tPKS 1441+25 & 220.99 & 25.03 & 4.1 &1.7 & 56994.7 & 105.6 & 0.69 & 1.8\\\\ \n\t\tPKS 1424+240 & 216.76 & 23.80 & 17.9 &2.8 & 57182.6 & 200.0 & 1.00 & 2.2\\\\\n\t\tS2 0109+22 & 18.03 & 22.75 & 4.6 &4.0 & 55153.2 & $9.2\\times10^{-1}$ & 0.93 & 1.6\\\\\n\t\tCrab nebula & 83.63 & 22.01 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +21.35 & 186.23 & 21.38 & 2.0 &2.3 & 55690.3 & $1.2\\times10^{-3}$ & 0.64 & 0.9\\\\\n\t\tTXS 0518+211 & 80.44 & 21.21 & -- & -- & -- & -- & -- & --\\\\\n\t\tRGB J2243+203 & 340.99 & 20.36 & 11.2 &3.6 & 57300.1 & 33.0 & 0.81 & 1.5\\\\ \n\t\tOJ 287 & 133.71 & 20.12 & 3.6 &2.6 & 56416.8 & $8.4\\times10^{-1}$ & 0.72 & 1.0\\\\ \n\t\tPKS 1717+177 & 259.81 & 17.75 & 2.0 &3.3 & 54587.2 & $2.0\\times10^{-1}$ & 0.45 & 1.3\\\\ \n\t\tOX 169 & 325.89 & 17.73 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0735+17 & 114.54 & 17.71 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0235+164 & 39.67 & 16.62 & -- & -- & -- & -- & -- & --\\\\\n\t\t3C 454.3 & 343.50 & 16.15 & 5.1 &2.0 & 56119.1 & 200.0 & 0.46 & 1.3\\\\\n\t\t4C +14.23 & 111.33 & 14.42 & 3.1 &2.0 & 58076.6 & 1.2 & 0.81 & 1.0\\\\\n\t\tPSR B0656+14 & 104.95 & 14.24 & -- & -- & -- & -- & -- & --\\\\\n\t\t\\textbf{M87} & \\textbf{187.71} & \\textbf{12.39} & $\\mathbf{3.0^{+2.0}_{-1.4}}$ &$\\mathbf{4.0^{+0.9}_{-0.9}}$ & $\\mathbf{57730.031^{+0.001}_{-0.001}}$ & $\\mathbf{1.4^{+1.3}_{-0.4}\\times10^{-3}}$ & \\textbf{3.35} & \\textbf{0.9}\\\\\n\t\t1H 1720+117 & 261.27 & 11.88 & -- & -- & -- & -- & -- & --\\\\\n\t\tCTA 102 & 338.15 & 11.73 & -- & -- & -- & -- & -- & --\\\\\n\t\tPG 1553+113 & 238.93 & 11.19 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 2032+107 & 308.85 & 10.94 & -- & -- & -- & -- & -- & --\\\\\n\t\tMG1 J021114+1051 & 32.81 & 10.86 & 2.8 &2.1 & 56179.2 & $8.9\\times10^{-1}$ & 0.52 & 0.9\\\\ \n\t\t1RXS J194246.3+1 & 295.70 & 10.56 & 4.2 &3.4 & 54904.8 & 24.3 & 0.51 & 1.4\\\\\n\t\tPKS 1502+106 & 226.10 & 10.50 & 9.8 &2.5 & 55509.5 & 21.6 & 1.97 & 1.8\\\\ \n\t\tOT 081 & 267.87 & 9.65 & 9.7 &2.9 & 57751.6 & 45.7 & 0.79 & 1.3\\\\\n\t\tRX J1931.1+0937 & 292.78 & 9.63 & -- & -- & -- & -- & -- & --\\\\\n\t\tOG +050 & 83.18 & 7.55 & -- & -- & -- & -- & -- & --\\\\\n\t\tMGRO J1908+06 & 287.17 & 6.18 & 2.9 &2.1 & 57045.2 & 4.6 & 0.63 & 0.9\\\\\n\t\tPKS 0019+058 & 5.64 & 6.14 & -- & -- & -- & -- & -- & --\\\\\n\t\t\\multirow{2}{*}{\\textbf{TXS 0506+056}} & \\multirow{2}{*}{\\textbf{77.35}} & \\multirow{2}{*}{\\textbf{5.70}} &$\\mathbf{10.0^{+5.2}_{-4.2}}$ & $\\mathbf{2.2^{+0.3}_{-0.3}}$ & $\\mathbf{57000^{+30}_{-30}}$ & $\\mathbf{62^{+27}_{-27}}$ & \\multirow{2}{*}{\\textbf{2.77}} & \\multirow{2}{*}{\\textbf{1.7}}\\\\ & & & $\\mathbf{7.6^{+6.1}_{-5.8}}$ & $\\mathbf{2.6^{+0.5}_{-0.6}}$ & $\\mathbf{58020^{+40}_{-40}}$ & $\\mathbf{42^{+42}_{-28}}$ & & \\\\\n\t\tPKS 0502+049 & 76.34 & 5.00 & 2.7 &2.0 & 57072.1 & 1.2 & 0.81 & 0.9\\\\\n\t\tMG1 J123931+0443 & 189.89 & 4.73 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0829+046 & 127.97 & 4.49 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 1502+036 & 226.26 & 3.44 & 2.0 &2.9 & 54606.9 & $3.4\\times10^{-1}$ & 0.53 & 1.2\\\\\n\t\tHESS J1857+026 & 284.30 & 2.67 & 3.6 &2.3 & 54984.4 & $2.0\\times10^{-1}$ & 0.71 & 0.9\\\\\n\t\t3C 273 & 187.27 & 2.04 & -- & -- & -- & -- & -- & --\\\\\n\t\tOJ 014 & 122.87 & 1.78 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0215+015 & 34.46 & 1.74 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0736+01 & 114.82 & 1.62 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +01.02 & 17.16 & 1.59 & -- & -- & -- & -- & -- & --\\\\\n\t\t4C +01.28 & 164.61 & 1.56 & -- & -- & -- & -- & 
-- & --\\\\\n\t\tGRS 1285.0 & 283.15 & 0.69 & 6.5 &2.8 & 54808.6 & 87.3 & 0.39 & 1.9\\\\\n\t\tPKS 0422+00 & 66.19 & 0.60 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS B1130+008 & 173.20 & 0.58 & -- & -- & -- & -- & -- & --\\\\\n\t\tPMN J0948+0022 & 147.24 & 0.37 & 2.0 &2.4 & 55610.7 & $4.3\\times10^{-4}$ & 0.90 & 0.6\\\\ \n\t\tHESS J1852-000 & 283.00 & 0.00 & 5.4 &2.8 & 54751.9 & 100.3 & 0.38 & 1.9\\\\\n\t\t\\textbf{NGC 1068} & \\textbf{40.67} & \\textbf{-0.01} & $\\mathbf{23.0^{+8.7}_{-7.9}}$ &$\\mathbf{2.8^{+0.3}_{-0.3}}$ & $\\mathbf{56290^{+90}_{-80}}$ & $\\mathbf{198^{+64}_{-64}}$ & \\textbf{2.65} & \\textbf{1.9}\\\\\n\t\tHESS J1849-000 & 282.26 & -0.02 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0440-00 & 70.66 & -0.29 & 6.2 &2.6 & 57896.8 & 66.8 & 0.51 & 0.9\\\\\n\t\tPKS 1216-010 & 184.64 & -1.33 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0420-01 & 65.83 & -1.33 & -- & -- & -- & -- & -- & --\\\\\n\t\tNVSS J190836-012 & 287.20 & -1.53 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0336-01 & 54.88 & -1.77 & -- & -- & -- & -- & -- & --\\\\\n\t\tS3 0458-02 & 75.30 & -1.97 & 4.6 &2.5 & 56974.6 & $7.0\\times10^{-1}$ & 0.65 & 0.7\\\\ \n\t\tNVSS J141826-023 & 214.61 & -2.56 & 3.7 &2.9 & 57733.0 & $3.4\\times10^{-1}$ & 0.44 & 0.6\\\\\n\t\tPKS 2320-035 & 350.88 & -3.29 & 10.8 &3.2 & 56176.8 & 160.2 & 0.57 & 1.1\\\\\n\t\tHESS J1843-033 & 280.75 & -3.30 & -- & -- & -- & -- & -- & --\\\\[3pt]\n\t\t\\midrule\n\t\tPKS 1329-049 & 203.02 & -5.16 & -- & -- & -- & -- & -- & --\\\\\n\t\tHESS J1841-055 & 280.23 & -5.55 & -- & -- & -- & -- & -- & --\\\\\n\t\t3C 279 & 194.04 & -5.79 & -- & -- & -- & -- & -- & --\\\\\n\t\tHESS J1837-069 & 279.43 & -6.93 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0805-07 & 122.07 & -7.86 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 1510-089 & 228.21 & -9.10 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0048-09 & 12.68 & -9.49 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 0727-11 & 112.58 & -11.69 & -- & -- & -- & -- & -- & --\\\\\n\t\tPKS 2233-148 & 339.14 & -14.56 & 2.0 &2.8 & 54877.5 & $2.6\\times10^{-3}$ & 1.04 & 12.0\\\\ \n\t\tNGC 253 & 11.90 & -25.29 & 4.1 &2.5 & 56511.7 & 22.7 & 0.52 & 8.7\\\\\n\t\tNGC 4945 & 196.36 & -49.47 & 2.0 &1.9 & 54739.8 & $2.4\\times10^{-1}$ & 0.63 & 55.3\\\\\n\t\tLMC & 80.00 & -68.75 & -- & -- & -- & -- & -- & --\\\\\n\t\tSMC & 14.50 & -72.75 & -- & -- & -- & -- & -- & --\\\\\n\t\t\\hline\\hline\n\t\n\t\t\\caption{Coordinates (Right Ascension R.A. and declination $\\delta$), maximum-likelihood flare parameters, logarithm of the local pre-trial p-values $p_{loc}$ of the sources of the catalog and the 90\\% CL upper limits on the time-integrated flux $F_{90\\%}$ (in units of TeV cm$^{-2}$) defined in equation~\\ref{eq:time-int_flux_upLims} for an $E^{-2}$ spectrum. Under-fluctuating results are shown with hyphens. For the four sources that give rise to the $3.0~\\sigma$ excess of the binomial test in the Northern hemisphere (highlighted in bold), the fit parameters are shown with the confidence interval at $68\\%$ CL. A line is used to separate the Northern from Southern sources. The parameters of the flare from TXS 0506+056 at 58020 MJD and related to the neutrino alert ($n_s=7.6$, $\\gamma=2.6$, $\\sigma_T=42$ days) are different from those reported in \\cite{IceCube:2018cha}, when the data available for analysis extended up to 40 days after the central time of the flare. 
This analysis includes 7 additional months and reconstructs a longer, more significant flare associated with the same alert.}\n\t\t\\label{tab:PS_results1}\n\t\\end{longtable}\n\\end{center}\n\n\\section{Estimation of the single-flare significance of TXS 0506+056}\n\\label{sec:singleflare_significance}\n\nThis Appendix is intended to describe how the single-flare significances of the two flares of TXS 0506+056, that are shown in Fig.~\\ref{fig:best_fit_flares}, are estimated.\n\nBy factorizing the background PDF, the multi-flare likelihood ratio $\\Lambda_{mf}^{-1}$ in Eq.~\\ref{eq:teststatistic} can be written as follows:\n\\begin{equation}\n \\label{eq:likelihood_ratio_simple}\n \\Lambda^{-1}_{mf}=\\frac{\\mathcal{L}(\\vec{\\hat{n}}_s, \\vec{\\hat{\\gamma}}, \\vec{\\hat{t}}_0,\\vec{\\hat{\\sigma}}_T)}{\\mathcal{L}(n_s=0)}=\\prod_{j=\\mathrm{sample}}\\prod_{i=\\mathrm{1}}^{N_j}\\left(1+\\sum_{f=\\mathrm{flares}}\\mathcal{F}^f_{i,j}\\right), \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\mathcal{F}_{i,j}^f \\coloneqq\\frac{n_s^f(\\mathcal{S}^f_{i,j}\/\\mathcal{B}_{i,j}-1)}{N}.\n\\end{equation}\nThe single-flare signal and background PDFs in Eq.~\\ref{eq:likelihood_ratio_simple} are the same as in Eq.~\\ref{eq:multi-likelihood}, but for the sake of clarity here they explicitly show the flare ($f$), event ($i$) and sample ($j$) indices. In addition, the dependency on the parameters, being the same as in Eq.~\\ref{eq:multi-likelihood}, is omitted to simplify the notation.\n\n\nFor TXS 0506+056 there are two identified flares, thus $\\sum_f \\mathcal{F}^f_{i,j}=\\mathcal{F}^1_{i,j}+\\mathcal{F}^2_{i,j}$. In addition, when an event $i$ does not contribute significantly to $\\mathcal{F}^f_{i,j}$, then $\\mathcal{F}^f_{i,j}\\sim10^{-6}\\text{--}10^{-4}$. Since an event can contribute significantly only to one flare, the crossed terms $\\mathcal{F}^1_{i,j}\\mathcal{F}^2_{i,j}$ can be neglected and it is meaningful to retain only terms at first order in $\\mathcal{F}^f_{i,j}$. Based on these observations, the likelihood ratio in Eq. \\ref{eq:likelihood_ratio_simple} can be well approximated as:\n\\begin{equation}\n \\Lambda^{-1}_{mf}=\\prod_{j=\\mathrm{sample}}\\prod_{i=\\mathrm{1}}^{N_j}\\left(1+\\mathcal{F}^1_{i,j}+\\mathcal{F}^2_{i,j}\\right)\\simeq\n \\prod_{j=\\mathrm{sample}}\\prod_{i=\\mathrm{1}}^{N_j}\\left(1+\\mathcal{F}^1_{i,j}\\right)\\left(1+\\mathcal{F}^2_{i,j}\\right)=\\left(\\Lambda_{sf}^{f=1}\\right)^{-1}\\left(\\Lambda_{sf}^{f=2}\\right)^{-1}.\n\\end{equation}\nThus, it can be factorized into single-flare components, that are equivalent to the multi-flare likelihood ratio when only one flare is considered. This result can be easily generalised to $N_f>2$ flares.\n\nSuch a factorization can be exploited to disentangle the contribution of each flare to the multi-flare TS in Eq. 
\\ref{eq:teststatistic}:\n\\begin{equation}\n \\mathrm{TS}\\simeq-2\\log\\left[\\frac{1}{2}\\prod_{f=\\mathrm{flares}}\\left(\\frac{T_{live}}{\\hat{\\sigma}_T^fI\\left[\\hat{t}_0^f,\\hat{\\sigma}_T^f\\right]}(\\Lambda_{sf}^f)^{-1}\\right)\\right]=-2\\sum_{f=\\mathrm{flares}}\\log\\left[\\left(\\frac{1}{2}\\right)^{1\/N_f}\\frac{T_{live}}{\\hat{\\sigma}_T^fI\\left[\\hat{t}_0^f,\\hat{\\sigma}_T^f\\right]}(\\Lambda_{sf}^f)^{-1}\\right]=\\sum_{f=\\mathrm{flares}}\\mathrm{TS}_{sf}^{f},\n\\end{equation}\nwhere $\\mathrm{TS}_{sf}^f$ is the contribution of the $f$-th flare to the multi-flare TS and can be interpreted as a single-flare TS.\n\nThe single-flare significance $\\sigma_{sf}^f$ can be obtained in the same way as the multi-flare significance, but using the single-flare TS instead of the multi-flare TS. Assuming that the two flares of TXS 0506+056 are independent, one might expect to retrieve the multi-flare TS by summing linearly the single-flare TS and to retrieve the multi-flare significance $\\sigma_{mf}$ by summing in quadrature the single-flare significances. Although this is effectively observed for the TS, the summation in quadrature of the single-flare significances results in a mismatch of nearly 2.5\\% with respect to the multi-flare significance. To correct for this mismatch, the single-flare significance is redefined as $\\sigma^{\\prime f}_{sf}$ through the following relation:\n\\begin{equation}\n \\frac{\\sigma^{\\prime 1}_{sf}}{\\sigma^{\\prime 2}_{sf}}=\\frac{\\sigma^1_{sf}}{\\sigma_{sf}^2}, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\sqrt{\\left(\\sigma^{\\prime 1}_{sf}\\right)^2+\\left(\\sigma^{\\prime 2}_{sf}\\right)^2}=\\sigma_{mf}.\n\\end{equation}\n\nFor TXS 0506+056 this method is used to disentangle the single-flare significances $\\sigma^{\\prime f}_{sf}$ of the 2 flares shown in Fig.~\\ref{fig:best_fit_flares}.\n\n\\section{Discussion on the multi-messenger context}\n\\label{sec:MM}\n\nAs shown in Section~\\ref{sec:results}, M87 is the most significant source of the catalog, exhibiting 3 events within a time window of minute scale and a post-trial significance of $1.7~\\sigma$. It is one of the closest ($z=0.00436$) potential cosmic ray accelerators, hosting a supermassive black hole of $6.5\\times10^9M_\\odot$. Its jet was already observed more than a century ago~\\citep{blanford_agn} in a large elliptical radio galaxy of Fanaroff-Riley type I in the Virgo cluster.\nIt has been observed in the $>100$~GeV energy region: VERITAS detected a flare extending beyond 350~GeV with a spectral index at the peak of $2.19 \\pm 0.07$ \\citep{Aliu_2012} in Apr. 2010. In a 2008 flare, a clear correlation between the X-ray emission and the TeV one was observed \\cite{Acciari_2008,Albert:2008kb}. A previous positive detection was reported by HEGRA in 1998\/99 above 700 GeV~\\citep{2003A&A...403L...1A}, and up to $\\sim 10$~TeV by H.E.S.S. in 89 hours of observation between 2003-6, showing a variability at the time scale of a few days in the 2005 high state, associated with the Schwarzschild radius of M87 \\cite{Aharonian_2006}. Recently, MAGIC reported the results of the monitoring of M87 for 156 hours in 2012-15 \\cite{MAGIC2020}. It is worth noting that HAWC set an upper limit above 2 TeV for 760 d of data. The non-observation of gamma-rays at $>$~TeV energies may indicate a cut-off in the spectrum. Such a cut-off may differ for neutrinos, which are less affected by the absorption in the source and by the extra-galactic background light. 
\n\nThe gamma-ray observations from M87 are summarized in Fig.~\\ref{fig:MM}, together with the 10-year time-integrated upper limits on the neutrino flux estimated in~\\cite{Aartsen:2019fau} for a spectrum of the form $dN\/dE\\sim E^{-2}$. \n\nPrecise radio observations \\cite{Sikora_2016} indicate a persistent central ridge structure, namely a spine flow in the interior of the M87 jet, in addition to the well-known limb-brightening profile; this structure needs further measurements. A composite structure of the jet has also been speculated for TXS 0506+056, based on observations taken months after the detection of the IceCube high-energy event that triggered its multi-wavelength observations. With millimeter-VLBI it was observed that the core jet expands in size with apparent super-luminal velocity \\cite{Ros:2019bgo}. This can be interpreted as deceleration due to proton loading from jet-star interactions in the inner host galaxy and\/or a spine-sheath structure of the jet \\cite{2005A&A...432..401G,Tavecchio:2008be}. This sort of spine-sheath structure has been advocated as a possible explanation for the higher flux of neutrinos than gamma-rays and was also suggested for TXS 0506+056 by MAGIC \\citep{2018ApJ...863L..10A}, while models with a single zone struggle to explain the 2014-2015 flare of TXS 0506+056 (see e.g. \\cite{Murase_2018,Zhang:2019htg,2018ApJ...864...84K}).\n\nOther models, e.g.~\\cite{Inoue:2019yfs,Murase_2020}, have been revised to explain the more recent observations of IceCube on NGC 1068 \\citep{Aartsen:2019fau}. These models focus on the higher observed flux of IceCube neutrino events in the $\\sim 1-50$~TeV region with respect to the gamma-ray fluxes observed at lower energies by Fermi and the limits set by MAGIC. The super-hot coronal plasma around the supermassive black hole accelerates protons, carrying a few percent of the thermal energy, through plasma turbulence \\cite{Murase_2020} or shock acceleration \\cite{Inoue:2019yfs}, leading to the creation of neutrinos and gamma rays. The environment is dense enough to prevent the escape of $\\gg$ 100 MeV gamma rays, while $\\sim \\mathrm{MeV}$ gamma-rays would result from their cascading down.\nFurther insights from both messengers and at all wavelengths will be needed to better constrain the structure of jets and the acceleration mechanisms in one or multiple zones.\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{figures\/MM_plot.png} \n\t\\caption{The gamma-ray flux in the steady state of the source observed between 2012-2015 \\cite{MAGIC2020} is shown with black (Fermi-LAT) and blue dots (MAGIC). The higher dashed lines are the flux levels observed during flares (see references in the text). The purple line with downward arrows corresponds to the 10-year time-integrated upper limits taken from~\\cite{Aartsen:2019fau}, with an assumed spectrum $dN\/dE\\sim E^{-2}$.}\n\\label{fig:MM}\n\\end{figure}\n\n\n\n\\section{Results} \\label{sec:results}\n\nThe point-source search identifies M87 as the most significant source in the Northern hemisphere, with a pre-trial p-value of $p_{loc}=4.6\\times10^{-4}$, which becomes $4.3\\times10^{-2}$ ($1.7~\\sigma$) post-trial. In the Southern hemisphere, the most significant source is PKS 2233-148 with a pre-trial p-value of $p_{loc}=0.092$ and a post-trial p-value of $0.72$. TXS 0506+056 is the only source of the catalog for which 2 flares are found. 
The time profiles of the neutrino flares reconstructed by this analysis at the location of each source, together with their pre-trial significance $\\sigma_{loc}^f$, are visualized in Fig.~\\ref{fig:best_fit_flares}. For the sake of clarity, the flare significance is denoted as $\\sigma_{loc}^f$ while the overall multi-flare significance is referred to as $\\sigma_{loc}=\\sqrt{\\sum_f\\sigma_{loc}^{f2}}$. For single-flare sources (all but TXS 0506+056) the flare and multi-flare significances coincide. \n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{Gaussians_TXS_2weights.png} \n \\caption{Pre-trial flare significance $\\sigma_{loc}^f$ for the sources of the catalog. For all sources a single flare has been found, except for TXS 0506+056 for which 2 flares are found. In this case, the pre-trial significance of each individual flare is calculated as described in Appendix \\ref{sec:singleflare_significance}. The sources of the catalog with multi-flare pre-trial significance $\\sigma_{loc}\\ge2$ are labeled with their names.}\n \n \\label{fig:best_fit_flares}\n\\end{figure}\n\nThe cumulative distributions of pre-trial p-values at the location of the sources of the catalog, used as inputs to the population study, are shown in Fig.~\\ref{fig:pvalues_distribution}.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=.49\\linewidth]{figures\/cumulative_pvals_north_distribution.png} \n \\includegraphics[width=.49\\linewidth]{figures\/cumulative_pvals_south_distribution.png} \n \\caption{Cumulative distributions of the pre-trial p-values of the sources of the catalog in the Northern (left) and Southern (right) hemispheres. The cumulative p-values of the unblinded data are shown in red and compared to the background expectations in blue.}\n \\label{fig:pvalues_distribution}\n\\end{figure}\n\nThe pre-trial binomial p-value is shown in Fig.~\\ref{fig:binomial_test} as a function of the source index $k$. The smallest binomial p-value is selected in each hemisphere and converted into a post-trial binomial p-value as described in Section~\\ref{sec:analysis}. In the Northern hemisphere the smallest pre-trial binomial p-value is $7.3\\times10^{-5}$ ($3.8~\\sigma$) when $k=4$ sources are considered (M87, TXS 0506+056, GB6 J1542+6129, NGC 1068), corresponding to a post-trial p-value of $1.6\\times 10^{-3}$ ($3.0~\\sigma$). In the Southern hemisphere the smallest pre-trial binomial p-value is 0.71, obtained by $k=1$ source (PKS 2233-148) and corresponding to a post-trial p-value of $0.89$.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=.47\\linewidth]{figures\/northern_binomial_pval_noBkg.png} \n \\includegraphics[width=.49\\linewidth]{figures\/southern_binomial_pval_noBkg.png} \n \\caption{Pre-trial binomial p-value $P_{bin}(k)$ as a function of the source index $k$ in the Northern (left) and Southern (right) hemispheres. The edge with the under-fluctuating sources, with binomial p-value set to 1, is shown in blue.}\n \\label{fig:binomial_test}\n\\end{figure}\n\nThe results of the two searches are summarized in Table~\\ref{tab:summary_results}. 
Having not found any significant time-dependent excess, upper limits on the neutrino emission from the sources of the catalog are estimated as discussed in Appendix~\\ref{sec:sens_DP_upLims}, using Eq.~\\ref{eq:time-integrated_flux} and~\\ref{eq:flux_definition}.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{>{\\centering\\arraybackslash}m{2.8cm} >{\\centering\\arraybackslash}m{2.5cm} >{\\centering\\arraybackslash}m{1.7cm} >{\\centering\\arraybackslash}m{2.5cm}}\n \\multicolumn{4}{c}{Summary of the results}\\\\\n \\hline\n \\hline\n \\multirow{2}{*}{Analysis} & \\multirow{2}{*}{Hemisphere} & \\multicolumn{2}{c}{p-value}\\\\ & & Pre-trial & Post-trial \\\\[3pt] \\hline\n \\multirow{2}{*}{Point-source} & North & $4.6\\times10^{-4}$ & $4.3\\times10^{-2}$ ($1.7~\\sigma$)\\\\ & South & $9.2\\times 10^{-2}$ & 0.72\\\\ [3pt] \\hline\n \\multirow{2}{*}{Binomial test} & North & $7.3\\times10^{-5}$ & $1.6\\times10^{-3}$ ($3.0~\\sigma$) \\\\ & South & $0.71$ & $0.89$\\\\\n \\hline\n \\hline\n\\end{tabular}\n\\caption{Summary of the results of the two analyses: for the point-source search the results of the best sources in the Northern (M87) and Southern (PKS 2233-148) hemisphere are reported.}\n\\label{tab:summary_results}\n\\end{table}\n\\section{Introduction}\n\\label{sec:intro}\n\nAfter more than 100 years since their discovery, the origin and acceleration processes of cosmic rays (CRs) remain unsolved. Relevant hints exist, one being provided by a neutrino event detected by IceCube with most probable energy of 290 TeV which triggered follow-up gamma-ray observations ~\\citep{IceCube:2018dnn}. These observations identified in the 50\\% containment region for the arrival direction of the IceCube event a classified BL Lac object, though possibly a Flat-Spectrum Radio Quasar (FSRQ) \\citep{Padovani:2019xcv}, at redshift $z = 0.34$, known as TXS 0506+056. It was in a flaring state \\citep{IceCube:2018dnn} with a chance correlation between the neutrino event and the photon counterpart rejected at the $3~\\sigma$ level. The intriguing aspect of the possible coincidence between the neutrino event and the gamma-ray flare hints at TXS 0506+056 being a potential CR source. Additionally, in the analysis of the data prior to the event alert IceCube found a neutrino flare of 110 day duration between 2014\/2015~\\citep{IceCube:2018cha} at a significance of $3.7~\\sigma$, if a Gaussian time window is assumed. In this case, no clear flare has been identified in available gamma-ray data from TXS 0506+056~\\citep{Aartsen:2019gxs,Glauch:2019emd}. \nThe total contribution of the observed TXS 0506+056 neutrino flares to the diffuse astrophysical flux observed by IceCube~\\citep{Aartsen:2013jdh,Aartsen:2014gkd,Aartsen:2016xlq,Aartsen:2017mau} is at most a few percent~\\citep{IceCube:2018cha}.\nIn addition, time-integrated upper limits on stacked catalogs of classes of sources (e.g. tidal disruption events \\citep{Stein:2019ivm}, blazars \\citep{Aartsen:2016lir}, gamma ray bursts \\citep{Aartsen:2017wea}, compact binary mergers \\citep{Aartsen:2020mla} and pulsar wind nebulae \\citep{Aartsen:2020eof}), \nconstrain their contribution to the measured diffuse flux. 
While these limits depend on assumptions on the emission of such classes of sources, such as their spectral shapes and their uniformity within the class, they indicate that there might be a mixture of contributing classes and still unidentified contributors.\n\nRecently, IceCube performed another analysis on neutrino sources: a time-integrated search for point-like neutrino source signals using ten years of data~\\citep{Aartsen:2019fau}. This search uses a maximum-likelihood (ML) method to test the locations of a catalog of 110 selected sources and the full sky. As an intriguing coincidence, the two searches find the hottest spot to be a region including the Seyfert II galaxy NGC 1068, with a significance reported from the catalog search of $2.9~\\sigma$. Additionally, a population study of the catalog revealed a $3.3~\\sigma$-level incompatibility of the neutrino events from the directions of four Northern sources with respect to the estimated background: NGC 1068, TXS 0506+056, PKS 1424+240 and GB6 J1542+6129.\n\n\nTo fully investigate this catalog of sources, this letter shows the results of a complementary time-dependent study. Time-dependent searches are particularly interesting not only because of their better sensitivity to time-integrated searches for flares of duration $\\lesssim 200$ d, due to the suppression of the time-constant background of atmospheric neutrinos, but also because flare events are particularly suitable periods for neutrino production in blazars. In fact, the injection rate of accelerated protons and the density of target photon fields for photo-meson interactions can be noticeably increased during flaring periods of blazars, leading to an enhanced neutrino luminosity $L_\\nu \\propto L^{1.5\\text{--}2}_\\gamma$ (see \\cite{Zhang:2019htg} and references therein), where $L_\\gamma$ is the photon luminosity. \nApart from the aforementioned evidence of the 2014\/2015 flare from the direction of TXS 0506+056, other IceCube time-dependent searches did not find any significant excess. Nevertheless, they constrained specific emission models \\citep{Abbasi:2020dfi} or set upper limits on the neutrino emission from selected sources \\citep{Aartsen:2015wto}. Triggered searches adopt lightcurves or flare directions from gamma-ray experiments, while sky scans search for largest flares anywhere in the sky.\nIn this paper, we extend these searches to a multiple flare scan based on a ML method. \n\\section{Conclusions} \\label{sec:conclusions}\n\nThe time-dependent point-source search presented in this letter identified M87 as the most significant source in the Northern hemisphere, with $\\hat{n}_s=3$ signal-like neutrino events in a time window of $\\hat{\\sigma}_T=2.0$ minutes and with a soft spectrum ($\\hat{\\gamma}=3.95$). The post-trial significance of M87 is found to be $1.7~\\sigma$. Because of the quite short time lag between the events, the time-dependent search is more sensitive than the time-integrated one, which explains the absence of significant signals in previous IceCube time-integrated analyses that had included M87. For the case of~\\cite{OSullivan:2019rpq}, a smaller data sample from Apr. 26, 2012 to May 11, 2017 was used. 
The difference in significance is due to small changes in the event reconstruction and angular uncertainty estimation between the two samples.\n\nThis analysis also identifies the two known flares at the location of TXS 0506+056, one corresponding to the most significant flare at $\\sim 57000$ MJD \\citep{IceCube:2018cha} and the other related to the high-energy event alert IceCube-170922A detected on 22 Sep. 2017 \\citep{IceCube:2018dnn}. Although these two flares are consistently identified, the significance of the result at the location of TXS 0506+056 is lower than the one reported in~\\citep{IceCube:2018cha}. This is due to the new data selection \\citep{Abbasi:2021bvk} described in Section~\\ref{sec:detector}, which introduces a different energy reconstruction from the past one~\\citep{Abbasi:2021bvk}. Further information about the reduced significance of TXS 0506+056 resulting from this analysis are provided in Appendix~\\ref{sec:TXS_significance_investigation}.\n\n\n\nThe time-dependent binomial test of the Northern hemisphere suggests an incompatibility at $3.0~\\sigma$ significance of the neutrino events from four sources with respect to the overall Northern background expectation. Of the four most significant sources in the Northern hemisphere, three are common with the time-integrated analysis~\\citep{Aartsen:2019fau}, namely NGC 1068, TXS 0506+056, GB6 J1542+6129, whereas a fourth source (M87) is different and shows a strong time-dependent behavior. However, the results of the time-dependent and time-integrated binomial test partly overlap, as both share the same space and energy PDFs in the likelihood definition in Eq.~\\ref{eq:multi-likelihood} and both select the same three out of four sources. For this reason, although a time-dependent structure of the data is suggested by the binomial test, a time-independent scenario cannot be excluded by this analysis (see Appendix~\\ref{app:variab} for a further discussion).\n\n\nNo significant result is found in the Southern hemisphere. This is consistent with the lower sensitivity due to the substantially larger background of atmospheric muons in the Southern hemisphere.\n\\section{Data analysis methods}\n\\label{sec:analysis}\n\nThe presented analyses are based on an unbinned ML method similar to previous IceCube analyses, extended to allow the detection of multiple flares and to handle different IceCube samples (IC40, IC59, IC79, IC86-I, IC86-II-VII) with different detector configurations. 
Since each IceCube sample is independent, the total 10-year likelihood $\\mathcal{L}$ is defined as the product of the likelihoods of each single IceCube sample $\\mathcal{L}_j$:\n\\begin{equation}\n \\label{eq:10-year-likelihood}\n \\mathcal{L}(\\vec{n}_s, \\vec{\\gamma}, \\vec{t}_0, \\vec{\\sigma}_T)=\\prod_{j=\\mathrm{sample}}\\mathcal{L}_j(\\vec{n}_{s,j}, \\vec{\\gamma}, \\vec{t}_0, \\vec{\\sigma}_T),\n\\end{equation}\nwhere $\\mathcal{L}_j$ is defined as\n\n\\begin{equation}\n\\label{eq:multi-likelihood}\n\\mathcal{L}_j(\\vec{n}_{s,j}, \\vec{\\gamma}, \\vec{t}_0, \\vec{\\sigma}_T) = \\prod_{i=1}^{N_j}\\left[\\frac{\\sum_{f=\\mathrm{flares}}n_{s,j}^f\\mathcal{S}_j(|\\vect{x_s}-\\vect{x_i}|,\\sigma_i, E_i,t_i; \\gamma^f, t_0^f, \\sigma_T^f)}{N_j}+\\left(1-\\frac{\\sum_fn_{s,j}^f}{N_j}\\right)\\mathcal{B}_j(\\sin\\delta_i, E_i)\\right] .\n\\end{equation}\n\nFor each flare $f$, the likelihood in Eq.~\\ref{eq:10-year-likelihood} is a function of four parameters described below: the total number of signal-like events in the flare $n_s^f$, the flare spectral index $\\gamma^f$, the flaring time $t_0^f$ and the flare duration $\\sigma_T^f$. They are denoted with an arrow in the likelihood arguments to indicate that there are as many sets of these four parameters as the number of flares. For each flare $f$, $n_{s,j}^f$ in Eq.~\\ref{eq:multi-likelihood} denotes the partial contribution of the $j$-th sample to the total number of signal-like events in that flare, such that $n_s^f=\\sum_j n_{s,j}^f$. Such partial contribution $n_{s,j}^f$ is estimated from the relative effective area of the IceCube configuration of the $j$-th sample (determined by Monte Carlo simulations of the detector and varying with spectral index and declination) and the fraction of time that the $f$-th flare stretches on the data-taking period of the $j$-th sample.\n\nFor each IceCube sample $j$, with $N_j$ total events, the likelihood in Eq.~\\ref{eq:multi-likelihood} is constructed from a single-flare signal probability density function (PDF) $\\mathcal{S}_j$, weighted by $n_{s,j}^f$ and summed over all flares from a source (multi-flare signal PDF), and a background PDF $\\mathcal{B}_j$. The single-flare signal PDF and the background PDF are the product of a space, energy and time PDFs, as also described in \\cite{Aartsen:2015wto}. The spatial signal PDF assumes a cluster of events distributed according to a 2D Gaussian around the source position $\\vect{x_s}$, with $\\sigma_i$ being the estimated angular uncertainty on the $\\vect{x_i}$ position of the $i$-th event. For the signal energy PDF, that depends on the declination $\\delta_i$ and the energy proxy $E_i$ of the events (the energy as measured by IceCube from visible light released in the detector by muon tracks), an unbroken power law $\\propto E^{-\\gamma^f}$ is used. The spectral index $\\gamma^f$ is bound within $1\\le\\gamma^f\\le4$ and can be different for each flare $f$. The signal time PDF of each flare $f$ is provided by a one-dimensional Gaussian $\\propto \\exp{[-(t_i-t_0^f)^2\/(2\\sigma_T^{f2})]}$, where $t_i$ is the time of the $i$-th event. Its normalization is such that the integral of the time PDF across the up times of each IceCube sample is 1. The central time of each Gaussian flare $t_0^f$ is constrained within the 10-year period of the analyzed data and the flare duration $\\sigma_T^f$ cannot exceed an upper limit of 200 days, above which time-integrated searches are more sensitive than time-dependent ones. 
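To make the structure of Eq.~\\ref{eq:multi-likelihood} concrete, the following is a minimal numerical sketch (not the analysis code) of how the per-sample multi-flare likelihood could be evaluated for a given set of flare parameters; the event arrays, the purely Gaussian spatial term and the simplified background are illustrative assumptions, and the energy PDFs as well as the uptime normalization of the time PDF are omitted for brevity.

\\begin{verbatim}
# Hypothetical sketch (not the analysis code) of the per-sample mixture
# likelihood: L_j = prod_i [ sum_f n_f S_if / N_j + (1 - sum_f n_f / N_j) B_i ].
import numpy as np

def log_likelihood_sample(psi, sigma, t, B, flares):
    """psi: angular distance of each event from the source (rad);
    sigma: per-event angular uncertainty (rad); t: event times (days);
    B: per-event background PDF values; flares: list of (n_s, t0, sigma_T).
    Energy terms and uptime normalization are omitted for brevity."""
    N = psi.size
    signal = np.zeros(N)
    for n_s, t0, sigma_T in flares:
        space = np.exp(-0.5 * (psi / sigma) ** 2) / (2.0 * np.pi * sigma ** 2)
        time = np.exp(-0.5 * ((t - t0) / sigma_T) ** 2) / (np.sqrt(2.0 * np.pi) * sigma_T)
        signal += n_s * space * time
    n_tot = sum(f[0] for f in flares)
    return np.sum(np.log(signal / N + (1.0 - n_tot / N) * B))
\\end{verbatim}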
For computational efficiency, the signal time PDF of each flare is truncated at $\\pm 4\\sigma_T^f$, where the flare can be considered concluded.\n\nThe spatial background PDF is obtained through a data-driven method by scrambling the time of the events and correcting the right ascension accordingly, assuming fixed local coordinates (azimuth, zenith). It depends only on the declination $\\delta_i$ of the events and it is uniform in right ascension. Due to the natural tendency of the reconstruction to be more efficient if the direction of the source is aligned with the strings of the detector, an azimuth-dependent correction is applied to the spatial background PDF. Such correction is relevant for time scales shorter than one day, whereas it is negligible for longer time scales, since any azimuth dependency is averaged out by the Earth rotation. The background energy PDF is taken from scrambled data as well, and it is fully described in~\\cite{Aartsen:2013uuv}. It depends on the declination $\\delta_i$ and the energy proxy $E_i$ of the events. The background time PDF is uniform, as expected for atmospheric muons and neutrinos if seasonal sinusoidal variations are neglected. The maximal amplitude for these variations is 10\\% for the downgoing muons produced in the polar atmosphere and smaller for atmospheric neutrinos coming from all latitudes \\citep{Gaisser:2013lrk}.\n\nThe test statistic (TS) is defined as:\n\\begin{equation}\n\\label{eq:teststatistic}\n\\mathrm{TS}=-2\\ln\\left[\\frac{1}{2}\\left(\\prod_{f=\\mathrm{flares}}\\frac{T_{live}}{\\hat{\\sigma}_T^f I\\left[\\hat{t}_0^f, \\hat{\\sigma}_T^f\\right]}\\right)\\times\\frac{\\mathcal{L}(\\vec{n}_s=\\vec{0})}{\\mathcal{L}(\\vec{\\hat{n}}_s, \\vec{\\hat{\\gamma}}, \\vec{\\hat{t}}_0, \\vec{\\hat{\\sigma}}_T)}\\right] ,\n\\end{equation} \nwhere the parameters that maximize the likelihood function in Eq.~\\ref{eq:10-year-likelihood} are denoted with a hat and $\\mathcal{L}(\\vec{n}_s=\\vec{0})$ is the background likelihood, obtained from Eq.~\\ref{eq:10-year-likelihood} by setting $n_s^f=0$ for all the flares.\nThe likelihood ratio is multiplied by a marginalization term intended to penalize short flares, as similarly used in previous time-dependent single-flare IceCube analyses to correct a natural bias of the likelihood towards selecting short flares. This was discussed in~\\cite{Braun:2009wp} for the single-flare analysis. For the multi-flare analysis, the numerical factor $1\/2$ in the equation above is chosen such that the marginalization term has the same form as the single-flare one when the true hypothesis is a single flare. The average probability distribution $\\left< P_n(t) \\right>$ is obtained by averaging $P_n(t)$ over all possible initial coin states. However, we observe that we get exactly the same result by only taking into account any pair of orthogonal coin states. This is due to the fact that the average probability distribution resulting from two walks starting with any two orthogonal coin states at the origin is equal to the one resulting from the evolution of a completely mixed coin state. (The resulting distribution is symmetric since the completely mixed coin state at the origin is reflection invariant.) Also, for the long-time limit, the bound states stay in the vicinity of the origin, whereas the extended states get spread over the infinite position space yielding probabilities going to zero. 
Based on these facts, we can obtain an analytic expression to estimate the long-time behaviour of $\\left< P_n \\right>$ by projecting the evolved state onto the bound subspace and averaging the corresponding probabilities over two orthogonal initial states, such that\n\\begin{eqnarray}\n\\left< P_0 \\right > &=& \\frac{1}{2}\\left[(1-\\lambda_+^2)^2 + (1-\\lambda_-^2)^2\\right]~~\\textrm{and} \\\\\n\\left< P_n \\right >\n&=&\n\\frac{1}{4}\n[\\lambda_+^{2|n|-2} (1+\\lambda_+^2) (1-\\lambda_+^2)^2\n\\nonumber \\\\\n&+&\n\\lambda_-^{2|n|-2} (1+\\lambda_-^2) (1-\\lambda_-^2)^2],\n\\label{eq:Pn}\n\\end{eqnarray}\nwhere $n\\neq 0$ and non-zero probabilities appear for even (odd) sites only after an even (odd) number of steps. To quantify the localisation, we utilize the participation ratio of the averaged probability distribution, which is given by\n\\begin{equation}\n\\mathrm{PR} = \\sum_n \\left< P_n \\right>^2.\n\t\\label{eq:PR}\n\\end{equation}\nFor a uniform probability distribution over $N$ sites, PR yields its minimum value $\\sim N^{-1}$. At the other extreme of localisation at one site, PR takes its maximum value of one. In figure~\\ref{fig:average_loc}, the numerical results for the PR (green solid curve) and $\\left< P_0 \\right>$ (orange dashed curve) for $150$ steps are represented. Both of them are calculated using the average probability distribution $\\left< P_n \\right>$, which is averaged over a pair of orthogonal initial coin states as we mentioned before. We also provide the analytic prediction of PR (black dots) for the long-time behaviour using (\\ref{eq:Pn}) and (\\ref{eq:PR}), which slightly differs from its numerical simulation, whereas we omit that of $\\left< P_0 \\right>$ for clarity since it exactly fits the numerical data. First of all, both curves exhibit similar behaviour with respect to $\\phi$, with $\\left< P_0 \\right>$ pointing out that localisation occurs around the impurity site. They get maximized at $\\phi=\\pi$ and vanish at the standard quantum walk limit $\\phi=0,2\\pi$. The kinks at $\\phi=\\pi\/2,3\\pi\/2$ are due to bound states appearing or disappearing in this model as discussed previously. This behaviour matches exactly that of the effective localisation length determined by the bound states in figure~\\ref{fig:bandStr}(b), which consequently shows that the localisation properties of the walk in the long-time limit are determined by the number and character of the stationary bound states. The slight difference between the numerical and analytical results of PR stems from the finite number of time steps in the numerical simulation and the fact that the contribution from the extended states is completely excluded in the analytical expression. As a consequence of this, the numerical data stays above the analytical prediction. For example, as we approach the standard walk case, the wavefunction for a finite-step walk stays relatively ``localised'' in comparison to that of the long-time case, which spreads infinitely over the position space without any localisation. Hence, the numerical prediction will also become zero for the standard walk in this limit. The very good agreement between the numerical and analytical results in figure~\\ref{fig:average_loc} implies that the effect of the extended states on the PR is negligible even after $150$ steps.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.85]{fig4.pdf}\n\\caption{The numerical results for the participation ratio (PR) and the average probability at the origin $\\left< P_0 \\right>$ with respect to $\\phi$ after $150$ steps. 
The analytical prediction for PR (black dots) is also provided.\n\\label{fig:average_loc}}\n\\end{figure}\n\n\\subsection*{Non-Markovianity}\n\nWe now turn our attention to the non-Markovian behaviour of the dynamics of the coin for the quantum walk with a phase impurity. As mentioned before, we are interested in the effects of localised bound states and their symmetry on the degree of non-Markovianity of the reduced coin evolution. In order to quantify the amount of memory effects in the open system dynamics from different perspectives, we will comparatively study two well-established measures of quantum non-Markovianity that are based on the information flow dynamics between the coin and the spatial degrees of freedom.\n\nLet us first briefly discuss how to characterize the non-Markovian nature of an open system evolution and identify the existence of possible memory effects in the dynamics. Assume that we have a quantum map $\\Lambda(t,0)$, i.e., a completely positive trace preserving (CPTP) map describing the evolution of the open quantum system. A dynamical map is said to be divisible if it satisfies the decomposition rule $\\Lambda(t,0) = \\Lambda(t,s) \\Lambda(s,0)$, where $\\Lambda(t,s)$ is a CPTP map for all $s\\leq t$. Markovian or so-called memoryless dynamical maps are recognized as the ones that satisfy this decomposition rule. On the other hand, when the divisibility rule is violated, i.e., when $\\Lambda(t,s)$ is not a CPTP map or when it does not even exist, then the dynamical map $\\Lambda$ is said to be non-divisible and the evolution it describes non-Markovian. The concept of divisibility can also be discussed in the context of discrete dynamics, such as quantum walks, where $t,s \\in \\mathbb{N}$~\\cite{luoma15}.\n\nThe first non-Markovianity measure that we utilize in our work is known as the Breuer-Laine-Piilo (BLP) measure~\\cite{breuer09}, which is based on the idea of distinguishability of two open system states under a given dynamical evolution. In this approach, the changes in the distinguishability between two arbitrary initial states of the open system during the dynamics are interpreted as the information flow between the open system and its environment. In particular, if the distinguishability between the initial states decreases monotonically in time throughout the evolution, the dynamics is said to be Markovian, since in this case information flows from the open system to its environment in a monotonic fashion. However, if the distinguishability temporarily increases during the dynamics, then this is understood as a back-flow of information from the environment to the open system, giving rise to non-Markovian memory effects. The distinguishability of two systems can be quantified through the trace distance between their density matrices $\\rho_1$ and $\\rho_2$ as\n\\begin{equation}\nD(\\rho_1, \\rho_2)\\!=\\!\n\\frac{1}{2}\n||\\rho_1\\!-\\!\\rho_2||_1\n\\!=\\!\n\\frac{1}{2}\n\\Tr \\left[(\\rho_1\\!-\\!\\rho_2)^{\\dagger} (\\rho_1\\!-\\!\\rho_2)\\right]^{1\/2}\n\\label{eq:trace_dist}\n\\end{equation}\nwhich acquires its maximum value of one when the states $\\rho_1$ and $\\rho_2$ are orthogonal. At this point, we should stress that since CPTP maps are contractions for the trace distance, the BLP measure vanishes for divisible maps, resulting in a memoryless evolution. However, we also emphasize that it is possible for the trace distance to monotonically decrease for certain non-divisible maps as well. 
Therefore, as is well known in the recent literature, even though widely used as a measure for non-Markovianity on its own, the BLP measure is actually a witness for the non-divisibility of quantum dynamical maps. The BLP measure can be expressed in discrete time as \\cite{luoma15}\n\\begin{equation}\n{\\cal{N}}\n=\n\\max_{\\rho_{1,2}}\n\\sum_{t, \\Delta D>0} \\Delta D_t\n=\n\\sum_{t} \\Delta D_t \\Theta(\\Delta D_t),\n\\label{eq:nonmarkov}\n\\end{equation}\nwhere $\\Theta(x)$ denotes the Heaviside step function,\n\\begin{equation}\n\\Delta D_t\n=\nD(\\rho_{1,t}, \\rho_{2,t})-D(\\rho_{1,t-1}, \\rho_{2,t-1}),\n\\end{equation}\nand the maximization is carried out over all possible initial state pairs. It has been shown that the pair which maximizes the sum in (\\ref{eq:nonmarkov}) is a pair of orthogonal states~\\cite{wissmann12}. In our analysis, we study the reduced system dynamics of a pair of such initial states, namely, $\\ket{\\psi_{S,A}}$ introduced before, with opposite reflection symmetry, which will later be shown to be the optimal initial state pair maximizing the BLP measure.\n\nThe time evolution of $\\rho^\\mathrm{coin}_{S,A}$ is particularly easy to visualize because the parametrization $\\rho^\\mathrm{coin}_t = (I + \\vec{r}_t\\cdot \\vec{\\sigma})\/2$ has only one non-zero component, i.e., $r_{x,t}$, throughout the time evolution, which is shown in figure~\\ref{fig:spinxoscillations} for representative values of the phase $\\phi$. For $\\phi=0$, which gives the standard quantum walk, both $r^S_{x,t}$ (black dotted line in figure~\\ref{fig:spinxoscillations}(a)) and $r^A_{x,t}=-r^S_{x,t}$ (black dotted line in figure~\\ref{fig:spinxoscillations}(b)) undergo damped oscillations with a period of four steps as the steady-state is reached. Since the oscillations are out of phase for these orthogonal initial states, the trace distance between such states also oscillates in time with decreasing amplitude (black dotted line in figure~\\ref{fig:spinxoscillations}(c)). Therefore, even though there is a back-flow of information from the environment to the open system in the standard walk, the damping in the oscillations shows that the information flow between the two subsystems reduces and eventually vanishes in time~\\cite{hinarejos2014}. For non-zero values of $\\phi$, oscillations in the initial state component $r^{A(S)}_{x,t}$ arise depending on the overlap with the bound states. When $\\phi=\\pi\/4$, the oscillations in $r^A_{x,t}$ die out very quickly, whereas oscillations with period two between the sublattice symmetric pair of localised states survive for $r^S_{x,t}$, as shown by the blue dot-dashed line in figure~\\ref{fig:spinxoscillations}(a)-(b). For $\\phi=\\pi\/2$, similar oscillations exist, except that they die out more slowly for $r^A_{x,t}$, which has a finite overlap with the emerging reflection anti-symmetric bound state, whereas oscillations continue with higher amplitudes for $r^S_{x,t}$ since the reflection symmetric bound state becomes more localised for this value of $\\phi$. At $\\phi=\\pi$, where bound states of both parities exist, oscillations in $r_{x,t}$ occur with higher amplitudes for both of the initial states in comparison with the other shown phase values.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale=0.90]{fig5.pdf}\n\\caption{Oscillations in the reduced coin density matrices starting from $\\ket{\\Psi_\\text{S}}$ in (a) and from $\\ket{\\Psi_\\text{A}}$ in (b) as a function of time for representative values of the phase parameter $\\phi$. 
The trace distance of these coin states $D(\\rho_S,\\rho_A)=|r_{x,A}-r_{x,S}|$ is shown in (c) and the oscillating behaviour gives rise to a non-zero BLP measure.} \n\\label{fig:spinxoscillations}\n\\end{figure}\n\nHaving obtained the time dependence of $\\rho^\\mathrm{coin}_{S,A}$, we calculate the trace distance $D(\\rho_S, \\rho_A) = |r_{S,x}-r_{A,x}|$, and display our findings in figure~\\ref{fig:spinxoscillations}(c) as a function of time for representative values of $\\phi$. In contrast to the standard quantum walk, where the trace distance oscillations die out in time, we find that they survive for non-zero $\\phi$, as at least one of $r^{S,A}_{x,t}$ keeps oscillating in time. However, we should keep in mind that the value of the trace distance also depends on the mean values $\\overline{r^{S,A}_{x,t}}$ about which the oscillations take place. For example, when $\\phi=\\pi\/2$ we get oscillations in $D(\\rho_1,\\rho_2)$ with smaller amplitudes than in $r^S_{x,t}$, which will be of importance in our later discussions.\n\nAs the persistent oscillations in the trace distance play a crucial role for the evaluation of the BLP measure in our model, the oscillation means $\\overline{r^{S,A}_{x,t}}$ and the oscillation amplitudes are plotted in figure~\\ref{fig:oscillations}(a). Comparison with figure~\\ref{fig:bandStr}(c) reveals that, as the overlap between one of the initial states and the bound states increases, $\\overline{r^{S,A}_{x,t}}$ converges to the $r_x$ of the corresponding bound state and oscillations appear. For the interval $\\phi \\in (\\pi \/2, 3 \\pi \/2)$, $\\overline{r^{S,A}_{x,t}}$ becomes the same as $r_x$ in the long time limit. The difference in $\\overline{r^{S,A}_{x,t}}$ approaches zero at $\\phi \\sim 0.6 \\pi$ and $\\phi \\sim 1.4 \\pi$, yielding very small values for the trace distance, together with the fact that essentially one of $r^{S,A}_{x,t}$ oscillates about their common mean. For other values of $\\phi$, the trace distance is mainly determined by the oscillations in $r^{S,A}_{x,t}$. Since the period of the oscillations is two time steps due to the sublattice symmetry, the changes in the trace distance can be obtained by subtracting the value at an even time step from that at the neighbouring odd time step, which is plotted in figure~\\ref{fig:oscillations}(b) at three different times. These plots clearly demonstrate that the trace distance oscillations quickly converge to their long time limit. As the bound states get more localised for certain $\\phi$ values and the overlap of the initial states with them increases, so does the amplitude of the oscillations in the trace distance.\n\nTo evaluate the BLP measure, we maximize the sum of the positive increases in the trace distance over all possible orthogonal pairs of initial states starting at the impurity site, which is shown in figure~\\ref{fig:oscillations}(c) as a function of $\\phi$ for three increasing values of time. The result reveals that the pair $\\ket{\\psi_{S,A}}$ that we used for the preceding analysis actually maximizes the sum in the BLP measure in the long-time limit. In contrast to the standard walk, the initial states maximizing the BLP measure are equal superpositions of symmetric and anti-symmetric states, and these states do not change under other decoherence mechanisms~\\cite{hinarejos2014}. \nNear $\\phi=0,\\pi\/2, 3\\pi\/2, 2\\pi$, where the bound states are weakly localised, we find that other orthogonal pairs actually maximize the BLP measure. However, these regions get smaller as we consider longer time evolutions. 
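As a minimal numerical sketch of the evaluation just described (not the code used for the figures), the discrete-time BLP sum of (\\ref{eq:nonmarkov}) can be accumulated from the reduced coin states of one fixed pair of initial conditions as follows; the qubit trace distance is computed from the singular values of the difference of the density matrices, and the input trajectories are assumed to come from a separate simulation of the walk.

\\begin{verbatim}
# Hypothetical sketch: discrete-time BLP accumulation from two trajectories
# of reduced coin density matrices rhos1[t], rhos2[t] (2x2 complex arrays).
import numpy as np

def trace_distance(rho1, rho2):
    # D = (1/2) * trace norm of (rho1 - rho2); for a Hermitian difference the
    # trace norm equals the sum of the singular values.
    return 0.5 * np.sum(np.linalg.svd(rho1 - rho2, compute_uv=False))

def blp_sum(rhos1, rhos2):
    # Sum of the positive increments of the trace distance over time; the
    # maximization over initial state pairs is carried out outside this function.
    D = np.array([trace_distance(a, b) for a, b in zip(rhos1, rhos2)])
    dD = np.diff(D)
    return np.sum(dD[dD > 0.0])
\\end{verbatim}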
The sudden drop in the BLP measure at $\\phi=\\pi\/2,3\\pi\/2$ is related to the fact that the oscillations take place about similar mean values. More importantly, we establish that the BLP measure of non-Markovianity increases with the emergence of bound states and reaches its maximum value at $\\phi=\\pi$, when the number and localisation of the bound states assume their maximum, as demonstrated by the effective localisation length in figure~\\ref{fig:bandStr}(b). The relation of non-Markovianity and localisation is also apparent when comparing the BLP curve with the average PR shown in figure~\\ref{fig:average_loc}, which is maximum at $\\phi=\\pi$.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.80]{fig6.pdf}\n\\caption{(a) Long-time limit time average of the reduced coin density matrix parameter $r_x$ for reflection symmetric ($\\ket{\\Psi_\\text{S}}$) and anti-symmetric ($\\ket{\\Psi_\\text{A}}$) initial states as a function of $\\phi$. (Time average is taken over 100 steps between $t=400$ and $t=500$.) Instantaneous values at even and odd time steps are shown by square and triangle markers, respectively. (b) Trace distance oscillation amplitudes between initial states $\\ket{\\Psi_\\text{S}}$ and $\\ket{\\Psi_\\text{A}}$ at different times show that they quickly converge to their long-time limit values for all $\\phi$. (c) BLP measure $\\mathcal{N}$ (\\ref{eq:nonmarkov}) at three different times. The maximization is performed over all the initial coin states for quantum walks starting at the impurity site. The linear increase in time reflects trace distance oscillations with constant amplitude. (See (b).) \n\\label{fig:oscillations}}\n\\end{figure}\n\nNext, we consider the Rivas-Huelga-Plenio (RHP)~\\cite{rivas10} measure of non-Markovianity, which is based on the dynamics of entanglement between the system of interest and an ancillary system. The ancillary system $A$ is assumed to have no dynamics of its own and is completely isolated, so that any initial entanglement between the system and the ancilla can be affected by the open system dynamics only. In fact, similar to the BLP measure, this measure is also a witness for the violation of divisibility. Considering the fact that no entanglement measure $E$ can increase under local CPTP maps, it is rather straightforward to observe that \n\\begin{equation}\n\tE[(\\Lambda(t,0) \\otimes I) \\rho_{\\mathrm{coin},A}] \\leq E[(\\Lambda(s,0) \\otimes I) \\rho_{\\mathrm{coin},A}]\n\\end{equation}\nfor all times $0\\leq s \\leq t$. Hence, any increase in the entanglement between the open system and its ancilla can be understood as a signature of non-Markovian memory effects in the time evolution. In other words, while the entanglement contained in $\\rho_{\\mathrm{coin},A}$ decreases monotonically for all Markovian processes, non-Markovian behaviour in the dynamics can be captured through a temporary increase of entanglement. In the same spirit as the BLP measure, one can then measure the degree of non-Markovianity using the following quantity:\n\\begin{equation}\n{\\cal{I}}^{(E)}\n=\n\\max_{\\rho_{CA}}\n\\sum_{t,\\Delta E_\\mathrm{CA}>0}\n\\Delta E_{\\mathrm{CA},t} ,\n\\end{equation}\nwhere $E_\\mathrm{CA}$ denotes the entanglement between the coin and a two-level ancillary system. For any entanglement measure $E_{CA}$, the RHP measure is found by maximizing ${\\cal{I}}^{(E)}$ over all initial reduced density matrices $\\rho_{CA}$ of the composite coin-ancilla system. 
In order to calculate this measure, we start the evolution from composite initial state $|\\Phi^+ \\rangle \\vert 0 \\rangle = \\frac{1}{\\sqrt{2}}(|\\leftarrow \\rangle_C|\\downarrow\\rangle_A+|\\rightarrow \\rangle_C |\\uparrow \\rangle_A)\\vert 0\\rangle$ and use concurrence~\\cite{wooters97} as the entanglement measure. It has been shown that when concurrence is used as entanglement measure, the optimum initial state maximizing the RHP measure is a Bell state, for a single qubit interacting with an environment~\\cite{neto16}. \n\nFigure~\\ref{fig:non_makovianity}(a) shows the variation of the concurrence in time which is calculated from the reduced coin-ancilla state after tracing out the spatial degrees of the walker during the evolution. For the standard quantum walk, the entanglement oscillations with period of four steps are damped and slowly die out with time. Therefore, the RHP measure accumulates a finite amount of non-Markovianity in the long time limit which is similar to the behaviour of the BLP measure for the standard walk. On the other hand, in contrast to the BLP measure, the nature of bound states emerging with non-zero phase $\\phi$ plays a key role for the coin-ancilla entanglement. In the presence of reflection symmetric or anti-symmetric bound states only, the concurrence dies out very quickly. This is due to the fact that the symmetric and anti-symmetric states couple to different environmental degrees of freedom. For example, with only symmetric bound states present, the symmetric part of the coin-position state remains mostly localised in the vicinity of the impurity site whereas the anti-symmetric part moves away from the origin. Hence, the coin-ancilla entanglement is quickly destroyed upon tracing out the environmental degrees of position, as the coin-ancilla state becomes an incoherent mixture. An example of this situation is displayed in figure~\\ref{fig:non_makovianity}(a) for $\\phi=\\pi\/3$. It is only when both reflection symmetric and anti-symmetric stationary states exist that some entanglement can survive which shows non-decaying oscillations. These oscillations are due to the finite dimension of the bound state subspace and the frequencies of concurrence oscillations can easily be obtained from the quasi-energy differences. Such a case is displayed in figure~\\ref{fig:non_makovianity}(a) for $\\phi=\\pi$ with two dominant periods. One period is of two steps due to the sublattice symmetric bound states with quasi-energy difference $\\pi$ and another one is approximately ten steps due to the quasi-energy difference of $\\Delta E \\approx 0.205\\pi$ between reflection symmetric and anti-symmetric states. The latter dependence again shows the importance of bound states of both parities for the RHP measure. The energy difference $\\Delta E$ does not change much as $\\phi$ changes in the domain of four bound states unless one group of bound states is very weakly bound. (See figure~\\ref{fig:bandStr})\n\nUsing the time evolution of the coin-ancilla entanglement as shown in figure~\\ref{fig:non_makovianity}(a), we evaluate the RHP measure for all values of the impurity phase $\\phi$. The results are plotted in figure~\\ref{fig:non_makovianity}(b) for three increasing values of the final time. The amount of non-Markovianity measured by the RHP measure drastically depends on whether the reflection symmetric and anti-symmetric bound states are both supported for a given $\\phi$ or not. 
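For concreteness, the following is a minimal sketch (not the code used here) of how the concurrence-based quantity ${\\cal{I}}^{(E)}$ can be accumulated from a time series of reduced coin-ancilla states; the standard Wootters formula for the two-qubit concurrence is used, and the input states are assumed to come from a separate simulation of the walk.

\\begin{verbatim}
# Hypothetical sketch: concurrence-based accumulation of temporary
# entanglement increases from a list of 4x4 coin-ancilla density matrices.
import numpy as np

def concurrence(rho):
    # Wootters concurrence C = max(0, l1 - l2 - l3 - l4), where the l_i are
    # the square roots of the eigenvalues of rho (sy x sy) rho* (sy x sy),
    # sorted in decreasing order.
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    Y = np.kron(sy, sy)
    R = rho @ Y @ rho.conj() @ Y
    evals = np.sort(np.real(np.linalg.eigvals(R)))[::-1]
    l = np.sqrt(np.clip(evals, 0.0, None))
    return max(0.0, l[0] - l[1] - l[2] - l[3])

def rhp_accumulation(rhos_CA):
    # Sum of the positive increments of the coin-ancilla entanglement in time.
    E = np.array([concurrence(r) for r in rhos_CA])
    dE = np.diff(E)
    return np.sum(dE[dE > 0.0])
\\end{verbatim}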
In the interval $\\phi \\in (0, \\pi\/2)$ where only the symmetric bound states exist, the concurrence vanishes quickly in time since the coin-ancilla Bell state can only be supported if both symmetric and anti-symmetric bound states exist. Therefore, the coupling of the symmetric and anti-symmetric coin states to different environmental degrees of freedom completely destroys the Bell state of the coin-ancilla system and results in a vanishing value for the RHP measure. A similar situation occurs in the interval $\\phi \\in (3\\pi\/2, 2\\pi)$ where only reflection anti-symmetric bound states exist and coin-ancilla entanglement is destroyed. In the interval $\\phi \\in (\\pi\/2, 3\\pi\/2)$ where bound states of both symmetries exist, the coin-ancilla entanglement is more robust and the RHP measure captures the non-Markovianity increasing linearly with $t$ in the long time limit due to non-decaying oscillations in the coin-ancilla entanglement. In this $\\phi$ interval, the RHP displays the same behaviour as seen for the BLP measure in figure~\\ref{fig:oscillations}(c). \n\n\\section{\\label{sec:conc}Conclusion}\n\nWe have provided a comprehensive and systematic analysis of non-Markovianity in a quantum walk model with a phase impurity in relation with the phenomenon of localisation. At the heart of analysis lies the manifestation of bound states emerging due to the existence of the phase impurity at the starting site of the walker. We have first presented a technique to analytically obtain the bound states of the model making use of the transfer matrix method. These bound states emerge in one or two sublattice symmetric pairs possessing definite reflection symmetry. With this knowledge at hand, we have explored the localisation properties of the walker in the position space. To this end, we have adopted two initial state independent quantities to measure the degree of localisation, namely, the effective localisation length for all eigenstates and an average participation ratio after time evolution over all initial states starting at the impurity site. Our analysis clearly demonstrates that the degree of localisation of the walker is directly determined by the properties of the bound states.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.8]{fig7.pdf}\n\\caption{(a) Concurrence between the coin and the ancilla qubit as a function of time for representative values of the phase parameter $\\phi$. When bound states with both positive and negative reflection parity exist, the concurrence shows oscillations. (See text for the involved frequencies.) (b) Concurrence based RHP measure as a function of $\\phi$ at three different time steps showing linear increase with time for $\\phi \\in (\\pi\/2,3\\pi\/2)$. RHP has a vanishing value when well-formed bound states of only positive or negative reflection parity exist.\n\\label{fig:non_makovianity}}\n\\end{figure}\n\nMore importantly, our main contribution in this work is the unveiling of an intrinsic relation between the emergence of bound states and the degree of non-Markovianity of the dynamics of the walker. In order to study non-Markovian behaviour in the time evolution of the walker, after tracing out the spatial degrees of freedom, we have utilized two distinct measures of quantum non-Markovianity, i.e., the BLP and the RHP measures based on the dynamics of trace distance and entanglement, respectively. 
These measures help us to understand, from different perspectives, the information flow between the principal coin system and the position system forming the environment. We show that, in the case of the existence of spatial decoherence in the form of a phase impurity, the BLP measure is optimized by the eigenstates of the coin operator for almost all values of the phase $\\phi$. Note that when one has decoherence in terms of broken links instead, the degree of decoherence does not change the optimal state maximizing the BLP measure~\\cite{hinarejos2014}. Our investigation also proves that the phase impurity amplifies the degree of non-Markovianity quantified by the BLP measure.\nThe underlying reason behind this behaviour is the oscillations in the state of the coin, which essentially take place between the sublattice symmetric bound state components with a period of two steps. Then, in general, an increasing overlap between the initial and the bound states implies a greater degree of non-Markovianity. However, also note that when the time averages of the reduced coin states corresponding to two orthogonal initial states are close to each other, the BLP measure drops abruptly.\n\nNext, we employed the RHP measure to analyse the degree of non-Markovianity in the dynamics of the walker. When the coin state is maximally entangled with an ancillary system initially, the amount of entanglement is known to oscillate in time for the standard walk. However, our examination demonstrates that, in the presence of a phase impurity, if the bound subspace supports bound states of only one reflection parity, the coin-ancilla entanglement vanishes after a few time steps and the RHP measure becomes very small compared to the standard walk case. On the other hand, when both reflection symmetric and anti-symmetric bound states are present, the entanglement oscillations are persistent in time, leading to high values of the RHP measure. Thus, while the RHP measure is generally in good agreement with the BLP measure when both even and odd parity bound states exist, the RHP measure fails to reliably detect the non-Markovian behaviour when only symmetric or anti-symmetric bound states are present. Most importantly, as can be clearly seen from both measures, maximum non-Markovianity is reached where our localisation measures determined by the bound states also become maximum.\nThe relationship between non-Markovianity and localisation has been discussed in random static disorder models~\\cite{lorenzo2017quantum,kumar2018}, where non-Markovianity increases with disorder.\nWe observe a more nuanced relation between bound states and non-Markovianity, as discussed above. \n\nWe would like to indicate that the experimental realization of the model we presented here is quite feasible with today's technology. The time-multiplexing quantum walk employs laser light pulses going successively around a fiber loop, where the position space is effectively encoded in the time domain from the point of view of the detectors \\cite{schreiber2010}. The main advantage of this setup is its scalability and its long coherence times, i.e., it only requires a fixed number of optical elements to realize the quantum walk for a relatively large number of steps. The recent developments in the setup allow deterministic out-coupling of the light pulses from any site by utilizing electro-optic modulators \\cite{nitsche2018}. 
It is also possible to introduce arbitrary phases specific to any site by programming of the electro-optic modulators accordingly, which actually would allow the realization of the model we provided here \\cite{schreiber2011, nitsche2016}. \n\nAs a concluding remark, it would be interesting to study whether the oscillations due to the bound states become robust in the case of many-body interactions with more degrees of freedom in the context of quantum walks as a future work.\n \n\n\\ack{\n\\.{I}.Y. is supported by M\\v{S}MT under Grant No. RVO 14000 and the Czech Science Foundation under Grant No. GA CR 19-15744Y.\nG.K. is supported by the BAGEP Award of the Science\nAcademy and the TUBA-GEBIP Award of the Turkish Academy of Sciences. G.K. is also supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under\nGrant No. 117F317.\nB.D. and A.L.S. are supported by Istanbul Technical University Scientific Research Projects Department (ITU BAP No. 40881). A.L.S. would like to acknowledge useful discussions with {\\c S}.E. Kocaba{\\c s} at earlier stages of this work.\n}\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Background \\& Summary}\n\nMany assessments of future electricity demand in India project large increases in electricity consumption from adoption of air conditioning technologies in the buildings sector over the next two decades \\cite{weo20, ev, iea}. This large growth is likely to make India among the top nations in terms of electricity consumption, implying that technology choices related to energy consumption and production in India are likely to play a significant impact on global climate change mitigation efforts. Additionally, the Indian government has been pushing for the transportation sector's electrification, starting with two- and three-wheel vehicles,which is further likely to increase overall electricity demand. As of 2020 in India, there are 152,000 registered electric vehicles \\cite{ev}. Air conditioning (AC) related electricity demand accounted for 32.7 TWh, contributing to less than 2.5\\% of the total demand in 2019 \\cite{iea}. However, both air conditioning and transport electrification are anticipated to introduce structural changes in the temporal and spatial trends in electricity consumption patterns, that has important ramifications for long-term resource planning for the electricity sector \\cite{teri}. This paper presents an bottom-up approach to estimate electricity consumption in India for various scenarios of technology and policy adoption with a specific focus on providing aggregated consumption estimates as well as spatio-temporally resolved consumption profiles that would be relevant for regional and national electricity system planning studies. The approach enables quantifying the impact of various growth and technology adoption scenarios on quantity and pattern in electricity consumption. The datasets detailed in this paper include annual energy consumption at India's state, regional, and national levels as visualized in Fig. \\ref{fig:demand}, as well as underlying consumption profiles at an hourly time resolution. The annual energy consumption is forecasted on a five-year increment to 2050. Fig. \\ref{fig:summary_results} shows one scenario of national electricity demand forecast. In addition to the snapshot of annual consumption, hourly load profiles are developed at the same resolution as seen in Fig. \\ref{fig:profile_results}. 
\n\nThe forecasting is divided into two steps: business-as-usual and technology. The business-as-usual step is a statistical model that can only extrapolate the data it is trained on, i.e., historical electricity demand. The technology model is a bottom-up approach that adds new loads to the total demand. Among new loads, we focus on residential and commercial cooling as well as various electric vehicles (EVs). Some key insights from cooling\\cite{iea} and EV\\cite{ev} studies highlighting peak demand development motivate the need for demand forecasting at the hourly resolution. Cooling demand, driven mainly by split-unit air conditioning installations, is expected to increase the peak-to-mean ratio (also sometimes referred to as the \"peakiness\") of electricity demand in India as well as shift the timing of peak demand from evenings to midnight\\cite{iea}. While electric vehicles do not constitute a large portion of the total demand, certain charging schemes can contribute significantly to the peak demand\\cite{ev}. Numerous energy demand forecasts for India have recently been published as decadal snapshots \\cite{weo20, teri, brookings}; however, demand at an hourly resolution has not been presented in these studies. Our approach enables quantifying the impact of different technology and structural elements, such as adopting energy-efficient vs. baseline cooling technology or work-place charging vs. home charging for EVs, on the hourly electricity consumption profiles. These insights and the accompanying data sets are needed to carry out generation and transmission expansion as well as distribution network planning, and are thus essential for sustainable energy infrastructure development in the Indian context. \n\n\nSimilar to other forecasting studies, we model Gross domestic product (GDP) growth \\cite{mospi} to be the main econometric driver of the business-as-usual demand forecasting, and thus three scenarios are introduced: slow, stable, and rapid GDP growth. We examine two AC load scenarios: energy-efficient equipment and baseline equipment per the International Energy Agency's Future of Cooling study \\cite{iea}. Finally, we evaluate three EV charging mechanisms: home, work, and public charging. In total, the data sets span three input dimensions, yielding 18 scenarios. Technology adoption growth has been correlated with economic growth under the assumption that new technologies are adopted faster when the economy is growing faster and vice versa. We present two cooling scenarios to highlight the difference between energy-efficient and regular air conditioning units and to bring attention to the need for policies and programs that favor energy-efficient cooling unit sales. Furthermore, we present various EV charging mechanisms to inspect the demand impacts that electric vehicle charging can have on the electric grid at different times. The produced data can be used as input to electricity infrastructure planning at both the distribution and transmission levels. \n\\section*{Methods}\nFig. \\ref{fig:schematic} illustrates the major steps of our proposed demand forecasting approach. We use two models to estimate future electricity demand in India. In the first model, the business-as-usual model, we use a linear regression to project daily peak demand and daily total consumption on a regional basis. We then add natural variation to the projections by finding the error between the training data and the model results and scaling it to every region based on seasonality. 
Then we fit the projected peak and total consumption to an annual hourly load profile for 2015 \\cite{shakti} featuring an evening peak \\cite{ivan}. In the second model, the technology model, we take AC and EV adoption into account as an additive component on top of the business-as-usual predictions. GDP data, which is an independent variable in the model, is chosen to be the main driver of growth of the business-as-usual scenario as well as of the technology adoption rates. The input data used are publicly available and are referenced in Table \\ref{table:data}.\n\n\\subsection*{Input data processing}\n\nAlthough GDP is widely used for forecasting energy demand, it is specifically essential in the case of India, where economic growth is expected to ramp up over the next few decades similar to the recent trends in China \\cite{mckinsey}. We based our demand forecast on GDP projections from a PricewaterhouseCoopers (PwC) report \\cite{pwc}, which projects India's GDP to grow from 3.6 trillion USD in 2020 to 28 trillion USD in 2050. Considering the historical national GDP data for India starting in 1990, we fit and project an exponential curve for rapid growth and a Gompertz curve for slow growth \\cite{gompertz}, as detailed in Table \\ref{table:gdp}. We use PwC's projections to define the stable GDP growth scenario. Curve fitting and projection results are illustrated in Supplementary Fig. \\ref{fig:sup-gdp}. The rapid growth scenario produces an annual average growth rate of 9.5\\%; PwC's growth rates start at 7.8\\% in the first projected decade and end at 6.2\\% in the final projected decade. The slow growth scenario starts at a 7.2\\% growth rate in the first projected decade and ends at 3.9\\% in the final projected decade. To break down the regional energy consumption projections to the state level, we use the ratio of the GDP per capita of the corresponding state to the GDP per capita of the region it is in. For each GDP growth scenario, we fit the same functions given state-wise data to produce GDP forecasts at the same resolution. GDP per capita at the state level is computed using the projected GDP data and state-level population projections \\cite{ssrn}.\n\n\\subsubsection*{GDP dependence and limitation}\n\nRelating growth in electricity demand to GDP is a strong generalization; however, it is not a novel one in the case of India. A strong correlation between economic growth and energy consumption has been established in the Indian context in this study and other studies \\cite{eia} given data from the past two decades \\cite{mospi}. We recognize that GDP as a metric of economic growth has several limitations, particularly related to projecting how economic growth is distributed among society within a state or nation. This may be the strongest limitation of the data we are presenting in the manuscript. However, the lack of historical records and long-term projections of alternative open-access economic data at the desired spatial and temporal resolution limits the development of a framework to project energy consumption with other metrics. While GDP and energy consumption growths may differ in the long run, there is an evident correlation between the two that can be used to estimate long-run energy consumption growth. Moving beyond linear regression may yield better results; however, data scarcity again limits the development of more complex models. 
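Returning to the GDP projections introduced above, the following is a minimal illustrative sketch (not the code behind Table \\ref{table:gdp}) of fitting an exponential curve for rapid growth and a Gompertz curve for slow growth with \\texttt{scipy}; the functional forms, parameter values and toy data are assumptions for illustration only.

\\begin{verbatim}
# Hypothetical sketch: exponential and Gompertz fits to a historical GDP series.
# The toy data and functional forms are illustrative, not the published fits.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, b):
    return a * np.exp(b * t)

def gompertz(t, K, b, c):
    # Saturating growth towards a carrying capacity K
    return K * np.exp(-b * np.exp(-c * t))

years = np.arange(1990, 2021)
t = years - 1990
gdp = 0.32 * np.exp(0.075 * t)          # toy series (trillion USD), not real data

p_exp, _ = curve_fit(exponential, t, gdp, p0=[0.3, 0.07])
p_gom, _ = curve_fit(gompertz, t, gdp, p0=[30.0, 4.0, 0.03], maxfev=20000)

t_future = np.arange(0, 2050 - 1990 + 1)
rapid = exponential(t_future, *p_exp)   # rapid-growth projection to 2050
slow = gompertz(t_future, *p_gom)       # slow-growth projection to 2050
\\end{verbatim}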
Furthermore, this manuscript motivates the need for more bottom-up projections and not just regression models because historical consumption cannot infer consumption trends from new demand sources such as cooling and EVs.\n\nAdditionally, since the Future of Cooling study by the International Energy Agency relies on GDP forecasts developed by the International Monetary Fund\\cite{iea}, we elected to use a similar metric. We intentionally develop a large bandwidth of projection scenarios to mitigate the limitation of an individual snapshot representing a singular assumption. The motivation behind presenting the described results is ability to compare different scenarios and post-analyze the demand growth and the trade-offs. To produce a large bandwidth of growth scenarios we needed to use a straightforward metric that has enough historical data to produce various fitted curves for projections.\n\n\\subsection*{Business-as-usual model}\n\nThe business as usual projections are modeled with a linear regression considering weather and economic growth features. The ground truth historical daily peak and total consumption for each electric grid were obtained from the Power System Operation Corporation (POSOCO) for 2014-2019 \\cite{posoco}. The GDP used in the model was obtained, as explained in the previous section. Weather data was secured from the NASA Merra-2 data set \\cite{nasa}. The choice of features for the regression model is limited to GDP and weather variation due to the limitation in availability of data, both historical and future projections, at the desired spatial and temporal resolution. GDP is identified as a long-term parameter driving growth in year over year demand projections as highlighted in Fig. \\ref{fig:longrun}. Weather data is identified as a short-term parameter driving seasonal variation within a year's demand projections as highlighted in Fig. \\ref{fig:shortrun}. Previous parametric analysis on these features and their coefficient for short and long term demand forecasting in both time and frequency domain \\cite{meia} reinforce their use as features for the business-as-usual regression model. We present detailed outcomes for the Southern region, with further details available in \\cite{meia}.\n\n\\subsubsection*{NASA Merra 2 data acquisition}\n\nFor each of the five electric grid demand regions highlighted in right panel of Fig. \\ref{fig:demand}, the largest cities in each region were identified using population data made available by the United Nations\\cite{pop}. Then, the city's latitude and longitude were used to pull down the corresponding environmental data from the Nasa Merra-2 data set. The cities used for each of the five regions are listed here:\n\\begin{itemize}\n \\item Northern: Delhi, Jaipur, Lucknow, Kanpur, Ghaziabad, Ludhiana, Agra\n \\item Western: Mumbai, Ahmadabad, Surat, Pune, Nagpur, Thane, Bhopal, Indore, Pimpri-Chinchwad\n \\item Eastern: Kolkata, Patna, Ranchi (Howrah was ignored because the environmental factors are the same as Kolkata)\n \\item Southern: Hyderabad, Bangalore, Chennai, Visakhapatnam, Coimbatore, Vijayawada, Madurai\n \\item Northeast: Guwahati, Agartala, Imphal\n\\end{itemize}\n\nFrom the NASA set, 11 variables were included for each city: specific humidity, temperature, eastward wind, and northward wind (all 2m above the surface and 10m above the surface - eight total variables), precipitable ice water, precipitable liquid water, and precipitable water vapor. 
In particular, the instantaneous two-dimensional collection \"inst1\\_2d\\_asm\\_Nx (M2I1NXASM)\" from NASA was used. Detailed descriptions of these variables are available in the Merra-2 file specification provided by NASA \\cite{nasa}. The environmental variables available from the NASA MERRA-2 dataset were given on an hourly basis. The daily minimum, daily maximum, and daily average were calculated for each of the 11 variables for each day.\n\n\\subsubsection*{Forecasts}\nThe business-as-usual demand forecasting problem was divided into ten separate problems, corresponding to one problem each for daily peak demand and daily total consumption for each of the five regional grids shown in Figure \\ref{fig:demand}. To ensure the model would not overfit the data, the model was trained with Elastic Net \\cite{scikit-learn} to regularize the results, and validated on held-out 2019 data. An L1 ratio (Lasso weight) of 0.9 was chosen to minimize the error on the 2019 validation set. Then all of the models were trained with a 0.9 L1 ratio on the full dataset.\n\n\\subsubsection*{Addition of natural variation}\nThis step aimed to match the statistical characteristics of an actual load year with the projected year. 2019 was used to derive the differences. Natural variation was estimated by a distribution characterized by the mean and standard deviation of the differences (in absolute value). Then, a natural variation adjustment was added to each day (with a random true\/false bit for positive or negative variation). The noise was calculated separately for each region, for peak demand, and for daily consumption. The natural variation (noise) vectors used are available on the GitHub repository for this paper \\cite{git}. This part of the process is non-deterministic and replication of the results requires using the same natural variation vectors used in our projections.\n\n\\subsubsection*{Hourly profiles}\nThe statistical inference model presented above forecasts daily consumption driven by state-level economic parameters and weather data. The produced projections are at a daily resolution. We downscaled the data to hourly load profiles based on the 2015 hourly load profile data \\cite{shakti}. The results of the regression model are at the regional level; they are broken down state-wise pro rata based on the ratio of state-wise to region-wise GDP per capita projections for the respective year. To do so, we tag each day of the year by the month it corresponds to and whether it is a weekday or a weekend. We cluster demand for each hour by month and day type. Each hour of the day then has its own cluster of demand data from 2015, based on the assumption that the same hour of the day for a given month and the same day type will exhibit similar demand behavior. This biases the construction of the profiles to demand patterns from 2015 only. To minimize the impact of this bias, we use the historical weather data\\cite{nasa} of the testing data years (2014-2019) for each day to simulate daily temperature variations that are reflected in higher or lower demand. We sample weather data for each day and compare it to 2015, and subsequently use the normalized difference to scale the demand on a daily basis. Finally, we sample demand for each hour of the year from the corresponding cluster (defined by month and weekend or weekday) and scale it accordingly. 
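The sampling and scaling step just described can be summarized by the following minimal sketch (not the published pipeline); the data frame layout, column names and the weather-derived scaling factor are illustrative assumptions, and matching the projected daily peak would add one further scaling step.

\\begin{verbatim}
# Hypothetical sketch: downscale a projected daily total to an hourly profile
# by sampling from 2015 clusters keyed by (month, weekday-or-weekend, hour).
import numpy as np
import pandas as pd

def hourly_profile_for_day(hist_2015, date, daily_total, weather_scale):
    # hist_2015: DataFrame with columns [month, is_weekend, hour, demand]
    # holding the 2015 hourly observations; weather_scale: factor derived from
    # the normalized difference between the sampled weather and 2015 weather.
    ts = pd.Timestamp(date)
    cluster = hist_2015[(hist_2015.month == ts.month) &
                        (hist_2015.is_weekend == (ts.dayofweek >= 5))]
    # Sample one 2015 observation per hour from the matching cluster
    shape = np.array([cluster.loc[cluster.hour == h, "demand"].sample(1).iloc[0]
                      for h in range(24)])
    shape = shape * weather_scale
    # Rescale so that the hourly values sum to the projected daily total
    return shape * (daily_total / shape.sum())
\\end{verbatim}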
Constructing the hourly load profiles and fitting them to match the projected daily consumption and the projected daily peak demand then becomes a trivial exercise of sampling and fitting from the corresponding clusters and weather data space. The 2015 hourly demand data used in this study are documented in detail elsewhere and have been used in projecting demand for supply-side modeling efforts \\cite{ivan}. The limited availability of complete hourly data at the state and regional levels in India biases the hourly profiles to the 2015 datasets. However, the business-as-usual projections are for existing demands composed mainly of lighting and appliances at the residential level and large daytime loads at the commercial level \\cite{usaid}. Our approach implicitly assumes that energy consumption trends for these loads will follow historical patterns, and therefore sampling from a given year with post-processed noise variation can yield reasonable results.\n\n\\subsubsection*{Impact of climate change on business-as-usual demand}\nAs per the International Energy Agency (IEA) World Energy Outlook (WEO) 2019\\cite{weo19}, only 5\\% of households in India currently own air conditioning units, and 2.6\\% of commercial building energy use is from space cooling. Historically, electricity consumption in India has been driven by lighting and appliances in the residential sector \\cite{usaid}, with the commercial and industrial sectors contributing via larger daytime loads. Since cooling demand is not present in the historical data that the business-as-usual regression model learns from, there is no parametric value in projecting an increase in temperatures: there is no evident correlation between temperature increase and lighting or appliance use. Moreover, since space cooling is a small percentage of current electricity demand in India, no major trends can be identified given the limited daily training data used for the business-as-usual regression. It is therefore reasonable to assume that weather remains constant for the business-as-usual demand.\n\n\\subsection*{Technology model}\nSince a regression model can only produce forecasts of data it can learn from, additional bottom-up processing must be carried out to get a full picture of India's future demand. We identify trends and data points at the state level to build regional profiles as well as a national one. \n\n\\subsubsection*{Cooling}\nCooling is divided into two main categories: residential and commercial. The ratio of commercial to residential consumption is computed from state-level data \\cite{stats} and is used as the ratio of commercial to residential cooling demand. Using the IEA's baseline and efficient cooling projections from the Future of Cooling study \\cite{iea}, we use the annual sales and unit types to calculate the energy consumption and growth rate at the national level and pro-rate it down to the state level based on GDP per capita. Surveyed hourly demand profiles \\cite{usaid} are indicators of behavioral cooling energy consumption patterns, as exemplified in Supplementary Fig. \\ref{fig:sup-ac-res} and \\ref{fig:sup-ac-com}. The survey produces various profiles by climate season, household income, and household size. We apply a time-domain convolution of these profiles to generate a representative profile for each state across the various climates and seasons.
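\n\nAs a rough illustration of this step, the sketch below combines hypothetical surveyed 24-hour profiles into a single smoothed state profile through a weighted sum followed by a short moving-average convolution; the weights and smoothing window are assumptions, not values from the study.\n\\begin{verbatim}\n# Illustrative combination of surveyed cooling profiles into one state profile.\nimport numpy as np\n\ndef state_cooling_profile(income_profiles, income_weights, window=3):\n    # income_profiles: array of shape (groups, 24), one profile per income group\n    profiles = np.asarray(income_profiles, dtype=float)\n    weights = np.asarray(income_weights, dtype=float)\n    weights = weights \/ weights.sum()\n    combined = weights @ profiles            # weighted sum across income groups\n    kernel = np.ones(window) \/ window        # moving-average smoothing kernel\n    # Tile to handle wrap-around at midnight, then keep the middle 24 hours.\n    smooth = np.convolve(np.tile(combined, 3), kernel, mode='same')[24:48]\n    return smooth \/ smooth.sum()             # normalized 24-hour shape\n\\end{verbatim}\n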
\n\nWe can generate the air conditioning demand profiles for two weather seasons (winter and summer) by convolving the sample profiles into a smooth aggregated demand profile. Moreover, coincidence factors must be applied to properly estimate the simultaneity of the demand and its peak. Two coincidence factors are identified, one for weekdays and one for weekends, with values extracted from a Reference Network Model Toolkit \\cite{5504171}. We break down the national cooling demand into residential and commercial components at the state level by identifying state-level sector size and growth trends. Scaling the profiles to match the projected cooling energy demand produces hourly energy consumption profiles for residential and commercial cooling. Aggregating the appropriate states together produces the same results at the regional level.\n\nMore importantly, the IEA's Future of Cooling study \\cite{iea} stresses the usage of Cooling Degree Days (CDD) to project the dependency of cooling demand on temperature. The unit consumption patterns and capacity projections for India's share of global cooling demand are based on growth in electrification and urbanization as well as Purchasing Power Parity. The IEA Future of Cooling study estimates that a 1-degree Celsius increase in decadal average temperature in 2050 will lead to 25\\% more CDD, and a 2-degree Celsius increase will lead to 50\\% more CDD. Climate change impacts are considered in the unit sales and energy consumption data used from the IEA's Future of Cooling study. In our analysis, we use the IEA's 50\\% increase in CDD to model cooling demand in 2050. For prior periods, we interpolate CDD between 2018 and 2050 to model cooling demand. The increase in CDD and the addition of noise variation are introduced to model the projected increase in peak demand due to climate change. Specifically, this analysis does not consider the frequency or forecasting of extreme weather events.\n\n\\subsubsection*{Electric vehicles}\nThe second component of the technology model projects EV demand in India. The data presented here consider electric two-, three-, and four-wheel vehicles. Two-wheelers, the dominant vehicle type in terms of annual sales in India \\cite{vehicle_sales}, are expected to be electrified first, followed by three-wheelers and regular cars \\cite{ey}. The Indian government has set a goal of converting 100\\% of two-wheeler sales and 30\\% of all vehicle sales to electric by 2030 \\cite{nitiaayog}, so the starting point is vehicle sales at the state level \\cite{vehicle_sales}. Using the regression equations of the corresponding GDP growth scenarios, we can project vehicle sales, with the 2030 EV targets met in the rapid growth scenario. From vehicle sales and conversion rates, we get an estimate of the number of EVs that will require charging. From a market survey on the average commute distance of vehicles in urban and rural areas \\cite{ey}, long- and short-range battery capacities and EV energy consumption can be estimated. We introduce a mix of EV sales that starts with short-range vehicles as the dominant market product and shifts to long-range vehicles dominating the market by 2050. This trend reflects the current economic competitiveness of short-range EVs versus existing internal combustion engine vehicles, as well as the long-term competitiveness of long-range EVs as battery costs decline.
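\n\nA hypothetical sketch of this conversion from projected sales to charging energy is shown below; the segment shares, battery efficiencies, and commute distances are made-up illustrative numbers, not values from the cited surveys.\n\\begin{verbatim}\n# Illustrative conversion of projected vehicle sales into EV charging energy.\ndef ev_fleet_energy(sales_by_segment, ev_share_by_segment,\n                    kwh_per_km_by_segment, daily_km_by_segment):\n    # Returns (EVs added, average daily charging energy in kWh) for one year.\n    total_evs, daily_kwh = 0.0, 0.0\n    for segment, sales in sales_by_segment.items():\n        evs = sales * ev_share_by_segment[segment]\n        total_evs += evs\n        daily_kwh += evs * daily_km_by_segment[segment] * kwh_per_km_by_segment[segment]\n    return total_evs, daily_kwh\n\n# Example with made-up numbers for two-wheelers and four-wheelers:\nsales = {'2w': 1.5e6, '4w': 3.0e5}\nshare = {'2w': 0.6, '4w': 0.3}\nkwh_km = {'2w': 0.03, '4w': 0.15}\nkm_day = {'2w': 25.0, '4w': 35.0}\nprint(ev_fleet_energy(sales, share, kwh_km, km_day))\n\\end{verbatim}\n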
\n\nSimilar to the construction of the cooling profiles, a coincidence factor must be implemented so as not to over-predict peak EV charging demand. Since this is a new consumption behavior, and given the relatively small batteries of two-wheelers and three-wheelers, it is assumed that every vehicle needs to charge every other day on average for urban drivers and every day for rural ones. This yields an average daily consumption from EV charging. As shown in Supplementary Fig. \\ref{fig:sup-ev-profiles}, three different charging profiles (home, work, and public) are identified in an EV pilot project study in Mexico City \\cite{berkeley}. While Mexico and India differ greatly in many socio-economic aspects, the hourly EV charging profiles were collected for a pilot project deploying electric two-wheelers and small sedans in the metropolitan area of Mexico City. This presents two synergies enabling the use of these charging profiles in India. Under the assumptions that EV deployment in India will be more prevalent in urban areas and that smaller vehicles (two-wheelers and three-wheelers) will be converted first, the charging data collected in \\cite{berkeley} are a suitable fit for potential EV charging schemes in India. Energy consumption is computed from vehicle sales, sales projections, and electrification conversion rates. That total is then distributed under the chosen charging profile. Time-domain convolution of the profiles is applied to smooth the peakiness of the total constructed hourly time series.\n\n\\subsubsection*{Data Dependence}\n\nThe technology model relies heavily on surveyed data to produce the representative hourly profiles for cooling and electric vehicle demands at the state level. This is indeed a limitation, and our projections assume that future technology adopters will behave just like initial adopters. In the absence of a better alternative at a similar spatial and temporal resolution, the bottom-up modeling effort provides a reasonable estimate of the temporal patterns expected from these new demand sources. For the hourly sample cooling profiles, the main assumption is that cooling energy consumption depends only on weather and econometric patterns. Specifically, we apply a weighted-sum convolution of the income-level cooling profiles based on the states' GDP per capita ranking. For the total cooling demand at the national level, we rely on the air conditioning unit sales projections as well as the breakdown of unit energy consumption under the baseline and efficient scenarios of the IEA's Future of Cooling report \\cite{iea}. We pro-rate residential cooling at the state level using the GDP per capita projections. For commercial cooling, we use the state-wise sector growth trends \\cite{energystatistics}. A sanity check for this breakdown is to sum the residential and commercial state-wise cooling demand and compare it to the IEA's all-India annual cooling electricity consumption projections to 2050; the difference is highlighted in Supplementary Fig. \\ref{fig:sup-ac_compare} and \\ref{fig:sup-cooling}. Regarding the EV profiles, while there are alternative choices of charging schemes, we identified the synergies with the Berkeley study \\cite{berkeley} as best reflecting the bookend EV charging scenarios across India.
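\n\nA simplified sketch of how a daily EV charging energy estimate is spread over one of these profiles is shown below; the charging frequency, charger rating, and coincidence factor are illustrative assumptions only, and the treatment of the coincident peak is one possible simplification rather than the exact procedure used here.\n\\begin{verbatim}\n# Illustrative spreading of daily EV charging energy over a 24-hour profile.\nimport numpy as np\n\ndef ev_hourly_demand(n_evs, kwh_per_charge, charge_every_n_days,\n                     charging_profile_24h, charger_kw, coincidence_factor=0.6):\n    profile = np.asarray(charging_profile_24h, dtype=float)\n    profile = profile \/ profile.sum()            # normalized charging profile\n    daily_kwh = n_evs * kwh_per_charge \/ charge_every_n_days\n    hourly_kwh = daily_kwh * profile             # energy drawn in each hour\n    # Coincident peak estimate: not every charger draws power at the same time.\n    peak_kw = coincidence_factor * n_evs * charger_kw\n    return hourly_kwh, peak_kw\n\\end{verbatim}\n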
\n\n\\section*{Data Records}\nThe data are uploaded to Zenodo \\cite{marc_barbar_2020_4564581} and are available for download at \\href{https:\/\/doi.org\/10.5281\/zenodo.4564581}{https:\/\/doi.org\/10.5281\/zenodo.4564581}. The path leading to a CSV file indicates the scenario corresponding to the results in that file. The folder hierarchy is broken down as follows:\n\n\\begin{enumerate}\n \\item GDP Growth: slow, stable, rapid\n \\item EV charging: home, work, public\n \\item Cooling: baseline, efficient\n \\item Type: detailed, summary\n\\end{enumerate}\n\nThe \\textit{detailed} results are tables of the itemized hourly demand profile of each considered scenario; each file contains 8760 rows (the number of hours in a year). The \\textit{summary} results are tables of the itemized annual energy consumption for the considered years; each file contains seven rows (the number of considered future years). Both file types are itemized the same way, as per Table \\ref{table:headers}. The path of each file is the reference to the specific scenario that the data in the table represent. For example, the \\textit{SR.csv} file under \\textit{slow\/home\/efficient\/summary} is the summary file for the case of slow economic growth, home EV charging, and energy-efficient air conditioning consumption.\n\n\\section*{Technical Validation}\nThe business-as-usual statistical model is validated using standard statistical metrics when backtesting is applied. Further details on the backtesting are available elsewhere \\cite{meia}. For the technology model, we compare our estimates to the IEA's WEO \\cite{weo20,weo19,weo18,weo17} and Brookings India \\cite{brookings}. Furthermore, our projections compare favorably with the EV projections of the IEA's Global Electric Vehicle Outlook 2020 \\cite{ev}.\n\n\\subsection*{Back testing}\n\nDaily consumption and peak demand are projected for all five regions; we show the daily consumption back tests for the Southern Region in Fig. \\ref{fig:regression}. More results can be found in the GitHub repository. It is important to note that the regression model captures the organic growth of the historical demand as well as the seasonal variation in demand, but is not accurate at predicting daily variation. This shortcoming can be attributed to the small training dataset that is available. To compensate for it, we add additional noise variation as discussed earlier in the Methods section. We compare the R-squared value of the regression-only time series with that of the regression-plus-noise time series, as shown in Table \\ref{table:r2}. Additionally, selected parameter performance metrics of the model for the Southern Region are presented in Table \\ref{table:params}. The model's independent variables are the historical temperature and humidity data at 2 m and 10 m above the surface for the selected cities and the GDP data for the state. Various weather parameters have higher coefficients than GDP, since the latter is not as granular a metric, but GDP is still factored into longer-term growth as interpreted by its Fourier component \\cite{meia}.
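\n\nA hedged sketch of this back test is shown below: an Elastic Net model is fit on the 2014-2018 daily data and scored on the held-out 2019 data. The file name, column names, and the regularization strength are hypothetical placeholders.\n\\begin{verbatim}\n# Sketch of the back test: train on 2014-2018, validate on held-out 2019 data.\nimport pandas as pd\nfrom sklearn.linear_model import ElasticNet\nfrom sklearn.metrics import r2_score\n\ndf = pd.read_csv('southern_region_daily.csv', parse_dates=['date'])\nfeatures = [c for c in df.columns if c not in ('date', 'daily_consumption')]\n\ntrain = df[df.date.dt.year < 2019]\ntest = df[df.date.dt.year == 2019]\n\nmodel = ElasticNet(l1_ratio=0.9, alpha=1.0)   # L1 ratio of 0.9, as in the text\nmodel.fit(train[features], train['daily_consumption'])\npred = model.predict(test[features])\nprint('R2 on 2019 hold-out:', r2_score(test['daily_consumption'], pred))\n\\end{verbatim}\n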
\n\n\\subsection*{Cross-comparison}\nSupplementary Fig. \\ref{fig:sup-stated-weo} and \\ref{fig:sup-sustainable-weo} compare the forecasting results to the WEO 2020 projections of India's Energy Demand to 2040. Our band of projections is notably wider due to the large number of scenarios that are combined to forecast energy demand. We further compare our results to Brookings India's study in Supplementary Fig. \\ref{fig:sup-brookings}. We also compare our electric vehicle projections to those of the Global EV Outlook in Supplementary Fig. \\ref{fig:sup-ev}. Finally, we compare the contribution of air conditioning demand to peak demand against the Future of Cooling study in Supplementary Fig. \\ref{fig:sup-ac_compare}.\n\n\\subsection*{COVID-19 pandemic impact on year 2020}\nThe COVID-19 pandemic has drastically affected the global population in various ways. Energy consumption dropped severely as people were advised to stay at home. While it is not possible to project such \"Black Swan\" events from historical data, their long-term effects can be modeled as delayed growth under various recovery schemes. Fig. \\ref{fig:comparison} shows that our projections for the month of January 2020 align with the realized demand, a period prior to the global outbreak of COVID-19. Evidently, there is a strong mismatch in the following months as the outbreak developed into a global pandemic. However, in the later part of the year, signs of recovery are noticeable as the historical daily consumption once again reaches projected levels.\n\nThe impact of extreme events on energy consumption is difficult to predict at a granular level. Our projections are at five-year increments, so such yearly variations are smoothed out and the regression-towards-the-mean phenomenon is observed. Moreover, the recovery from extreme events and their long-term impact can depend on many factors: economic, social, scientific, and more. Without modeling those events in detail, projected growth can model the long-term average growth rate. In the case of a negative extreme event, a smaller growth rate can model the long-term impact caused by the slowdown. Similarly, a positive extreme event can be modeled with a larger growth rate to capture the long-term impact of the rapid growth. With signals of a fast recovery in total daily consumption for most regions, we elected to disregard projections that model a long-term COVID-19 pandemic impact to avoid confirmation bias. Moreover, there is little data to support projections modeling a long-term impact on Indian energy consumption. We believe that the model and data presented in this paper are valid beyond the COVID-19 pandemic.\n\n\\section*{Usage Notes}\n\nThe format of the results is comma-separated values (CSV). All the results are available on the Zenodo Open-Access repository \\cite{marc_barbar_2020_4564581}.
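\n\nFor example, a single scenario file can be loaded with pandas as sketched below (the path follows the hierarchy described in the Data Records section; the column layout is given in Table \\ref{table:headers}):\n\\begin{verbatim}\n# Load the summary file for slow GDP growth, home EV charging,\n# efficient cooling, Southern Region.\nimport pandas as pd\n\nsummary = pd.read_csv('slow\/home\/efficient\/summary\/SR.csv')\nprint(summary.head())  # one row per projected year\n\\end{verbatim}\n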
\n\n\\section*{Code availability}\n\nThe code used to generate the datasets is open-sourced in a GitHub repository \\cite{git}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n \\input{tex\/11_intro}\n\n\\section{Background}\n\t\\subsection{Generative Design}\\label{sec:sec31}\t\n\t \\input{tex\/21_gd}\n\t\\subsection{Exploration and Optimization with Non-Objective Criteria}\\label{sec:sec32}\t\n \\input{tex\/22_mcx}\n\n\\section{Method}\n \\input{tex\/31_method}\n \t \n\\section{Benchmarks}\n \\subsection{Setup}\n \\input{tex\/41_exp1_setup}\n \\subsection{Result} \n \\input{tex\/42_exp1_result}\n\\section{Case Study}\n \\subsection{Setup}\n \\input{tex\/51_exp2_setup}\n \\subsection{Result}\n \\input{tex\/52_exp2_result}\n \n\\section{Discussion}\n \\input{tex\/60_discussion}\n \n\\section{Acknowledgements}\n \\input{tex\/62_ack} \n \n\\section{Supplemental Material and Code}\nSupplemental material and code available at:\\\\ \\href{https:\/\/github.com\/agaier\/tdomino_ppsn}{https:\/\/github.com\/agaier\/tdomino\\_ppsn}\n\n\n\n\\subsubsection{Benchmark Functions}\\hfill\\\\\n\\indent\\textit{RastriginMOO}. To judge the performance of T-DominO on Multi-Objective QD problems, we test on a version of RastriginMOO as introduced in~\\cite{moo_qd}. The Rastrigin function is a classic optimization benchmark, often used to test QD algorithms because it contains many local minima~\\cite{cmame,cully2021multi}. Here it is converted into a multiobjective benchmark by optimizing a pair of Rastrigin functions with shifted centers. We use a 10-D version with constants added so that every discovered bin has a positive effect on the aggregate QD Score. These objectives can be explicitly defined as:\n\\begin{align}\n \\begin{cases}\n f_1(\\mathbf{x}) = 200 - (\\sum\\limits_{i=1}^n [(x_i - \\textcolor{blue}{\\lambda_1})^2 - 10\\cos (2\\pi (x_i - \\textcolor{blue}{\\lambda_1}))]) \\\\\n f_2(\\mathbf{x}) = 200 - (\\sum\\limits_{i=1}^n [(x_i - \\textcolor{blue}{\\lambda_2})^2 - 10\\cos (2\\pi (x_i - \\textcolor{blue}{\\lambda_2}))])\n \\end{cases} \n\\end{align}\nwhere $\\lambda_1 = 0.0$ and $\\lambda_2 = 2.2$ for $f_1$ and $f_2$. All parameters are limited to the range $[-2, 2]$, with the feature space defined by the first two parameters.\n\n\\textit{ZDT3}.\nWhen spread across the objective space is desired, the objectives themselves could be used as features. This use case is demonstrated with the ZDT3 benchmark, a 30-variable problem from the ZDT MOO benchmark problem suite~\\cite{zitzler2000comparison} whose hallmark is a set of disconnected Pareto-optimal fronts, and whose first parameter is the value of the first objective. Parameter ranges span 0-1, with the first two parameters used as features, enforcing a spread of solutions across the range of the first objective.\n\n\\textit{DTLZ3}. To illustrate T-DominO's bias toward balanced solutions, we analyze its performance on DTLZ3, a many-objective benchmark with a tunable number of objectives and variables~\\cite{deb2002scalable}. We test with 10 parameters and 5 objectives, with the 6th and 7th parameters used as features.\\footnote{The first $n$ parameters are explicitly linked to the first $n$ objectives as in ZDT3; later parameters are used to avoid explicitly exploring the objective space.} 
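\n\nFor reference, a short Python transcription of this objective pair (matching the equations above, with $n=10$, $\\lambda_1 = 0.0$, and $\\lambda_2 = 2.2$) is given below; it is an illustrative sketch, not the experiment code.\n\\begin{verbatim}\n# Shifted-Rastrigin objective pair used in RastriginMOO.\nimport numpy as np\n\ndef rastrigin_moo(x, lam1=0.0, lam2=2.2):\n    # Returns (f1, f2) for a single solution x, with x_i in [-2, 2].\n    x = np.asarray(x, dtype=float)\n    def f(lam):\n        z = x - lam\n        return 200.0 - np.sum(z**2 - 10.0 * np.cos(2.0 * np.pi * z))\n    return f(lam1), f(lam2)\n\n# The first two entries of x are used as the feature descriptor.\nprint(rastrigin_moo(np.zeros(10)))  # evaluated at the f1 optimum (x = 0)\n\\end{verbatim}\n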
\n\n\n\\subsubsection{Baseline Approaches}\\hfill\\\\\n\\indent\\textit{ME Single.} MAP-Elites~\\cite{mapelites} optimizing only a single objective is used to establish upper and lower bounds on the performance we can expect from MAP-Elites. Blind to the second objective, we can expect it to find the top-performing solutions for the first. Equally important, the exploration of all bins without regard to performance on the second objective establishes a floor for performance: the performance we could expect simply for having any solution in the bin.\n\n\n\\textit{ME Sum.} \nWe compare the T-DominO objective against MAP-Elites~\\cite{mapelites} using the most naive way of combining multiple objectives: simply adding them. Our benchmarks all have well-scaled objectives, but this is typically not the case in practice. To simulate this difficulty, we use a weighted sum, with each additional objective's values increased by an order of magnitude (e.g., $\\times$1, $\\times$10, $\\times$100, ...). \n\n\\textit{NSGA-II.}\nNSGA-II~\\cite{nsga2} is used as a benchmark for conventional multi-objective optimization without feature space exploration, reaching near the Pareto front on these simple benchmarks. Though it is not our goal to compete with MOO algorithms, they provide a useful metric to contextualize the difference between exploratory approaches and pure optimizers.\n\n\\subsubsection{Settings.} In all MAP-Elites approaches, the feature space is partitioned into a 20x20 grid, with 2 CMA-ME improvement emitters~\\cite{cmame} performing optimization. T-DominO was computed using neighbors up to 4 bins away, with a history of the 10 most recent elites in each bin. Hyperparameters for NSGA-II were kept comparable: a population of 400 matched the 400 bins of the MAP-Elites grids, with the same number of new solutions generated per generation for the same number of generations. A standard implementation of NSGA-II from the PyMoo library~\\cite{pymoo} is used, as well as the library's formulations of the ZDT3 and DTLZ benchmarks, whose exact formulations are included in the Supplemental \\ref{sssec:zdt3}. The PyRibs~\\cite{pyribs} library was used as a basis for all MAP-Elites experiments, with T-DominO implemented as a specialized archive type. All experiments were replicated 30 times; additional plots are provided in the Supplemental.\n\n\n\n\\section{Supplemental Material}\nUpon publication, all supplemental material, along with all source code used to produce the results in this paper, will be published online.\n\n\n\\subsection{Building Layout Objectives, Features, and Constraints}\n\\input{tex\/53_exp2_table}\n\n\\newpage\n\\subsection{Wave Function Collapse Tiles Set and Seed Examples}\n\\input{tex\/fig_supp_01_tiles}\n\n\\newpage\n\\subsection{Single Building Example Outputs of Wave Function Collapse}\n\\input{tex\/fig_supp_02_example}\n\n\\newpage\n\\subsection{QD Score}\n\\input{tex\/fig_supp_03_qdscore}\n\n\\newpage\n\\subsection{MOO Benchmark Functions}\n\\input{tex\/supp_moo_obj}\n\n\n\\subsubsection{ZDT3}\\label{sssec:zdt3}\nThe ZDT3 benchmark objective function is defined as:\n\n$\n\\begin{aligned}\nf_{1}(x) &=x_{1} \\\\\ng(x) &=1+\\frac{9}{n-1} \\sum_{i=2}^{n} x_{i} \\\\\nh\\left(f_{1}, g\\right) &=1-\\sqrt{f_{1} \/ g}-\\left(f_{1} \/ g\\right) \\sin \\left(10 \\pi f_{1}\\right) \\\\\nf_{2}(x) &=g(x) \\, h\\left(f_{1}, g\\right) \\\\\n0 & \\leq x_{i} \\leq 1 \\quad i=1, \\ldots, n\n\\end{aligned}\n$\n\n\n\\subsubsection{DTLZ3}\\label{sssec:dltz3}\nThe DTLZ3 benchmark objective function is defined as:\n\nMin. 
$f_{1}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\cos \\left(x_{1} \\pi \/ 2\\right) \\cdots \\cos \\left(x_{M-2} \\pi \/ 2\\right) \\cos \\left(x_{M-1} \\pi \/ 2\\right)$,\n\nMin. $f_{2}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\cos \\left(x_{1} \\pi \/ 2\\right) \\cdots \\cos \\left(x_{M-2} \\pi \/ 2\\right) \\sin \\left(x_{M-1} \\pi \/ 2\\right)$,\n\nMin. $f_{3}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\cos \\left(x_{1} \\pi \/ 2\\right) \\cdots \\sin \\left(x_{M-2} \\pi \/ 2\\right)$,\n\n$\\vdots \\quad \\vdots$\n\nMin. $f_{M}(\\mathbf{x})=\\left(1+g\\left(\\mathbf{x}_{M}\\right)\\right) \\sin \\left(x_{1} \\pi \/ 2\\right)$,\n\nwith $g\\left(\\mathbf{x}_{M}\\right)=100\\left[\\left|\\mathbf{x}_{M}\\right|+\\sum_{x_{i} \\in \\mathbf{x}_{M}}\\left(x_{i}-0.5\\right)^{2}-\\cos \\left(20 \\pi\\left(x_{i}-0.5\\right)\\right)\\right]$,\n$0 \\leq x_{i} \\leq 1, \\quad$ for $i=1,2, \\ldots, n$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}