diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgorf" "b/data_all_eng_slimpj/shuffled/split2/finalzzgorf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgorf" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{FingerAndAttack.pdf} \n \\caption{Different attacks on the fingerprint recognition systems shown as photographs~\\cite{9}, and as fingerprints~\\cite{LivDet2015}).}\n \\label{fig:fingersandattacks}\n\\end{figure}\nBiometrics based authentication systems provide more security than traditional information security-based systems based on passwords\/Personal Identification Number (PINs), and keys\/cards~\\cite{1}. The primary limitations with traditional information security methods are that they lack good user experience, using the same security measure with multiple applications, and forgetting\/losing the password\/PINs~\\cite{2}. Especially for keys\/cards, they can be duplicated apart from the previously mentioned limitations. Since biometric systems are based on human characteristics such as the face, fingerprint, or iris, which are unique for every individual, they have a definite advantage over information security-based systems. Due to these advantages, biometric systems are widely deployed in smartphones, border control (both in automated, and attended scenarios), and national identity cards. However, biometric systems are vulnerable to Presentation Attacks (PA)~\\cite{3}, due to which some crimes have been reported in the media, where the biometric systems were spoofed~\\cite{4,5,6}. An attacker can perform the attack on the biometric system by presenting a biometric artefact or a Presentation Attack Instruments (PAIs) ~\\cite{RaghuSurvey}. PA can be performed in different biometric modalities, including the face, fingerprint, and iris. Since fingerprint recognition systems are widely deployed in critical security systems, it is essential to develop fingerprint PAD.\n\n\nPAIs for fingerprint can either be an artificial object such as a gummy finger (made from play-doh, silicone, or gelatine) or a 2D\/3D printed photo. In terms of implementation, PAD systems can be either a hardware-based or a software-based, whose main task is to distinguish between a real (bona fide) user or a malicious (imposter) attacker~\\cite{8}.\nA summary of existing fingerprint PAD methods can be found in Marcel et al.~\\cite{10}, Marasco et al.~\\cite{RossSurveyFPAD}, Galbally et al.~\\cite{Galbally2019}, and Sousedik et al.~\\cite{16}. In the current scenario, the majority of the existing PAD methods consist of training a classifier to accurately model the characteristics of the PAI. However, such an approach suffers from the problem of generalization to detect unknown attacks~\\cite{10}. Thus, developing a reliable PAD technique for unknown attacks is a significant problem that can also be posed as anomaly (outlier) detection. Fingerprint recognition systems have been widely deployed, as mentioned earlier, and are prone to PA. 
Since the attacks cannot be listed in advance, detecting unknown attacks on fingerprints is critical.\nOur survey on fingerprint Presentation Attack Detection (FPAD) presents the following:\n\\begin{itemize}\n \\item Comprehensive survey of existing methods for FPAD for unknown attacks.\n \\item Categorization of existing methods for the FPAD of unknown attacks.\n \\item Discussion on advantages\/disadvantages of existing methods for FPAD, especially for unknown attacks.\n \\item Concluding remarks with future directions for the area of FPAD.\n\\end{itemize}\n\nIn the rest of the paper, we first present a comparison between traditional PAD done in a supervised manner and anomaly detection based FPAD in Section~\\ref{sec:2}; it is followed by Section~\\ref{sec:3}, summarizing related work in FPAD, including its categorization and the advantages and disadvantages of the methods in terms of generalization; finally, we present conclusions \\& future directions for FPAD in Section~\\ref{sec:4}.\n\n\n\\section{Traditional PAD \\& Anomaly Detection based PAD}\n\\label{sec:2}\n\\begin{figure}[htbp!]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{TaxonomyFigure.pdf} \n \\caption{Illustration of taxonomy for fingerprint presentation attack detection.}\n \\label{fig:taxonomyfigure}\n\\end{figure}\n\\begin{table}[t!]\n\\centering\n\\resizebox{1\\linewidth}{!}\n{\\begin{tabular}{|p{2.0cm}|l p{4.3cm}|l p{4.3cm}|}\n\\hline\n & & {\\bf{Traditional PAD}} & & {\\bf{Anomaly detection based PAD}} \\\\\n\\hline\n{\\bf{Characteristics}} &-& Information about PAIs is gathered and known in advance. & - & Establishes profiles of normality features which are extracted from regular data. \\\\\n &-& Looks for PAI features each time a presentation occurs. & - & Compares the normality features of each new presentation against the established profiles. \\\\\n&-& Alerts for PA if any PAI is found in the new presentation. & - & Alerts for PA if a deviation from normality is detected in the new presentation based on a threshold. \\\\\n\n\\hline\n{\\bf{Advantages}} &-& Possibility to detect known PAs. &-& Possibility to detect known and unknown PAs. \\\\\n &-& There is a possibility of using existing knowledge to recognize new forms of old PAs. &-& Does not depend on the PAI used during the attack. \\\\\n\n\\hline\n{\\bf{Drawbacks}} &-& For each novel PA, PAD methods should be updated and tested with the new PAI. &-& Hard to define a profile of normality features for each bona fide presentation. \\\\\n &-& As the number of PAs, and correspondingly of PAIs, increases, the complexity of PAD increases. &-& Higher false-positive rate for PAs, depending on accessibility or usability. \\\\\n &-& Hard to detect previously unseen PAs. &-& Hard to set the optimal threshold value for PAD.\\\\\n &-& Simple changes to the PAI in a known PA can be enough to miss the detection of the PA. &-& The set of normality features can be very large, which leads to a high false-positive rate.\\\\\n &-& A leak of the PAI list that a system maintains could help attackers bypass the system's PAD method. && \\\\\n\\hline\n\\end{tabular}}\n\\caption{Characteristics, advantages, and disadvantages of anomaly detection based PAD compared to traditional PAD for biometrics.\\label{tab1}}\n\\end{table}\nIn this section, we present a comparison between traditional PAD (a form of supervised classification) and anomaly detection (supervised\/unsupervised classification) based PAD, as shown in Figure~\\ref{fig:taxonomyfigure}. 
Since we are interested in unknown attack detection for fingerprints, this can be achieved by anomaly detection~\\cite{11}. We now briefly review anomaly detection in the following subsection:\n\\subsection{Anomaly Detection}\nAnomaly detection refers to the determination of irregularity in a dataset. The dataset contains a set of records (a.k.a. instances, objects, or entities), where each record includes a set of attributes (a.k.a. characteristics or features), as pointed out by Chandola et al.~\\cite{11}. In general, an anomaly detection method is provided with a record\/set of records as an input, where no information about either anomalies or regular classes is known to the detection method in advance~\\cite{14}.\nThe three modes of anomaly detection methods, according to Chandola et al.~\\cite{11}, are as follows:\n\\begin{itemize}\n \\item \\textit{\\bf{Supervised anomaly detection}}: \\\\\n Anomaly methods that are based on a predictive model trained on a labeled dataset of two classes (i.e., normal and anomaly records). Any unseen record is then compared against the model to determine whether it is a normal or an anomaly record. This can be achieved by using publicly available labeled fingerprint datasets for training. This form of anomaly detection is used in traditional PAD, and in FPAD for unknown attack detection where the sensor is known in advance. \n \\item \\textit{\\bf{Semi-supervised anomaly detection}}:\\\\\n Anomaly methods that are based on a single classifier trained using only normal behavior records from a dataset, as only those are labeled. This form of anomaly detection is used in FPAD for unknown attack detection with both known \\& unknown sensors.\n \\item \\textit{\\bf{Unsupervised anomaly detection}}:\\\\\n Anomaly methods that do not require training data; if training is applied, no records in the dataset are labeled. This mode is based on the assumption that regular records are far more frequent than anomalies in both training and testing datasets, and it can lead to a high false reject rate if this assumption is violated. This form of anomaly detection is used in FPAD for unknown attack detection with both known and unknown sensors.\n\\end{itemize}\nTable~\\ref{tab1} shows a description of traditional and anomaly detection based PAD. Theoretically, the main advantage of anomaly-based PAD methods over traditional methods is capturing both the known and the unknown PAs. In contrast, the traditional PAD methods can detect known PAs, and possibly new forms of these attacks. For instance, if a traditional PAD method is trained only to detect gummy fingers of play-doh, it may not detect gummy fingers of other materials like silicone or gelatine. This requires the traditional PAD methods to maintain a long list of PAIs gathered from known PAs, and the methods should be updated and re-trained each time a new unknown PA is revealed. Consequently, the list of PAIs and known PAs can become long and hard to maintain. Moreover, if an attacker gets access to the list of PAIs used to train a biometric system, the attacker will be able to conduct a PA using a novel PAI that is not known to the system.\n\nEven if anomaly PAD methods solve several drawbacks of traditional PAD methods, they come with high risks, implementation difficulties, and critical disadvantages. 
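Before discussing these difficulties in detail, the semi-supervised mode above can be made concrete with a minimal sketch (our illustration, not a method from the surveyed papers; the feature extraction step and the data variables are assumed to be given): a one-class SVM is fitted on bona fide feature vectors only, and presentations whose normality score falls below a threshold are flagged as PAs.\n\\begin{verbatim}\n# Semi-supervised PAD sketch: one-class SVM fitted on bona fide\n# feature vectors only; low normality scores are flagged as PAs.\nfrom sklearn.svm import OneClassSVM\n\n# bona_fide_train, test_samples: 2D arrays of feature vectors\n# (hypothetical inputs, e.g. texture descriptors of fingerprints)\ndetector = OneClassSVM(kernel=\"rbf\", gamma=\"scale\", nu=0.01)\ndetector.fit(bona_fide_train)\n\nscores = detector.decision_function(test_samples)\nthreshold = 0.0   # deviation-from-normality threshold\nis_attack = scores < threshold\n\\end{verbatim}\nThe choice of the threshold is precisely the difficulty discussed next: it trades false alarms on bona fide presentations against missed PAs.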
In general, it is difficult to define and extract all features of bona fide presentations (i.e., normality features), because these features can have a broad scope and thus become hard to use for an implementation of a PAD method. Moreover, the threshold used to distinguish between PAs and bona fide presentations is affected by accessibility or usability issues between the subject and the capture device, which makes it hard to define. Thus, the set of normality features will be large, and it may be necessary to prioritize some features over others during feature selection. Nevertheless, dimensionality reduction methods can be used to reduce the number of normality features. However, this will lead to more false-positive alarms, as the reduced normality features are not precise enough to distinguish between all the cases of PAs and bona fide presentations.\n\n\\section{Known \\& Unknown Presentation Attack Detection for fingerprints}\n\\label{sec:3}\n\\begin{table}[t!]\n \\centering\n \\resizebox{1\\linewidth}{!}\n {\\begin{tabular}{|l|l|l|l|l|l|l|l|}\n \\hline\n \\textbf{Ref.} & \\textbf{S\/H\/W} & \\textbf{Dataset} & \\textbf{Pre-processing} & \\textbf{Post-processing} & \\textbf{\\# unknown PAs} & \\textbf{A. D. methods} & \\textbf{A. D. mode}\\\\\n \\hline\n \\cite{fPA10} & S & LivDet 2009 & - & - & 1 & - & S \\\\ \\hline\n \n \\cite{fPA12} & S & LivDet 2011 & GLCM, HOG, BSIF & - & 2 & SVM, & S \\\\ \n & & & LPQ, LBP, BGP & & & Rule-based & \\\\ \\hline\n \n \\cite{fPA13} & S & LivDet 2011 & LBP & Score fusion & 4 & SVM & S \\\\ \\hline\n \n \\cite{fPA11} & S & LivDet 2011 & BSIF, LBP, LPQ & - & 3 & SVM & S \\\\ \\hline\n \n \\cite{fPA3} & S & LivDet 2011, & Image segmentation & - & 4 & CNN & S \\\\ \n & & LivDet 2013, & (part of CNN) & & & & \\\\ \n & & LivDet 2015 & & & & & \\\\ \\hline\n \n \\cite{fPA5} & S \\& H & Own dataset & ROI segmentation & - & 6 & Pre-trained CNN & S \\\\ \\hline\n \n \\cite{fPA6} & S \\& H & Own dataset & ROI segmentation & Score fusion & 3 & SVM & S \\\\ \\hline\n \n \\cite{fPA7} & S \\& H & Own dataset & ROI segmentation & Score fusion & 5 & SVM, & S \\\\\n & & & & & & CNN, & \\\\ \n & & & & & & Pre-trained CNN & \\\\ \\hline\n \n \\cite{fPA9} & S \\& H & Own dataset, & ROI segmentation, & Score fusion & 5 & SVM, & S \\\\ \n & & LivDet 2017 & RGB image creation & & & CNN, & \\\\ \n & & & & & & Pre-trained CNNs & \\\\ \\hline\n \n \\cite{fPA2} & S & MSU-FPAD, & Minutiae detection, & Score fusion & 6 & Pre-trained CNN & S \\\\ \n & & PBSKD & Patches creation, & & & & \\\\\n & & & Patches alignment & & & & \\\\ \\hline\n \n \\cite{fPA1} & S & LivDet 2011, & Dense-SIFT & Score fusion & $\\geq$ 8 & SVM, & S, U \\\\ \n & & LivDet 2013, & & & & K-means, & \\\\\n & & LivDet 2015, & & & & PCA & \\\\\n & & LivDet 2019 & & & & & \\\\ \\hline\n \n \\cite{fPA8}, & W & MSU-FPAD v2, & Patches extraction & Score fusion & 3 & Pre-trained CNN & S \\\\ \n \\cite{fPA8_1} & & LivDet 2015 & & & & & \\\\\n & & LivDet 2017 & & & & & \\\\ \\hline\n \n \n \\end{tabular}}\n \\caption{Overview of Fingerprint PAD using anomaly detection for unknown PAs. (where the abbreviations used are Anomaly Detection (A. D.), Software\/Hardware\/Wrapper (S\/H\/W), Supervised (S), Semi-Supervised (SS), and Unsupervised (U))\\label{tab2}}\n\\end{table}\nWe now review the related work for FPAD in general, and specifically for unknown attack detection of fingerprints. Many software and hardware PAD methods are presented in the literature to detect PAs against fingerprint recognition systems. 
PAs can be conducted using PAIs in 2D fingerprint forms (e.g., overlays), and additionally using 3D printed fingers~\\cite{16}. Software approaches make use of features extracted by standard sensing technologies, which can further be divided into static features (e.g., sweat pores and the texture of ridges and valleys) and dynamic features (e.g., skin color change over time due to pressure). Software approaches are usually cheaper to implement (as no extra hardware is needed) and less intrusive to the user~\\cite{Galbally2019}. Hardware approaches introduce a new device to the sensing technology to capture more details than standard sensors (e.g., fingerprint sweat, blood pressure, or odor). Keep in mind that hardware solutions are only used to capture data; they usually have associated software solutions that distinguish between bona fide presentations and PAs, which can either be built into the sensor or come as stand-alone software. So, in theory, if two different hardware approaches, as in \\cite{fPA7} and \\cite{EgySWIR}, use Short Wave Infrared (SWIR) and Laser Speckle Contrast Imaging (LSCI) techniques respectively, they can still process each other's datasets using the same software in their approaches. According to Galbally et al.~\\cite{Galbally2019}, a hardware-based approach achieves a higher fake detection rate than a software-based approach. This survey paper considers the type of approach (i.e., hardware and software) as a comparison factor, as shown in Table~\\ref{tab2}.\n\n\n\n\\subsection{Pre-processing techniques (Software-based)}\nWe now briefly review the pre-processing techniques in the literature attached to the PAD methods presented in Table~\\ref{tab2}. These can be texture-based descriptors such as Local Binary Patterns (LBP)~\\cite{LBPPaper}, Grey Level Co-occurrence Matrix (GLCM)~\\cite{GLCM}, Histogram of Oriented Gradients (HOG)~\\cite{HOGPaper}, Binary Statistical Image Features (BSIF)~\\cite{fPA12}, Local Phase Quantization (LPQ)~\\cite{LPQPaper}, Binary Gabor Patterns (BGP)~\\cite{LBPBGP}, Dense-SIFT~\\cite{fPA1}, or techniques such as image segmentation, Region of Interest (ROI) segmentation, or fingerprint minutiae detection.\n\\subsection{Convolutional Neural Network (Software-based)}\nWe now briefly review the deep learning-based approaches. Park et al.~\\cite{fPA3} presented a supervised software-based approach using a convolutional neural network (CNN), which was not designed for the PAD of unknown PAs. However, they tested the approach on the LivDet 2015 dataset, which contains four unknown PAs~\\cite{LivDet2015}. Their CNN takes the full image of a fingerprint and outputs a three-dimensional tensor that is used to determine the probability of the image being a bona fide or an attack presentation. The liveness probability is compared to an optimal threshold obtained from the training phase, and they achieved an average classification error of 1.5\\% for the unknown PAs. The usage of deep learning approaches has become a trend in the last decade, which is mainly due to the freely available pre-trained networks such as VGG~\\cite{VGGPaper}, GoogleNet~\\cite{GoogleNetPaper}, and ResNet~\\cite{ResnetPaper}. \nTolosana et al.~\\cite{fPA9} published a new experiment, where a PAD method relies on the use of SWIR and RGB images. Deep features from RGB images are extracted via two pre-trained CNNs, namely VGG19 and MobileNet, and a ResNet network trained from scratch. The features output by the CNNs are fed to an SVM. 
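Since this pattern of deep CNN features feeding an SVM recurs in several of the methods below, a brief sketch may be helpful (our illustration only, with hypothetical data variables; the surveyed papers each use their own networks and protocols):\n\\begin{verbatim}\n# Deep features from a pre-trained CNN, classified by an SVM.\nimport torch\nimport torchvision.models as models\nimport torchvision.transforms as T\nfrom sklearn.svm import SVC\n\nvgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)\nvgg.classifier = torch.nn.Identity()  # keep only deep features\nvgg.eval()\n\npreprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),\n                        T.Normalize(mean=[0.485, 0.456, 0.406],\n                                    std=[0.229, 0.224, 0.225])])\n\ndef deep_features(images):  # images: list of RGB PIL images\n    batch = torch.stack([preprocess(im) for im in images])\n    with torch.no_grad():\n        return vgg(batch).numpy()\n\n# train_imgs, train_labels (1 = bona fide, 0 = PA): hypothetical data\nsvm = SVC(probability=True).fit(deep_features(train_imgs), train_labels)\np_bona_fide = svm.predict_proba(deep_features(test_imgs))[:, 1]\n\\end{verbatim}\nScore-level fusion, used by several methods below, then combines such per-channel scores by a weighted sum.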
Additionally, handcrafted features such as spectral signatures were extracted from SWIR images. For the final evaluation, score fusion was applied, and the reported D-EER for this experiment was 1.36\\%.\n\n\n\\subsection{Known Sensor \\& Known Attacks}\nMarasco et al.~\\cite{RossSurveyFPAD} provided an overview of PAD methods in the literature for fingerprint recognition systems, and they specifically point out that commercial fingerprint recognition systems can be spoofed. Most of these approaches test their performance on a test dataset with the same PAs as used during the training. Thus, these PAs are considered known to the PAD method, which is a less realistic scenario than a real-world environment setup where additional PAIs may be used to conduct PAs (i.e., unknown attacks). \n\\subsection{Known Sensor \\& Unknown Attacks}\nTo the best of our knowledge, Tan et al.~\\cite{fPA00} were the first to point to the effect of environmental conditions and new PAI materials on PAD methods for fingerprints. They showed that new PAIs increase the error rate by at least 14\\% and up to 55.6\\% on different fingerprint scanners such as Identix, Crossmatch, and Digital Persona. Moreover, their experiment showed that the error rate drops back into an acceptable range once the new PAIs are used in the training phase. This was later confirmed by Marasco et al. in~\\cite{fPA10}, in which they demonstrated the increase of the spoof detection error rates of five fingerprint liveness detection methods (given by Marasco et al.~\\cite{1of5}, Moon et al.~\\cite{2of5}, Shankar et al.~\\cite{3of5}, Abhyankar et al.~\\cite{4of5}, and Tan et al.~\\cite{5of5}) when tested on new PAIs that were not used during training. Marasco et al.~\\cite{fPA10} used the leave-one-out approach in their experiment, where only one PAI out of gelatine, play-doh, and silicone is used for testing, and the other two are used for training. As they train the PAD methods using both PAs and bona fide presentations, their approach can be classified as a supervised anomaly detection approach.\nTo solve the problem of unknown PAIs, Rattani et al.~\\cite{fPA12} proposed a scheme for automatic detection and adaptation of the liveness detector to new PAIs. Their liveness detection is a combination of a multiclass-SVM and rule-based approaches that form AdaBoost-based classifiers~\\cite{ada}. The AdaBoost classifiers are used to detect novel PAs, and the new PAIs used in each attack, followed by a binary classification SVM that corresponds to live and spoof classes, where the thresholds are maintained by the multi-class SVM. In a case where a novel PA is presented to the detector, two rules apply to determine whether the PA is novel or already known. The first rule computes the maximum posterior probabilities for each known PA and bona fide class. A PA is considered novel if it exceeds a defined threshold; otherwise it is regarded as a known PA and assigned to the corresponding class. The second rule estimates the standard deviation of the posterior probabilities computed in the first rule. A low standard deviation value indicates doubt in classifying the PA as known.\nAdditionally, they state that their PAD method can update the maintained binary classification SVM automatically, so that it is always trained on the known PAI materials. This method is considered supervised because two out of four materials in the LivDet 2011 dataset were used for training (i.e., two known PAIs and two unknown PAIs). 
The published results reported up to 46\\% improvement in detecting unknown PAIs. Rattani et al.~\\cite{fPA13} published a study where they tried to reduce the material-specific noise and apply a software-based method that learns the general artifacts in images of PAIs that correspond to different materials. This is done by using two SVMs that combine linear filtering and non-linear denoising using wavelet decomposition of PAIs on an LBP-based textural-analysis liveness detector. Their experimental results showed up to 44\\% improvement in detecting unknown PAs on the LivDet 2011 dataset. The training phase during the experiment is done using one material out of five. Thus, the method is tested on four unknown attacks. Rattani et al.~\\cite{fPA12} also used a Weibull-calibrated SVM (W-SVM), which can be used both for the detection of liveness and spoofs, and for the discovery of novel PAs and PAIs. Also, they claim that W-SVM supports interoperability between individual detectors. The results show 44\\% improvement in detecting novel materials on the LivDet 2011 dataset. Tolosana et al.~\\cite{fPA5} used a VGG pre-trained network as a PAD method in the finger recognition system. They use Short Wave Infrared (SWIR) images, since the skin reflection within the SWIR spectrum of 900\u20131700 nm is independent of the skin tone, as analyzed by the National Institute of Standards and Technology (NIST). Thus, they used a hardware sensor approach to capture SWIR images of bona fide presentations and PAs (i.e., their own dataset), and a software-based approach for PAD. A total number of six unknown PAIs were detected by their PAD method, yielding a convenient and secure supervised PAD method. The same hardware developed by~\\cite{fPA5} is capable of capturing finger vein images (i.e., Visible Light Images, VIS) and speckle contrast images (LSCI) in addition to SWIR images.\nGomez-Barrero et al.~\\cite{fPA6} proposed a multi-modal finger PAD method where they use different ad-hoc approaches in parallel for each image type, and several SVM classifiers are set to output a score for each ad-hoc approach, where the final score is given by the weighted sum of all individual scores obtained. The evaluation in this approach is applied to both known and unknown PAIs (35 in total, three of them unknown), resulting in a Detection Equal Error Rate (D-EER) of 2.7\\%. Gomez-Barrero et al. proposed another multi-modal approach~\\cite{fPA7}, in which the proposed PAD method relies on a weighted sum of two CNN networks based on SWIR images and textural and gradient information from averaged LSCI images. They applied a pre-trained VGG19 network and a ResNet network that was trained from scratch for the CNNs. The textural and gradient information extracted from averaged LSCI images is passed into three SVMs for classification. They used the dataset from~\\cite{fPA6}, increasing the number of unknown attacks to five PAIs, and reported a decrease in the D-EER from 2.7\\% to 0.5\\%. Chugh et al.~\\cite{fPA2} proposed a software-based FPAD method that generalizes to PAIs not seen during training. They studied the characteristics of twelve different PAIs and bona fide presentations using deep features extracted by a pre-trained CNN, namely MobileNetv1.\nFurther, they applied an agglomerative clustering based on the shared characteristics of the PAIs. 
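A minimal sketch of this clustering step (our illustration; the per-material feature arrays are hypothetical, and we use six clusters to mirror the conclusion below):\n\\begin{verbatim}\n# Group PAI materials by the similarity of their mean deep-feature\n# vectors; one representative per cluster then forms a training set.\nimport numpy as np\nfrom sklearn.cluster import AgglomerativeClustering\n\n# feats: dict mapping material name -> array of deep feature vectors\nmaterials = sorted(feats)\ncentroids = np.array([feats[m].mean(axis=0) for m in materials])\n\nclust = AgglomerativeClustering(n_clusters=6, linkage=\"average\")\nfor m, c in zip(materials, clust.fit_predict(centroids)):\n    print(m, \"-> cluster\", c)\n\\end{verbatim}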
Thus, they concluded that a subset of PAIs, namely silicone, 2D paper, play-doh, gelatine, latex body paint, and monster liquid latex, are essential PAIs to include during the training to achieve a robust PAD. An Android smartphone application is also presented, without a significant drop in performance from the original PAD method. They achieved a True Detection Rate (TDR) of 95.7\\% and a False Detection Rate (FDR) of 0.2\\% when the generalization set is used for training (i.e., the six PAIs). \n\n \n \n\\subsection{Unknown Sensor \\& Unknown Attacks}\nRattani et al.~\\cite{fPA11} declared the need for fingerprint PA detection to be considered as an open set recognition problem, in which only incomplete knowledge about PAs and PAIs is available to the PAD method during training. Therefore, they adopted W-SVM, which uses recent advances in extreme value theory statistics for machine learning to directly address the risk of anomalies in an open set recognition problem. Ding et al.~\\cite{Ross2016} proposed the use of an ensemble of One-Class Support Vector Machines (OC-SVM) using bona fide samples to generate a hypersphere boundary, which is refined by a small number of spoof samples, for the classification of unknown PAIs. Jain et al.~\\cite{JainOneClass19} developed a one-class classifier that is based on learning bona fide samples using multiple Generative Adversarial Networks (GANs), and it can reject any PAI. Gonz{\\'a}lez-Soler et al.~\\cite{fPA1} proposed a software-based PAD method and achieved an overall accuracy of 96.17\\% in the LivDet 2019 competition. This method relied on three image representation approaches, which combine both local and global information of the fingerprint, namely Bag-of-Words (BoW)~\\cite{bag}, Fisher Vector (FV)~\\cite{fisher}, and Vector of Locally Aggregated Descriptors (VLAD)~\\cite{vlad}. They computed Dense-SIFT descriptors at different scales, and the features are then encoded, using the previously mentioned image representation approaches, with a previously learned visual vocabulary. A linear SVM classifier is applied to classify the fingerprint descriptor in each method. A weighted sum computes the final decision score. The BoW approach clusters local features with K-means and presents them as a pyramid of spatial histograms. The FV approach is based on statistical and spectral techniques, where a Gaussian Mixture Model (GMM) locates local features that lie under the same distribution. Then, these features are presented in a lower dimension via Principal Component Analysis (PCA). The VLAD approach, on the other hand, relies on non-probabilistic techniques and is used to reduce the high-dimensional image representation in BoW and FV. They experimented with both scenarios of their PAD method, namely the supervised (i.e., known PAs) and unsupervised (i.e., unknown PAs) scenarios. Chugh et al.~\\cite{fPA8} and Gajawada et al.~\\cite{fPA8_1} present a wrapper that can be adopted by any fingerprint PAD method to improve the generalization performance of the PAD method against unknown PAIs. These approaches are based on synthesizing fingerprint images that correspond to unknown PAs, as well as bona fide images. The goal is to transfer the characteristics into a deep feature space so that a more precise representation helps the PAD method increase its generalization performance. The method is based on multiple pre-trained VGG19 CNNs that encode and decode the content and style loss of the synthesized images, which can be further used to train the PAD method. 
They use the same PAD software method as done by Chugh et al.~\\cite{fPA2} to experiment with the wrapper. Moreover, this approach is a supervised method in which they use the leave-one-out technique on each PAI for MSU-FPADv2, where the other PAIs are known in training. On the other hand, in the LivDet 2017 dataset, three PAIs were considered unknown.\n \n\n\n\n\n\n\n\\section{Conclusions \\& Future Directions}\n\\label{sec:4}\nThis survey paper presented unknown attack detection for fingerprints, including a survey of existing methods summarized \\& categorized in Table~\\ref{tab2}; additionally, a taxonomy of FPAD was presented in Figure~\\ref{fig:taxonomyfigure}. Currently, most unknown attack detection methods for fingerprints address the problem of known sensors and unknown PAIs, and there are only a few methods that handle unknown sensors and unknown PAIs, including cross-dataset evaluation. \\\\\nUnknown attack detection with unknown sensors is a relatively new area of research for FPAD and should be a focus area in the near future. The first approach to solving it is synthesis, as done by Jain et al.~\\cite{JainOneClass19}. The second approach is to arrive at a common deep-feature representation, such as the one used by Gonz{\\'a}lez-Soler et al.~\\cite{fPA1}. The challenge in the synthesis-based approach is to achieve high-quality synthesis of the bona fide samples, and the difficulty in arriving at a common deep-feature representation is the degree of invariance it can provide to sensor type and PAI.\n\\balance\n{\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and preliminaries}\n\n\n\n\n\nLet $[n]=\\{1,\\dots,n\\}$ be our underlying set. If $F\\subseteq [n]$,\nthen $\\overline{F}$ denotes the complement of $F$. Let\n$\\mathcal{F}$ be a family of subsets of $[n]$\n(i.e. $\\mathcal{F}\\subseteq 2^{[n]}$). Let $\\overline{\\cF}:=\\{F\\subset [n]: \\overline{F}\\in\\cF\\}$. A family is called \\emph{intersecting} if any\ntwo members have non-empty intersection. Intersecting families of sets have attracted a lot of researchers, see e.g. Chapter 2 of the book \\cite{book}. Let us start with a well-known and trivial statement.\n\n\\begin{prop} The maximum size of an intersecting family is\n$2^{n-1}$.\n\n\\end{prop}\n\nThe maximum size is achieved e.g. by the family of all subsets containing a given fixed element. A family is called \\emph{$k$-uniform} if all its members have cardinality $k$. Let $\\mathcal{F}_k$ denote the subfamily of the\n$k$-element subsets in $\\mathcal{F}$:\\, $\\mathcal{F}_k=\\{F:\nF\\in\\mathcal{F}, |F|=k\\}$.\n\n\\begin{thm}[Erd\\H{o}s, Ko, Rado \\cite{ekr}] Let $k \\le n\/2$. Then the maximum size of a $k$-uniform intersecting family is $\\binom{n-1}{k-1}$.\n\n\\end{thm}\n\nLet us call an intersecting family \\emph{trivial} if all its members contain a given fixed element, and non-trivial otherwise. The maximum in the above theorem is again achieved by the largest trivial intersecting family.\n\n\\begin{thm}[Hilton, Milner \\cite{hm}] Let $k \\le n\/2$. Then the maximum size of a non-trivial $k$-uniform intersecting family is $1+\\binom{n-1}{k-1}-\\binom{n-k-1}{k-1}$.\n\n\\end{thm}\n\nThe maximum is given by the Hilton-Milner type family $HM(k)$, which we define next. $HM(k)$ contains $A=\\{2,\\dots,k+1\\}$ and every $k$-element set which contains $1$ and intersects $A$. Moreover, Hilton and Milner \\cite{hm} also showed that $HM(k)$ is the unique maximum if $3<k<n\/2$. We will also use the following asymmetric variant: for $i+m\\le n$, let $HM(i,m)$ denote the family of those $i$-element sets that contain $n$ and intersect $[m]$; in particular, $HM(k,k)$ together with $[k]$ is a relabeled copy of $HM(k)$.\n\nWe will need the Kruskal-Katona shadow theorem. The \\emph{shadow} $\\Delta\\cF$ of a $k$-uniform family $\\cF$ is the family of those $(k-1)$-element sets that are contained in at least one member of $\\cF$. In the \\emph{colexicographic (colex) order}, a set $F$ precedes a set $G$ if the largest element of their symmetric difference belongs to $G$; let $\\cC_k^\\ell$ denote the family of the first $\\ell$ $k$-element sets in the colex order. Every positive integer $\\ell$ can be written in a unique way as $\\ell=\\binom{n_k}{k}+\\binom{n_{k-1}}{k-1}+\\dots+\\binom{n_j}{j}$, where $n_k>n_{k-1}>\\dots>n_j\\ge j\\ge 1$. 
This form is called the \\emph{cascade form} of $\\ell$. The cascade form can be found in a greedy way: we pick the largest $n_k$ such that $\\binom{n_k}{k}\\le \\ell$, then the largest $n_{k-1}$ such that $\\binom{n_k}{k}+\\binom{n_{k-1}}{k-1}\\le \\ell$, and so on.\n\nThe Kruskal-Katona shadow theorem \\cite{kat,kru} states that if $\\cF$ is a $k$-uniform family with $|\\cF|=\\ell$, then $|\\Delta\\cF|\\ge |\\Delta\\cC_k^\\ell|$. It is not hard to calculate the cardinality of $\\Delta\\cC_k^\\ell$: if\n$\\ell=\\binom{n_k}{k}+\\binom{n_{k-1}}{k-1}+\\dots+\\binom{n_j}{j}$, then $|\\Delta\\cC_k^\\ell|=\\binom{n_k}{k-1}+\\binom{n_{k-1}}{k-2}+\\dots+\\binom{n_j}{j-1}$.\n\nThere is a simpler version of the shadow theorem due to Lov\\'asz \\cite{lov}. It states that if $\\cF$ is a $k$-uniform family with $|\\cF|=\\binom{x}{k}$, then $|\\Delta\\cF|\\ge \\binom{x}{k-1}$. Here $x$ is not necessarily an integer and $\\binom{x}{k}$ is defined to\nbe $\\frac{x(x-1)\\ldots (x-k+1)}{k!}$. This is a weaker bound, but easier to use. We will use both versions of the shadow theorem later.\n\n\n\\subsection{Profile polytopes}\n\n\nThe profile polytopes were introduced by P.L. Erd\\H os, P. Frankl and\nG.O.H. Katona in \\cite{efk1}.\nRecall that $\\mathcal{F}_i$ denotes the subfamily of the\n$i$-element subsets in $\\mathcal{F}$. Its size $|\\mathcal{F}_i|$ is denoted\nby $f_i$. The vector ${\\bf p}(\\mathcal{F})=(f_0,f_1,\\dots,f_n)$ in\nthe $(n+1)$-dimensional Euclidean space $\\mathbb{R}^{n+1}$ is\ncalled the \\emph{profile} or \\emph{profile vector} of $\\mathcal{F}$.\n\n\n\nIf $\\Lambda$ is a finite set in $\\mathbb{R}^d$, its \\emph{convex\nhull} $\\conv(\\Lambda)$ is the set of all convex combinations of\nthe elements of $\\Lambda$. A point of $\\Lambda$ is an\n\\emph{extreme point} if it is not a convex combination of other\npoints of $\\Lambda$. It is easy to see that the convex hull of a\nset is equal to the convex hull of the extreme points of the set.\n\nLet $\\mathbf{A}$ be a class of families of subsets of $[n]$. We\ndenote by $\\Lambda(\\mathbf{A})$ the set of profiles of the\nfamilies belonging to $\\mathbf{A}$:\n\n\\[\\Lambda(\\mathbf{A})=\\{{\\bf p}(\\mathcal{F}):\n\\mathcal{F}\\in\\mathbf{A}\\}.\\]\n\nThe \\emph{profile\npolytope} of $\\mathbf{A}$ is $\\conv(\\Lambda(\\mathbf{A}))$.\nWe are interested in the extreme points of $\\Lambda(\\mathbf{A})$. We simply call them the extreme points of\n$\\mathbf{A}$.\n\n\n\n\n\n\n\n\nSuppose we are given a weight function\n$w:\\{0,\\dots,n\\}\\rightarrow\\mathbb{R}$, and the weight of a family\n$\\mathcal{F}$ is defined to be $\\sum_{F\\in\\mathcal{F}} w(|F|)$,\nwhich is equal to $\\sum_{i=0}^n w(i)f_i$. Usually we are\ninterested in the maximum of the weight of the families in a class\n$\\mathbf{A}$. So we want to maximize this sum, i.e. find a family\n$\\mathcal{F}_0 \\in \\mathbf{A}$ and an inequality $\\sum_{i=0}^n\nw(i)f_i=w(\\mathcal{F})\\le w(\\mathcal{F}_0)=c$. This is a linear\nfunction of the profile, and its maximum is always attained at an extreme point.\n\nGiven a class (or property) of families, the first natural question in extremal combinatorics is the maximum cardinality such a family can have. When it is answered, often some simple weight functions are considered and the maximum weight of such a family is studied. Determining the extreme points answers these questions for every (linear) weight function.\n\n\n\n\nP.L. Erd\\H os, P. Frankl and\nG.O.H. Katona \\cite{efk1} determined the\nextreme points of the intersecting Sperner\nfamilies. 
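Before continuing, the cascade machinery above can be illustrated with a small computational sketch (our own illustration, not part of the cited results), which computes the cascade form greedily and evaluates the corresponding Kruskal-Katona lower bound on the shadow:\n\\begin{verbatim}\n# Greedy cascade form of l for uniformity k, and the resulting\n# Kruskal-Katona lower bound on the shadow size.\nfrom math import comb\n\ndef cascade(l, k):\n    terms = []                      # list of pairs (n_j, j)\n    j = k\n    while l > 0 and j >= 1:\n        n = j\n        while comb(n + 1, j) <= l:  # largest n with comb(n, j) <= l\n            n += 1\n        terms.append((n, j))\n        l -= comb(n, j)\n        j -= 1\n    return terms\n\ndef shadow_lower_bound(l, k):\n    return sum(comb(n, j - 1) for n, j in cascade(l, k))\n\nprint(cascade(10, 3))             # [(5, 3)], since 10 = C(5,3)\nprint(shadow_lower_bound(10, 3))  # 10 = C(5,2)\n\\end{verbatim}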
In the next paper of the same authors~\\cite{efk2}, the extreme points of\nthe profile polytope of the intersecting families were determined. Now we define these. Let coordinate $i$ of ${\\bf a}$ be $0$ if $i<n\/2$, $\\binom{n-1}{i-1}$ if $i=n\/2$, and $\\binom{n}{i}$ if $i>n\/2$. Let $k \\le n\/2$. Coordinate $i$ of ${\\bf a}_k$ is $0$ if $i<k$, $\\binom{n-1}{i-1}$ if $k\\le i\\le n-k$, and $\\binom{n}{i}$ if $i>n-k$. Let $\\Gamma_a$ be the set of vectors that we can get from any of the vectors ${\\bf a}_k$ and ${\\bf a}$, if we replace an arbitrary set of coordinates by $0$. Note that if $n$ is even, then ${\\bf a}={\\bf a_{n\/2}}$.\n\n\n\\begin{thm}[P.L. Erd\\H{o}s, Frankl, Katona \\cite{efk2}]\\label{metszo} The set of extreme points of the intersecting families is $\\Gamma_a$.\n\\end{thm}\n\nThe corresponding intersecting families are the following. $\\mathcal{A}_k$ consists of the sets which have sizes at least $k$ and contain the element $n$, and of every other set which has size greater than $n-k$. $\\mathcal{A}$ consists of all the sets with size greater than $n\/2$, and the sets which have size $n\/2$ and contain $n$. These families are obviously intersecting, and their profile vectors are ${\\bf a}_k$ and ${\\bf a}$. We can delete full levels and the families are still intersecting; in the corresponding vectors some coordinates are changed to $0$.\n\nSince then several other classes of families have been considered, see e.g. \\cite{eng}, \\cite{gerbner}, generalizations have been studied \\cite{patk}, \\cite{patk2}, and profile polytopes were applied for counting subposets \\cite{gkp}. Note that most of the classes of families where the profile polytope has been studied are \\emph{hereditary}, i.e. if we remove some members of a family in the class, the resulting family still belongs to the class. This makes determining the extreme points easier, as we do not have to deal with negative weights, and all extreme points can be obtained by changing some coordinates of a few essential ones to $0$.\nHowever, in this paper we determine the extreme points of the non-trivial intersecting families, which is not a hereditary property. \n\nIn the next section we define what is needed to state our main theorem. We prove an important special case in Section \\ref{biz}, and finish the proof by a case analysis in Section \\ref{mainbiz}.\n\n\\section{The main theorem}\n\nLet us start with some simple observations. A non-trivial intersecting family cannot contain the empty set or a singleton. It might contain the full set, but that does not change the intersecting property, nor the non-triviality. This means that for a weight function $w$, if $w(n)>0$ then the maximum family contains the full set, and if $w(n)<0$ then it does not. Moreover, changing only $w(n)$ does not change the other parts of the maximum family, hence we can basically forget about coordinate $n$. More precisely, $(p_0,p_1, \\dots, p_{n-2}, p_{n-1},0)$ is an extreme point if and only if $(p_0,p_1, \\dots, p_{n-2}, p_{n-1},1)$ is an extreme point.\n\nNow we define several vectors, which are going to be the extreme points of the non-trivial intersecting families. Then we state our main theorem, and after that we show that these vectors indeed correspond to non-trivial intersecting families and are extreme points (note that for most classes of families where profile polytopes have been studied, these statements are trivial, but not for the non-trivial intersecting families). That part also makes it easier to understand where these definitions come from. All these vectors are in the $(n+1)$-dimensional Euclidean space, but coordinates 0, 1 and $n$ are always 0. 
Let $H\\subset \\{2,3,\\dots,n-2,n-1\\}$ be a nonempty set of indices, let $h$ be its smallest and $h'$ its largest element.\n\nLet ${\\bf b}_H=(b_0,\\dots,b_n)$ with\n\\begin{displaymath}\nb_i=\n\\left\\{ \\begin{array}{l l}\n0 & \\textrm{if\\\/ $i\\not\\in H$},\\\\\n|HM(i,h')| & \\textrm{if\\\/ $i\\in H$ and $i< h'$},\\\\\n|HM(i,h')|+1 & \\textrm{if\\\/ $i=h'$}.\\\\\n\\end{array}\n\\right.\n\\end{displaymath}\nLet $\\Gamma_b=\\{ {\\bf b}_H: h+h' \\le n\\}$.\n\nLet ${\\bf c}_H=(c_0,\\dots,c_n)$ with \n\\begin{displaymath}\nc_i=\n\\left\\{ \\begin{array}{l l}\n0 & \\textrm{if\\\/ $i\\not\\in H$},\\\\\n\\binom{n-1}{i-1} & \\textrm{if\\\/ $i\\in H$ and $i\\le n-h$},\\\\\n\\binom{n}{i} & \\textrm{otherwise}.\\\\\n\\end{array}\n\\right.\n\\end{displaymath}\nLet $\\Gamma_c=\\{ {\\bf c}_H: h+h' > n\\}$.\n\nLet ${\\bf d}_H=(d_0,\\dots,d_n)$ with\n\\begin{displaymath}\nd_i=\n\\left\\{ \\begin{array}{l l}\n0 & \\textrm{if\\\/ $i\\not\\in H$},\\\\\n|HM(i,h')| & \\textrm{if\\\/ $i\\in H$ and $i< h'$},\\\\\n1 & \\textrm{if\\\/ $i=h'$}.\\\\\n\\end{array}\n\\right.\n\\end{displaymath}\nLet $\\Gamma_d=\\{ {\\bf d}_H: |H|>1, h+h''\\le n\\}$, where $h''$ is the second largest element of $H$.\n\nLet us consider the set $P$ of vectors $(e_0,\\dots,e_n)$ satisfying the following properties.\n\n1, Every $e_i$ is a non-negative integer, $e_0=e_1=e_n=0$.\n\n2, $x:=\\sum_{i=2}^{n-1}e_i \\ge 3$.\n\n3, $\\sum_{i=2}^{n-1} ie_i \\le (x-1)n$.\n\nNow we show the connection between $P$ and non-trivial intersecting families. For two vectors ${\\bf p}=(p_0,\\dots,p_n)$ and ${\\bf p'}=(p_0',\\dots,p_n')$, we say that ${\\bf p'} \\le {\\bf p}$ if $p_i'\\le p_i$ for every $0\\le i\\le n$.\n\n\\begin{lem}\\label{ujabb}\n\n\\textbf{(i)} If a non-trivial intersecting family does not contain $[n]$, its profile is in $P$.\n\n\\textbf{(ii)} If ${\\bf p} \\in P$ and there is no ${\\bf p'} \\in P$ different from ${\\bf p}$ with ${\\bf p'} \\le {\\bf p}$, then ${\\bf p}$ is the profile of a non-trivial intersecting family.\n\n\\end{lem}\n\n\\begin{proof}\n\nTo show \\textbf{(i)}, observe that for the profile of a non-trivial intersecting family obviously $e_0=e_1=0$ holds, and also we need at least three members in the family, as any two members trivially intersect. The third property is needed because otherwise an element of the underlying set would be covered $x$ times, i.e. by every set, contradicting the non-triviality.\n\nLet us prove now \\textbf{(ii)}. We are given a vector ${\\bf p}$, and we are going to construct a non-trivial intersecting family $\\cF$ with profile ${\\bf p}$. Observe that ${\\bf p}$ shows how many $k$-element sets must be in the family for every $k$. Let us denote the sizes of the sets by $a_1,\\dots,a_\\ell$ in decreasing order. We choose the first (the largest) set $F_1$ of size $a_1$ arbitrarily. Let $B_i$ be the set of vertices which are not covered by all of the first $i$ sets $F_1,\\dots,F_i$ (only by at most $i-1$ of them); then $B_1=\\overline{F_1}$ and $B_i\\supseteq B_{i-1}$ for every $i>1$. We choose the second set $F_2$ of size $a_2$ in such a way that $F_2$ intersects $F_1$ and also $F_2$ contains $B_1$, if possible.\n\nIf it is not possible, then we claim that we have $x=3$. 
Indeed, in that case we have $a_1+a_2\\le n$, thus these two sets together with the next set $F_3$ of size $a_3$ already have their profile in $P$, which means no other set can be in the family, because of our assumption on the minimality of ${\\bf p}$.\nThen we pick $F_2$ of size $a_2$ such that it intersects $F_1$ in a single element, and then we pick $F_3$ of size $a_3$ such that it contains an element of $F_1\\setminus F_2$ and an element of $F_2\\setminus F_1$. This is doable as $a_3\\ge 2$. The resulting family is clearly non-trivial intersecting.\n\nIf $x>3$, we choose every $F_i$ of size $a_i$ in such a way that it contains $B_{i-1}$, if possible. Note that in this case it automatically intersects $F_1, \\dots, F_{i-1}$. Indeed, $F_i$ contains $B_1$, which is also contained in $F_2, \\dots, F_{i-1}$. $F_i$ also contains $B_2$, which intersects $F_1$ (we also use that $B_1$ and $B_2$ are not empty).\n\n\n\n\nNow assume that when we add a set $F_i$, it is too small to cover every vertex in $B_{i-1}$, i.e. $a_i< |B_{i-1}|$. Then $i=\\ell$, i.e. $F_i$ is the last set (as the resulting profile vector is in $P$). We have to choose $F_i$ in such a way that it intersects the other sets. As every vertex is covered at least $i-2$ times, all we have to do is to put an arbitrary vertex of $B_{i-1}$ in $F_i$; then the new set intersects all but at most one of the earlier sets, say $F_j$. We then also choose a vertex in $B_{i-1}$ contained in $F_j$, and then other vertices from $B_{i-1}$ arbitrarily. As only vertices in $B_{i-1}$ are used, no vertex is covered $i$ times, hence the family is non-trivial.\n\\end{proof}\n\nLet $\\Gamma_e$ be the set of the extreme points of $P$. Now we can state our main theorem.\n\n\n\n\n\n\n\n\n\\begin{thm}\\label{main}\n\nThe extreme points of the profile polytope of the non-trivial intersecting families are the elements of $\\Gamma_b\\cup\\Gamma_c\\cup\\Gamma_d\\cup\\Gamma_e$, and additionally the vectors we get from these if we change the last coordinate from $0$ to $1$.\n\n\n\n\\end{thm}\n\nTo prove this statement, we have to show that the points listed are indeed extreme points, and that there are no other extreme points. The first part is the easier task, and we will deal with it in the rest of this section.\nWe give an example non-trivial intersecting family for each of the vectors ${\\bf v}={\\bf v}_H\\in \\Gamma_b\\cup\\Gamma_c\\cup\\Gamma_d\\cup\\Gamma_e$ and also show that ${\\bf v}$ is an extreme point, by exhibiting a weight function such that ${\\bf v}$ is the unique maximum. \n\nLet us describe first the general approach to find such a weight function.\nWe start by assuming that if $i\\not\\in H$, then $w(i)$ is negative; moreover, $w(i)$ is so small compared to the other weights $w(j)$ that if a family contains even one $i$-element set, its total weight is negative. On the other hand, there is an index $i\\in H$ with $w(i)>0$, thus there is a family of positive weight. This shows that\nno $i$-element sets with $i\\not\\in H$ can be in the family of maximum weight. Similarly, we can say that for some $i\\in H$ its weight is very large compared to the other weights. It implies that the family of maximum weight contains as many $i$-element sets as possible, i.e. $|HM(i,j)|$, where $j$ is the largest index of a non-zero coordinate of ${\\bf v}_H$. 
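For small parameters, counts like $|HM(i,j)|$ are easy to verify by brute force; a short sketch (ours), based on the definition of $HM(i,m)$ given in the introduction:\n\\begin{verbatim}\n# |HM(i,m)|: the number of i-element subsets of [n] that contain n\n# and intersect [m], checked against the binomial formula.\nfrom itertools import combinations\nfrom math import comb\n\ndef hm_size(n, i, m):\n    return sum(1 for F in combinations(range(1, n + 1), i)\n               if n in F and any(x <= m for x in F))\n\nn, i, m = 8, 3, 4\nassert hm_size(n, i, m) == comb(n - 1, i - 1) - comb(n - m - 1, i - 1)\n\\end{verbatim}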
We describe these ideas in more detail in the proof of the following lemma.\n\n\n\n\\begin{lem}\\label{gammab} The elements of $\\Gamma_b$ are extreme points of the non-trivial intersecting families.\n\n\\end{lem}\n\n\\begin{proof} For ${\\bf b}_H\\in\\Gamma_b$ we have to show a family $\\mathcal{B}_H$ which has ${\\bf b}_H$ as profile, and a weight $w$ which is maximized at ${\\bf b}_H$. Let $\\mathcal{B}_H=[h']\\cup\\left(\\bigcup_{i\\in H}HM(i,h')\\right)$, i.e. the union of $HM(i,h')$ for every $i\\in H$, and additionally $[h']$. This family is obviously non-trivial intersecting, as each of its members except for $[h']$ contains $n$ and intersects $[h']$.\n\nNow we are going to show a weight function that is maximized only by families with profile ${\\bf b}_H$.\nLet $w$ be a weight such that if $i\\not\\in H$, then $w(i)=-2^{2n}$. It is going to be so small compared to the other weights, that no $i$-element sets can be in the maximum family $\\cF$. All other sets have weight at most $2^n$, and there are less than $2^n$ sets in $\\cF$, hence positive weight can only be achieved without these negative sets. Let $w(h)=2^n$; it is very large compared to the other positive weights (but still very small compared to the absolute value of the negative weights), and all other weights are 1. Then a single $h$-element set has larger weight than all the other sets with positive weight, thus the maximum family $\\cF$ contains as many $h$-element sets as possible. If $h<h'$, the remaining weights are then maximized by taking all the remaining members of $\\mathcal{B}_H$, hence ${\\bf b}_H$ is the unique maximum.\n\\end{proof}\n\nSimilar constructions show that the elements of $\\Gamma_c$, $\\Gamma_d$ and $\\Gamma_e$ are extreme points as well.\n\n\\section{Proof of the key lemma}\\label{biz}\n\nWe will use the following observation about non-trivial intersecting families.\n\n\\begin{lem}\\label{observ} If $\\mathcal{F}$ is a non-trivial intersecting family and $m$ is the maximum cardinality in $\\mathcal{F}$, then $f_i\\le |HM(i,m)|$ for every $i<m$.\n\\end{lem}\n\nIn this section we prove the statement of Theorem~\\ref{main} in the essential special case: when $m> n\/2$ and $i+m \\le n$. For other values of $i$ and $m$, it is going to be easy to see that Theorem \\ref{main} holds (we do it inside the proof of the main theorem in Section \\ref{mainbiz}). Thus, the lemma below contains the most complicated part of the proof.\n\n\\begin{lem}\\label{ketto} Let $(f_0, f_1, f_2, \\dots, f_n)$ be the profile vector of a non-trivial intersecting family $\\mathcal{F}$. Let us assume that $m$ is the maximum cardinality in $\\mathcal{F}$, $m> n\/2$ and $i+m \\le n$. Then there is a $0\\le \\lambda\\le 1$ such that $(f_i,f_m) \\le \\lambda (0, \\binom{n}{m})+(1-\\lambda)(|HM(i,m)|,|HM(m,m)|)$.\n\\end{lem}\n\n\nWe will use the following simple observations.\n\n\n\n\\begin{prop}\\label{triv}\n\\textbf{(i)} If $x\\le y$, then $\\binom{x}{k-1}\/\\binom{x}{k}\\ge \\binom{y}{k-1}\/\\binom{y}{k}$.\n\n\\textbf{(ii)} Let $0\\le c'$, $0<\\alpha,a,b,c,b'$ with $bc'\\le cb'$, $b\/c\\le\\alpha$ and $c\\ge c'$. Then \n\\[\\frac{\\alpha a+b}{a+c}\\le \\frac{\\alpha a+b'}{a+c'}.\\]\n\\end{prop}\n\n\n\\begin{proof} The first statement easily follows from the definition of $\\binom{x}{k}$.\n\n\nBy rearranging the desired inequality of $\\textbf{(ii)}$, we obtain the equivalent form $\\alpha ac'+ab+bc'\\le \\alpha ac+ab'+cb'$. Recall that we have $bc'\\le cb'$. The other terms can be rewritten as $\\frac{b-b'}{c-c'}\\le\\alpha$. We have $\\frac{b-b'}{c-c'}\\le\\frac{b-bc'\/c}{c-c'}=\\frac{b(c-c')\/c}{c-c'}=b\/c\\le\\alpha$.\n\\end{proof}\n\nNow we are ready to prove Lemma \\ref{ketto}.\n\n\\begin{proof}[Proof of Lemma \\ref{ketto}]\n\nWe use induction on $n-m-i$. Observe that for the base case $i+m=n$ we have that $HM(i,m)\\cup HM(m,m)$ consists of all the $i$-sets and $m$-sets containing $n$, except that it contains $[m]$ instead of its complement. Thus $HM(i,m)\\cup HM(m,m)$ has $\\binom{n}{i}$ members, just like any maximal non-trivially intersecting family on these two levels. 
Let us choose $\\lambda=\\frac{|HM(i,m)|-f_i}{|HM(i,m)|}$, then by definition $f_i\\le (1-\\lambda)|HM(i,m)|$, and we need \\[f_m\\le \\lambda\\binom{n}{m}+(1-\\lambda)|HM(m,m)|=\\binom{n}{m}-\\frac{f_i\\binom{n}{m}}{|HM(i,m)|}+\\frac{f_i|HM(m,m)|}{|HM(i,m)|}=\\binom{n}{m}-f_i.\\]\nThis holds for every intersecting family, even the trivial one. For non-trivial intersecting families, we have $f_i\\le |HM(i,m)|$ by Lemma \\ref{observ}, thus we have $\\lambda\\ge 0$, completing the proof of the base step.\n\nLet us continue with the induction step. Let us consider $\\nabla\\cF_i$, which is the shade of $\\cF_i$, and let $g_{i+1}=|\\nabla\\cF_i|$. Then $\\nabla\\cF_i\\cup\\cF_m$ is obviously non-trivially intersecting, thus by the induction hypothesis there is a $0\\le \\lambda\\le 1$ such that $(g_{i+1},f_m) \\le \\lambda (0, \\binom{n}{m})+(1-\\lambda)(|HM(i+1,m)|,|HM(m,m)|)$. We will show that the same $\\lambda$ works for $f_i$, i.e. $(f_i,f_m) \\le \\lambda (0, \\binom{n}{m})+(1-\\lambda)(|HM(i,m)|,|HM(m,m)|)$. As the values in coordinate $m$ do not change, all we need to prove is that $f_i\\le (1-\\lambda)|HM(i,m)|$ if $g_{i+1}\\le (1-\\lambda)|HM(i+1,m)|$. It is enough to show that $f_i\/|HM(i,m)|\\le g_{i+1}\/|HM(i+1,m)|$, or equivalently $g_{i+1}\/f_i\\ge |HM(i+1,m)|\/|HM(i,m)|$. As $HM(i+1,m)=\\nabla HM(i,m)$,\nthe last of the above inequalities means that the size of the shade of $\\cF_i$ is proportionally the smallest if $\\cF_i$ is $HM(i,m)$.\n\nWe will use the Kruskal-Katona theorem. To use it in the form we have stated it, we will consider the complement family, as the shade of a family is the shadow of its complement.\n\nObserve that $\\overline{HM(i,m)}$ is an initial segment of the colex ordering if we reorder the elements of $[n]$. Indeed, members of $\\overline{HM(i,m)}$ completely avoid a given element $z$, and then we take all the $(n-i)$-sets but those that contain an $m$-element set $B$. By reordering, we can assume that $z=n$ and $B=\\{n-m,\\dots,n-1\\}$. The sets containing $n$ are the last in the colex order, and a superset $F$ of $B$ cannot be before a set $G\\in\\overline{HM(i,m)}$, as the largest element of $F\\setminus G$ is in $B$, while every element of $G\\setminus F$ is less than $n-m$.\n\n\nThe cascade form of $|\\overline{HM(i,m)}|$ is $\\binom{n-2}{n-i}+\\binom{n-3}{n-i-1}+\\binom{n-4}{n-i-2}+\\dots+\\binom{n-m}{n-i-m+2}=\\sum_{j=2}^m\\binom{n-j}{n-i-j+2}$. Let $\\cG$ be a non-empty $(n-i)$-uniform family \nwith $|\\cG|< |\\overline{HM(i,m)}|$ and cascade form $|\\cG|=\\sum_{j=2}^{m'}\\binom{n_j}{n-i-j+2}$. Observe that $n_2\\le n-2$. This implies that for any $h$, $n_h\\le n-h$.\n\n\n\n\n\nWe partition $\\overline{HM(i,m)}$ into $m-1$ parts: $\\cH_2$ consists of the first $\\binom{n-2}{n-i}$ sets of $\\overline{HM(i,m)}$ in the colex order, $\\cH_3$ consists of the next $\\binom{n-3}{n-i-1}$ sets, and so on. $\\cH_j$ for $j\\le m$ consists of the $\\binom{n-j}{n-i-j+2}$ sets that come after $\\cH_2,\\dots, \\cH_{j-1}$, i.e. after the first $\\sum_{j'=2}^{j-1}\\binom{n-j'}{n-i-j'+2}$ sets in the colex order.\nWe also partition $\\cG$ into $m'-1$ parts: for $2\\le j\\le m'$, let $\\cG_j$ consist of the next $\\binom{n_j}{n-i-j+2}$ sets of $\\cG$ in the colex order, after $\\cG_2,\\dots,\\cG_{j-1}$; here we use that $n_h\\le n-h$ for any $h>2$.\n\n\n\nLet us assume that $n_2=n-2$, $n_3=n-3$,...,$n_{h}=n-h$ and $n_{h+1}< n-h-1$. \nLet $\\cH^*=\\cup_{j=2}^h \\cH_j$, $\\cH^{**}=\\cup_{j=h+1}^m \\cH_j$, $\\cG^*=\\cup_{j=2}^h \\cG_j$, $\\cG^{**}=\\cup_{j=h+1}^{m'} \\cG_j$. 
Observe that we have $|\\cH^*|=|\\cG^*|$ and $|\\Delta\\cH^*|\\le |\\Delta \\cG^*|$ since $\\cH^*$ is an initial segment of the colex ordering. We also have $|\\cH^{**}|\\ge \\binom{n-h-1}{n-i-h+1}$ and $|\\cG^{**}|<\\binom{n-h-1}{n-i-h+1}$.\n\nLet $a:=|\\cH^*|$, $c:=|\\cH^{**}|$, $\\alpha=|\\Delta\\cH^*|\/|\\cH^*|$, $b:=|\\Delta\\cH^{**}\\setminus\\Delta\\cH^*|$, $b'=|\\Delta\\cG|-|\\Delta\\cG^*|$, $c':=|\\cG^{**}|$ and $\\alpha'=|\\Delta\\cG^*|\/|\\cG^*|$. Our goal is to apply \\textbf{(ii)} of Proposition \\ref{triv}. By the above, we have $c>c'$. Now we will show that the other conditions are satisfied as well.\n\nWe let $p_\\ell:=\\binom{n-\\ell}{n-i-\\ell+1}=|\\Delta\\cH_\\ell\\setminus\\Delta(\\bigcup_{\\ell'=2}^{\\ell-1}\\cH_{\\ell'})|$, i.e. the number of sets added to the shadow of $\\bigcup_{\\ell'=2}^{\\ell}\\cH_{\\ell'}$ by $\\cH_\\ell$.\nObserve first that $p_\\ell\/|\\cH_\\ell|=(n-i-\\ell+2)\/(i-1)$, thus $p_\\ell\/|\\cH_\\ell|$ decreases as $\\ell$ increases. This implies that\n$p_\\ell\/|\\cH_\\ell|\\le p_{h+1}\/|\\cH_{h+1}|$ for every $\\ell>h+1$. Therefore, we have that \\begin{equation}\\label{ineq0} \\frac{b}{c}=\\frac{|\\Delta\\cH^{**}\\setminus\\Delta\\cH^*|}{|\\bigcup_{\\ell=h+1}^{m} \\cH_\\ell|}=\\frac{\\sum_{\\ell=h+1}^{m}p_\\ell}{|\\bigcup_{\\ell=h+1}^{m} \\cH_\\ell|}\\le \\frac{\\frac{p_{h+1}}{|\\cH_{h+1}|}|\\bigcup_{\\ell=h+1}^{m} \\cH_\\ell|}{|\\bigcup_{\\ell=h+1}^{m} \\cH_\\ell|}=\\frac{p_{h+1}}{|\\cH_{h+1}|}.\n\\end{equation}\n\nSimilarly, we have that \\[\\alpha=\\frac{|\\Delta\\cH^*|}{|\\cH^*|}=\\frac{|\\Delta\\cup_{j=2}^h \\cH_j|}{|\\cup_{j=2}^h \\cH_j|}=\\frac{\\sum_{j=2}^{h}p_j}{|\\bigcup_{j=2}^{h} \\cH_j|}\\ge \\frac{\\frac{p_{h+1}}{|\\cH_{h+1}|}|\\bigcup_{j=2}^{h} \\cH_j|}{|\\bigcup_{j=2}^{h} \\cH_j|}=\\frac{p_{h+1}}{|\\cH_{h+1}|}\\ge\\frac{b}{c}, \\]\nwhere the last inequality uses (\\ref{ineq0}).\n\nLet $x< n-h-1$ be defined by $\\binom{x}{n-i-h+1}:=\\binom{n_{h+1}}{n-i-h+1}+\\binom{n_{h+2}}{n-i-h}+\\ldots+\\binom{n_{m'}}{n-i-m'+2}=|\\bigcup_{\\ell=h+1}^{m'} \\cG_\\ell|$. We have $|\\Delta\\cG|\\ge |\\Delta\\cH^*|+\\binom{n_{h+1}}{n-i-h}+\\binom{n_{h+2}}{n-i-h-1}+\\ldots+\\binom{n_{m'}}{n-i-m'+1}$ by the Kruskal-Katona theorem. We claim that \n\n\\begin{equation}\\label{ineq1} \\binom{n_{h+1}}{n-i-h}+\\binom{n_{h+2}}{n-i-h-1}+\\ldots+\\binom{n_{m'}}{n-i-m'+1}\\ge \\binom{x}{n-i-h}.\n\\end{equation}\n\nIndeed, the left hand side is the sharp lower bound on the size of the shadow of an $(n-i-h+1)$-uniform family of size $\\binom{x}{n-i-h+1}$ by the Kruskal-Katona theorem, while the right hand side is the not necessarily sharp lower bound on the size of the shadow of the same family by Lov\\'asz's version of the shadow theorem. We have $\\frac{b'}{c'}=\\frac{\\binom{n_{h+1}}{n-i-h}+\\binom{n_{h+2}}{n-i-h-1}+\\ldots+\\binom{n_{m'}}{n-i-m'+1}}{\\binom{x}{n-i-h+1}}\\ge\\frac{\\binom{x}{n-i-h}}{\\binom{x}{n-i-h+1}}\\ge \\frac{\\binom{n-h-1}{n-i-h}}{\\binom{n-h-1}{n-i-h+1}}=p_{h+1}\/|\\cH_{h+1}|\\ge \\frac{b}{c}$. In the inequalities here we used (\\ref{ineq1}) first, then \\textbf{(i)} of Proposition \\ref{triv} and finally (\\ref{ineq0}). \n\nNow we can apply \\textbf{(ii)} of Proposition \\ref{triv} to show that\n$\\frac{\\alpha a+b}{a+c}\\le \\frac{\\alpha a+b'}{a+c'}\\le \\frac{\\alpha' a+b'}{a+c'}$.\nThis means\n$|\\Delta\\overline{HM(i,m)}|\/|\\overline{HM(i,m)}|\\le|\\Delta \\cG|\/|\\cG|$. By taking the complements, we obtain that $|\\nabla HM(i,m)|\/|HM(i,m)|\\le |\\nabla \\cG'|\/|\\cG'|$ for any $i$-uniform family $\\cG'$ with $|\\cG'|\\le |HM(i,m)|$. 
In particular, $|\\nabla HM(i,m)|\/|HM(i,m)|\\le g_{i+1}\/f_i$, completing the proof.\n\n\n\\end{proof}\n\n\\section{Proof of the main theorem}\\label{mainbiz}\n\nIn this section we finish the proof of Theorem \\ref{main}.\nIt is easy to see that we can consider only families not containing $[n]$.\nIt is enough to show that if a profile vector ${\\bf p}$ of a non-trivial intersecting family $\\mathcal{F}$ gives the unique maximum for a weight function $w$, then ${\\bf p}\\in \\Gamma_b \\cup \\Gamma_c \\cup \\Gamma_d\\cup \\Gamma_e$.\n\n\n\nAn important observation is that if $F\\in\\cF$, $F \\subset G$ and $G$ has positive weight, then $G$ is in the maximum family (as adding it would not violate any of the properties). In the proof we often start with fixing the maximum size $m$ of members; this implies that larger sets (except possibly $[n]$) do not have positive weight. Note that if $w(m)>0$, then $\\cF_m$ is non-trivial intersecting. Indeed, if $\\cF_m$ is trivial, then all its members contain a given element $x$ and there is a set $F\\in\\cF$ of smaller size not containing $x$. But then all the $m$-element sets which contain $F$ are in $\\cF$, even those which do not contain $x$, a contradiction.\n\n\nWe continue the proof with a case analysis.\n\n{\\bf Case 1. $w(i)\\le 0$ for every $1<i<n$.}\n\nIn this case the maximum weight family has as few and as small members as possible, hence its profile is a minimal vector of $P$, and the maximum is attained at an extreme point of $P$, i.e. at an element of $\\Gamma_e$.\n\n{\\bf Case 2. $w(i)>0$ for some $1<i<n$.}\n\n{\\bf Case 2c. $w(m)\\ge 0$ and $m>n\/2$.}\n\n\n\n\n\n\n\nLet $m_0$ be the size of the smallest member of the family $\\mathcal{F}$.\n\n\\smallskip\n\n{\\bf Case 2c1. $w(m)\\ge 0$, $m>n\/2$ and $m+m_0> n$.}\n\nLet us consider the following modified weight function. Let $w'(i)$ be the same as $w(i)$ if $m_0 \\le i \\le m$ and negative otherwise. Obviously the maximum non-trivial intersecting family for $w'$ is also $\\mathcal{F}$. Let us examine the intersecting family $\\cF'$ with maximum weight $w'$ now. One can easily see using Theorem \\ref{metszo} that the profile of $\\cF'$ can be obtained from ${\\bf a_{m_0}}$ by changing some coordinates to 0. If $w(m)=0$, then $\\cF'$ might contain no $m$-element sets, but even in this case we can add every $m$-element set to $\\cF'$ without decreasing the weight (and without ruining the intersecting property). The resulting family $\\mathcal{F}''$ is non-trivial intersecting, and $w'(\\mathcal{F}'')=w'(\\mathcal{F}') \\ge w'(\\mathcal{F}) \\ge w(\\mathcal{F})$, thus $\\mathcal{F}''$ must have the same profile as $\\mathcal{F}$. The profile of $\\mathcal{F}''$ is in $\\Gamma_c$.\n\n\\smallskip\n\n{\\bf Case 2c2. $w(m)\\ge 0$, $m>n\/2$ and $m+m_0 \\le n$.}\n\nLet $H$ be the set of non-empty levels. Recall that coordinate $i$ of ${\\bf a}$ is $0$ if $i<n\/2$, $\\binom{n-1}{i-1}$ if $i=n\/2$, and $\\binom{n}{i}$ if $i>n\/2$. Let ${\\bf a'}$ be the vector we get from ${\\bf a}$ when we change the coordinates not in $H$ to 0. We will show that ${\\bf p}={\\bf b}_H$, by showing that there is a $\\lambda$ such that $\\lambda {\\bf b}_H + (1-\\lambda){\\bf a'} \\ge {\\bf p}$. We have that ${\\bf b}_H$ and ${\\bf a'}$ are both 0 in the coordinates with negative weight, thus the weight of either ${\\bf b}_H$ or ${\\bf a'}$ is at least as large as the weight of ${\\bf p}$. But that was the unique maximum, thus ${\\bf p}$ is equal to either ${\\bf b}_H$ or ${\\bf a'}$. As ${\\bf p}$ has a non-zero coordinate below $n\/2$, ${\\bf p}$ cannot be equal to ${\\bf a'}$.\n\nLet $i \\le n\/2$ be such that $f_i\/|HM(i,m)|=:\\lambda$ is maximal. Then $\\lambda {\\bf b}_H$ has at least $f_j$ in coordinate $j$ for every $j \\le n\/2$. 
Let us consider now a coordinate $k>n/2$ with $w(k)>0$.

If the family $\cF_i\cup\cF_k$ is trivially intersecting, then $f_k\le |HM(k,m)|$, while ${\bf b}_H$ and ${\bf a'}$ both have at least $|HM(k,m)|$ in coordinate $k$, thus so does $\lambda {\bf b}_H + (1-\lambda){\bf a'}$, completing the proof.

If the family $\cF_i\cup\cF_k$ is non-trivially intersecting, we can apply Lemma \ref{ketto}. It implies that there is a $\lambda'$ such that $(f_i,f_k)\le ((1-\lambda')|HM(i,k)|,\lambda'\binom{n}{k}+(1-\lambda')|HM(k,k)|)$. Coordinate $i$ shows that $(1-\lambda')|HM(i,k)|\ge \lambda |HM(i,m)|$. Since $|HM(i,m)|\ge |HM(i,k)|$, this implies that $\lambda \le 1-\lambda'$. Consider now coordinate $k$. We have \begin{equation}\label{equa}\tag{$\star$}
 f_k\le \lambda'\binom{n}{k}+(1-\lambda')|HM(k,k)|\le \lambda'\binom{n}{k}+(1-\lambda')|HM(k,m)|.
\end{equation} Since $|HM(k,m)|\le \binom{n}{k}$, increasing $\lambda'$ increases the right hand side of (\ref{equa}). Since $\lambda'\le 1-\lambda$, the right hand side is at most $(1-\lambda)\binom{n}{k}+\lambda|HM(k,m)|$, which is coordinate $k$ of $\lambda {\bf b}_H + (1-\lambda){\bf a'}$, completing the proof.

","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\subsubsection*{Search and NP Problems.} \label{search}

Let us compare the inversion problems with another type -- the search problems specified by computable in time $\|x\|^{O(1)}$ relations $P(x,w)$: given $x$, find $w$ s.t. $P(x,w)$. There are two parts to a search problem: (a) a decision problem: decide if a $w$ (called a \trm{witness}) exists, and (b) a constructive problem: actually find $w$.

Any inversion problem is a search problem and any search problem can be restated as an inversion problem. E.g., finding a Hamiltonian cycle $C$ in a graph $G$ can be stated as inverting a function $f(G,C)$, which outputs $G,0\ldots 0$ if $C$ is in fact a Hamiltonian cycle of $G$. Otherwise, $f(G,C) = 0\ldots 0$.

Similarly, any search problem can be reduced to another one equivalent to its decision version. For instance, factoring $x$ reduces to bounded factoring: given $x,b$, find $p,q$ such that $pq=x$, $p\le b$ (where decisions yield construction by binary search).

{\bf Exercise:} Generalize the two above examples to reduce any search problem to an inverting problem and to a decision problem.

The \trm{language} of a problem is the set of all acceptable inputs. For an inversion problem it is the range of $f$. For a search problem it is the set of all $x$ s.t. $P(x,w)$ holds for some $w$. An \trm{NP language} is the set of all inputs acceptable by a P-time \trm{non-deterministic} Turing Machine (sec.~\ref{gm-reduce}). All three classes of languages -- search, inversion and NP -- coincide (NP $\iff$ search is straightforward).

Interestingly, polynomial {\em space} bounded deterministic and non-deterministic TMs have equivalent power. It is easy to modify a TM to have a unique accepting configuration. Any acceptable string will be accepted in time $2^s$, where $s$ is the space bound. Then we need to check $A(x,w,s,k)$: whether the TM can be driven from the configuration $x$ to $w$ in time $<2^k$ and space $s$. For this we need, for every $z$, to check $A(x,z,s,k{-}1)$ and $A(z,w,s,k{-}1)$, which takes space $t_k\le t_{k{-}1} + \|z\|+O(1)$. So, $t_k= O(sk)= O(s^2)$ \cite{Sv}.
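As an aside, the divide-and-conquer recursion is short enough to spell out. The sketch below is ours, not from the notes: a toy reachability test on an explicitly given configuration graph (a real simulation would generate successor configurations on the fly within space $s$). Only $k=O(s)$ frames, each of size $O(s)$, are ever alive at once, mirroring the $O(s^2)$ bound.

\begin{verbatim}
# Toy sketch of the recursive check A(x, w, k):
# can configuration x reach w in at most 2**k steps?
def reach(x, w, k, configs, succ):
    if k == 0:
        return w == x or w in succ(x)      # at most one step
    # A midpoint z splits the path into halves of 2**(k-1) steps;
    # the two recursive calls reuse the same stack space.
    return any(reach(x, z, k - 1, configs, succ) and
               reach(z, w, k - 1, configs, succ)
               for z in configs)

# Example: the cycle 0 -> 1 -> ... -> 7 -> 0.
print(reach(0, 5, 3, range(8), lambda c: {(c + 1) % 8}))  # True
\end{verbatim}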
Search problems are games with P-time transition rules and one move duration. A great hierarchy of problems results from allowing more moves and/or other complexity bounds for transition rules.

\newpage\subsection{Complexity of NP Problems.}\label{compl}

We discussed the (equivalent) inversion, search, and NP types of problems. Nobody knows whether {\em all} such problems are solvable in P-time (i.e.\ belong to P). This question (called P=?NP) is probably the most famous one in Theoretical Computer Science. All such problems are solvable in exponential time, but it is unknown whether any better algorithms generally exist. For many problems the task of finding an efficient algorithm may seem hopeless, while similar or slightly modified problems have been solved. Examples:

\begin{enumerate} \itemsep0pt
 \item Linear Programming: Given an integer $n\times m$ matrix $A$ and vector $b$, find a rational vector $x$ with $Ax<b$. [\ldots]
\end{enumerate}

Take $x>y>0$. By induction, $g{=}\gcd(x,y){=}A*x{-}B*y$, where the integers $A{=}(g/x\bmod y)$ and $B{=}(g/y\bmod x)$ are produced as a byproduct of Euclid's Algorithm. This allows division $(\bmod\ p)$ by any $r$ \trm{coprime} with $p$ (i.e. $\gcd(r,p){=}1$), and the operations $+,-,*,/$ obey all usual arithmetical laws. We will need to compute $(x^q\bmod p)$ in polynomial time. We cannot afford $q\sim2^{\|q\|}$ multiplications. Instead we compute all numbers $x_i=(x_{i{-}1}^2\bmod p)= (x^{2^i}\bmod p)$, $i<\|q\|$. Then we represent $q$ in binary, i.e.\ as a sum of powers of $2$, and multiply $\bmod\ p$ the needed $x_i$'s.

\paragraph{Fermat Test.} The Little Fermat Theorem for every prime $p$ and $x\in[1,p{-}1]$ says: $x^{(p{-}1)}\equiv1\pmod p$. Indeed, the sequence $(xi\bmod p)$ is a permutation of $1,\ldots,p{-}1$. So, $1{\equiv}(\prod_{i<p}(xi))/(\prod_{i<p}i)\equiv x^{p-1}\pmod p$. Some composite $p$ pass this test too, so a stronger \trm{Square Root Test} $T(x,p)$ is used: write $p{-}1=q2^k$ with odd $q$, let $x_0{=}(x^q\bmod p)$ and $x_{i+1}{=}(x_i^2\bmod p)$; $T$ succeeds (exhibiting compositeness) if some $x_i\not\equiv\pm1$ while $x_{i+1}\equiv1$, or if $x_k\not\equiv1$. Primes pass, since modulo a prime only $\pm1$ are square roots of $1$. If $p{=}a^2b$ with $a{>}1$, then $x{=}(1{+}p/a)$ works for Fermat Test: $(1{+}p/a)^{p{-}1}{=}1{+}(p/a)(p{-}1){+}(p/a)^2(p{-}1)(p{-}2)/2{+}\ldots \equiv 1{-}p/a{\not\equiv}1\pmod p$, since $p|(p/a)^2$. Otherwise $p{=}ab$, $\gcd(a,b){=}1{<}a{<}b$. Take the {\bf greatest} $i$ such that $x_i{\not\equiv}1$ for some $x$ coprime with $p$. It exists: $(-1)^q\equiv-1$ for odd $q$. So, $(x_i)^2\equiv 1\not\equiv x_i\pmod p$. (Or $i{=}k$, so Fermat test works.) Then $x'{=}1{+} b(1/b\bmod a)(x{-}1)\equiv1{\equiv}x'_i\pmod b$, while $x'_i{\equiv}x_i\pmod a$. So, either $x_i$ or $x'_i$ is $\not\equiv\pm1\pmod p$.

Now, $T(y,p)$ succeeds with {\em most} $y$, as it does with $x_i$ (or $x'_i$): the function $y\mapsto xy$ is 1-1 and $T$ cannot fail with both $y$ and $xy$. This test can be repeated for many randomly chosen $y$. Each time $T$ fails, we are twice more sure that $p$ is prime. The probability of $300$ failures on a composite $p$ is $<2^{-300}$; its inverse exceeds the number of atoms in the known Universe.

\newpage\subsection{Randomized Algorithms and Random Inputs.}\label{average}

\trm{Las-Vegas} algorithms, unlike Monte-Carlo, never give wrong answers. Unlucky coin-flips just make them run longer than expected. Quick-Sort is a simple example. It is about as fast as deterministic sorters, but is popular due to its simplicity.
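For reference, a minimal Python rendering of the algorithm (ours, illustrative only); the notes' own description and analysis follow.

\begin{verbatim}
import random

def quicksort(a):
    # Las-Vegas: the answer is always correct,
    # only the running time depends on the coin flips.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
\end{verbatim}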
It sorts an array $a[1\\ldots n]$ of $n>2$ numbers by\nchoosing in it a random \\trm {pivot}, splitting the remaining array in two\nby comparing with the pivot, and calling itself recursively on each half.\n\nFor easy reference, rename the array entries with their positions $1,\\ldots,n$\nin the {\\em sorted output} (no effect on the algorithm). Denote $t(i)$ the\n(random) time $i$ is chosen as a pivot. Then $i$ will ever be compared with $j$\niff either $t(i)$ or $t(j)$ is the smallest among $t(i),\\ldots,t(j)$.\nThis has $2$ out of $|j{-}i|+1$ chances. So, the expected number of comparisons\nis $\\sum_{i,j>i} 2\/(1{+}j{-}i)= -4n+ (n{+}1)\\sum_{k=1}^n 2\/k= 2n(\\ln n-O(1))$.\n Note, that the expectation of the sum of variables is\n the sum of their expectations (not true, say, for product).\n\nThe above Monte-Carlo and Las-Vegas algorithms require choosing strings {\\em at\nrandom} with uniform distribution. We mentally picture that as flipping a coin.\n(Computers use \\trm {pseudo-random generators} rather than coins in hope,\nrarely supported by proofs, that their outputs have all the statistical\nproperties of truly random coin flips needed for the analysis of the algorithm.)\n\n\\paragraph {Random Inputs} to Deterministic Algorithms are analyzed similarly\nto algorithms that flip coins themselves and the two should not be confused.\nConsider an example: Someone is interested in knowing whether or not certain\ngraphs contain Hamiltonian Cycles. He offers graphs and pays \\$100 if we show\neither that the graph {\\em has} or that it {\\em has not} Hamiltonian Cycles.\nHamiltonian Cycle problem is NP-Complete, so it should be very hard for {\\em \nsome}, but not necessarily for {\\em most} graphs. In fact, if our patron chooses\nthe graphs uniformly, a fast algorithm can earn us the \\$100 {\\em most of the\ntime}! Let all graphs have $n$ nodes and, say, $d<\\ln n\/2$ mean degree and be\nequally likely. Then we can use the following (deterministic) algorithm:\\\\\nOutput ``{\\bf No} Hamiltonian Cycles\" and collect the \\$100, if the graph has\nan isolated node. Otherwise, pass on that graph and the money. Now, how often\ndo we get our \\$100. The probability that a given node $A$ of the graph is\nisolated is $(1-1\/n)^{dn}>(1-O(1\/n))\/\\sqrt n$. Thus, the probability that\n{\\em none} of $n$ nodes is isolated (and we lose our \\$100) is $O((1-1\/\\sqrt\nn)^n)= O(1)\/e^{\\sqrt n}$ and vanishes fast. Similar calculations can be made\nwhenever $r = \\lim (d\/\\ln n)<1$. If $r>1$, other fast algorithms can actually\nfind a Hamiltonian Cycle.\\\\ See: \\cite{jnsn,karp-pr,gu}. See also \\cite{vl}\nfor a proof that another graph problem is NP-complete even on average.\nHow do this HC algorithm and the above primality test differ?\n\n\\begin{itemize}\\item The primality algorithm works for {\\em all} instances.\nIt tosses the coin itself and can repeat it for a more reliable answer.\nThe HC algorithms only work for {\\em most} instances\n(with isolated nodes or generic HC). \\item In the HC algorithms, we must\ntrust the customer to follow the presumed random procedure.\\\\ If he cheats\nand produces rare graphs often, the analysis breaks down.\\end{itemize}\n\n\\paragraph {Symmetry Breaking.} Randomness comes into Computer Science\nin many other ways besides those we considered.\nHere is a simple example: avoiding conflicts for shared resources.\n\n{\\bf Dining Philosophers.} They sit at a circular table.\nBetween each pair is either a knife or a fork, alternating. 
The problem is,\nneighboring diners must share the utensils, cannot eat at the same time. How\ncan the philosophers complete the dinner given that all of them must act in the\nsame way without any central organizer? Trying to grab the knives and forks at\nonce may turn them into fighting philosophers. Instead they could each flip a\ncoin, and sit still if it comes up heads, otherwise try to grab the utensils.\\\\\nIf two diners try to grab the same utensil, neither succeeds.\nIf they repeat this procedure enough times,\\\\ most likely each philosopher\nwill eventually get both a knife and a fork without interference.\n\nWe have no time to actually analyze this and many other scenaria,\nwhere randomness is crucial.\\\\\nInstead we will take a look into the concept of Randomness itself.\n\n\\newpage\\subsection {Arithmetization:\n One-Player Games with Randomized Transition.}\\label{ip}\n\nThe results of section~\\ref{games} can be extended to \\trm {Arthur-Merlin}\ngames which have one player -- Merlin -- but a randomized transition function,\neffectively using a dummy second player -- Arthur -- whose moves are just\ncoin-flips. We will reduce generic games to games in which any Merlin's\nstrategy in any losing position has exponentially small chance to win.\n\nThe trick achieving this, called \\trm {arithmetization}, expresses\nthe boolean functions as low degree polynomials, and applies them\nto $\\Z_p$-tokens (let us call them \\trm {bytes}) instead of bits.\nIt was proposed in Noam Nisan's article widely distributed over email in the\nFall of 1989 and quickly used in a flood of follow-ups for proving relations\nbetween various complexity classes. We follow \\cite{shamir,fl}.\n\nLet $g$ be the (ATM-complete) game of 1d-Chess (\\ref{dc1}), $r(m,x)$ with\n$x{=}x_1\\ldots x_s$, $m,x_i{\\in}\\{0,1\\}$ be its transition rule. Configurations\ninclude $x$ and a remaining moves counter $c\\le2^s$. They are terminal if\n$c{=}0$, winning to the player $x_1$. Intermediate configurations $(m,x,y)$\nhave $y$ claimed as a prefix of $r(m,x)$.\n\nLet $t(m,x,y)$ be $1$ if $y{=}r(m,x)$, else $t{=}0$. 1d-Chess is simple, so $t$\ncan be expressed as a product of $s$ multilinear $O(1)$-sized terms, any\nvariable shared by at most two terms. Thus $t$ is a polynomial, quadratic in\neach $m,x_i,y_i$. Let $V_c(x)$ be $1$ if the active player has a strategy to\nwin in the $c$ moves left, i.e. $V_0(x)\\edf x_1$, $V_{c+1}(x)\\edf$\n$1{-}V_c(0,x,\\{\\})V_c(1,x,\\{\\})= 1{-}V_c(r(0,x))V_c(r(1,x))$, where\n$V_c(m,x,y)\\edf V_c(y)t(m,x,y)$ for $y=y_1\\ldots y_s$ or\n$V_c(m,x,y)\\edf V_c(m,x,y{\\circ}0){+}V_c(m,x,y{\\circ}1)$ for shorter $y$.\n($\\circ$ stands for concatenation.)\n\n$G$ will allow Merlin to prove $x$ is winning i.e., $V_c(x)=1$. Configurations\n$X=(m,x,y,v)$ of $G$ replace bits with $\\Z_p$ bytes and add $v{\\in}\\Z_p$\nreflecting Merlin's claimed $V$. The polynomial $V_c(m,x,y)$ is quadratic in\neach $x_i,m$, as $t(m,x,y)$ is. Then $V_c(y)$ has degree 4 in $y_i$ and\n$V_c(m,x,y)$ has degree 6 in $y_i$.\n\nMerlin starts with choosing a $2s$-bit prime $p$. Then at each step with\n$X=(m,x,y,v)$ Merlin gives an $O(1)$-degree polynomial $P$ with\n$P(1)P(0)=1{-}v$ for $s$-byte $y$ or $P(0){+}P(1)=v$ for shorter $y$. Arthur\nthen selects a random $r{\\in}\\Z_p$ and $X$ becomes $(r,y,\\{\\},P(r))$ for\n$s$-byte $y$ or $(m,x,y{\\circ}r,P(r))$ for shorter $y$.\n\nIf the original $v$ is correct, then Merlin's obvious winning strategy\nis to always provide the correct polynomial. 
If the original $v$ is wrong, then either $P(1)$ or $P(0)$ must be wrong too, so they will agree only with a wrong $P$. A wrong $P$ can agree with the correct one only on few (bounded by degree) points. Thus it will give a correct value only to an exponentially small fraction of random $r$. Thus the wrong $v$ will propagate throughout the game until it becomes obvious in the terminal configuration.

\vspace{2pc}
This reduction of Section~\ref{games} games yields a hierarchy of powers of Arthur-Merlin games, i.e.\ the types of computations that have reductions to $V_c(x)$ of such games and back. The one-player games with randomized transition rule $r$ running in space linear in the size of the initial configuration are equivalent to exponential time deterministic computations. If instead the running time $T$ of $r$ combined for all steps is limited by a polynomial, then the games are equivalent to polynomial space deterministic computations.

An interesting twist comes in one move games with polylog $T$, too tiny to examine the initial configuration $x$ and Merlin's move $m$. Not only is this obstacle removed, but the equivalence to NP is achieved, with a little care. Namely, $x$ is set in an error-correcting code, and $r$ is given $O(\log\|x\|)$ coin-flips and random access to the digits of $x,m$. Then the membership proof $m$ is reliably verified by the randomized $r$.
See \cite{holo} for details and references.

\newpage\section{Randomness}\label{rand}

\subsection{Randomness and Complexity.}\label{kolm}

Intuitively, a random sequence is one that has the same properties as a sequence of coin flips. But this definition leaves the question, what {\em are} these properties? Kolmogorov resolved these problems with a new definition of random sequences: those with no description noticeably shorter than their full length. See survey and history in \cite{ku87,vitan}.

\paragraph{Kolmogorov Complexity} $K_A(x|y)$ of the string $x$ given $y$ is the length of the shortest program $p$ which lets algorithm $A$ transform $y$ into $x$: $\min\{\|p\|: A(p,y)=x\}$. There exists a Universal Algorithm $U$ such that $K_U(x)\le K_A(x)+O(1)$ for every algorithm $A$. This constant $O(1)$ is bounded by the length of the program $U$ needs to simulate $A$. We abbreviate $K_U(x|y)$ as $K(x|y)$, or $K(x)$ for empty $y$.

An example: For $A:x\mapsto x$, $K_A(x)=\|x\|$, so $K(x)\le n+O(1)$ for any $n$-bit $x$. The \trm{rarity} (deficiency of randomness) of an $n$-bit string $x$ is $d(x):= n-K(x|n)$. What is the probability of $d(x)>i$, for uniformly random $n$-bit $x$? There are $2^n$ strings $x$ of length $n$. If $d(x)>i$, then $K(x|n)< n-i$. There are $<2^{n-i}$ programs of such length, generating $<2^{n-i}$ strings. So, the probability of such strings is $<2^{n-i}/2^n= 2^{-i}$ (regardless of $n$)! Even for $n= 1,000,000$, the probability of $d(x)>300$ is absolutely negligible (provided $x$ was indeed generated by fair coin flips).

Small rarity implies all other enumerable properties of random strings. Indeed, let such a property ``$x{\not\in}P$'' have a negligible probability and $S_n$ be the number of $n$-bit strings violating $P$, so $s_n=\log(S_n)$ is small. To generate $x$, we need only the algorithm enumerating $S_n$ and the $s_n$-bit position of $x$ in that enumeration. Then the rarity $d(x)> n-(s_n{+}O(1))$ is large. Each $x$ violating $P$ will thus also violate the ``small rarity'' requirement. In particular, small rarity implies unpredictability of bits of random strings: a short algorithm with high prediction rate would assure large $d(x)$.
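A toy experiment may help (ours; a general-purpose compressor stands in, crudely, for the universal machine -- its output length only upper-bounds $K$):

\begin{verbatim}
import os, zlib

def deficiency_lower_bound(x: bytes) -> int:
    # |zlib(x)| upper-bounds K(x) up to O(1), so n - |zlib(x)|
    # lower-bounds the rarity d(x) = n - K(x|n)  (in bits).
    n = 8 * len(x)
    return n - 8 * len(zlib.compress(x, 9))

print(deficiency_lower_bound(os.urandom(100000)))  # ~ negative: no defect found
print(deficiency_lower_bound(b"01" * 50000))       # large: randomness refuted
\end{verbatim}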
However, the randomness can only be refuted, cannot be confirmed:\nwe saw, $K$ and its lower bounds are not computable.\n\n\\paragraph {Rectification of Distributions.} We rarely have a source of\nrandomness with precisely known distribution. But there are very efficient ways\nto convert ``roughly\" random sources into perfect ones. Assume, we have such a\nsequence with weird unknown distribution. We only know that its long enough\n($m$ bits) segments have min-entropy $>k+i$, i.e. probability $<1\/2^{k+i}$,\ngiven all previous bits. (Without such $m$ we would not know a segment needed\nto extract even one not fully predictable bit.) No relation is required between\n$n,m,i,k$, but useful are small $m,i,k$ and huge $n=o(2^k\/i)$. We can fold $X$\ninto an $n\\times m$ matrix. We also need a small $m\\times i$ matrix $Z$,\nindependent of $X$ and {\\bf really} uniformly random (or random Toeplitz, i.e.\nwith restriction $Z_{a+1,b+1}=Z_{a,b}$). Then the $n\\times i$ product $XZ$ has\nuniform with accuracy $O(\\sqrt{n i\/2^k})$ distribution. This follows from\n\\cite{gl}, which uses earlier ideas of U. and V. Vazirani.\n\n\\newpage\\subsection {Pseudo-randomness.} \\label{pseudor}\n\nThe above definition of randomness is very robust, if not practical. True\nrandom generators are rarely used in computing. The problem is {\\em not} that\nmaking a true random generator is impossible: we just saw efficient ways to\nperfect the distributions of biased random sources. The reason lies in many\nextra benefits provided by pseudorandom generators. E.g., when experimenting\nwith, debugging, or using a program one often needs to repeat the exact same\nsequence. With a truly random generator, one actually has to record all its\noutcomes: long and costly. The alternative is to generate \\trm {pseudo-random}\nstrings from a short seed. Such methods were justified in \\cite{bm,yao}:\n\nFirst, take any one-way permutation $F_n(x)$ (see sec.~\\ref{crypt}) with a\n\\trm {hard-core} bit (see below) $B_p(x)$ which is easy to compute from $x,p$,\nbut infeasible to guess from $p,n,F_n(x)$ with any noticeable correlation.\\\\\n Then take a random \\trm {seed} of three $k$-bit parts $x_0,p,n$ and Repeat:\n($S_i{\\gets}B_p(x_i)$; $x_{i+1}{\\gets}F_n(x_i)$; $i{\\gets}i{+}1$).\n\nWe will see how distinguishing outputs $S$ of this generator\nfrom strings of coin flips would imply the ability to invert $F$.\nThis is infeasible if $F$ is one-way. But if P=NP (a famous open problem),\nno one-way $F$, and no pseudorandom generators could exist.\n\nBy Kolmogorov's standards, pseudo-random strings are not random: let $G$ be the\ngenerator; $s$ be the seed, $G(s) = S$, and $\\|S\\|\\gg k=\\|s\\|$. Then $K(S)\\le\nO(1)+k\\ll\\|S\\|$, thus violating Kolmogorov's definition.\\\\ We can distinguish\nbetween truly random and pseudo-random strings by simply trying all short\nseeds. However this takes time exponential in the seed length. Realistically,\npseudo-random strings will be as good as a truly random ones if they can't\nbe distinguished in feasible time. Such generators we call \\trm {perfect}.\n\n\\paragraph {Theorem:} \\cite{yao} Let $G(s)=S\\in\\{0,1\\}^n$ run in time $t_G$.\n Let a probabilistic algorithm $A$ in expected (over internal coin flips)\n time $t_A$ accept $G(s)$ and truly random strings with different by $d$\nprobabilities. 
Then, for random $i$, one can use $A$ to guess $S_i$\nfrom $S_{i+1},S_{i+2}, \\ldots$ in time $t_A+t_G$ with correlation $d\/O(n)$.\n\n\\paragraph {Proof.} Let $p_i$ be the probability that $A$ accepts $S=G(s)$\nmodified by replacing its first $i$ digits\\\\ with truly random bits.\nThen $p_0$ is the probability of accepting $G(s)$ and must differ by $d$ from\\\\\nthe probability $p_n$ of accepting random string. Then $p_{i-1}-p_i = d\/n$, for\nrandomly chosen $i$.\\\\ Let $P_0$ and $P_1$ be the probabilities of accepting\n$r0x$ and $r1x$ for $x=S_{i+1},S_{i+2},\\ldots$, and random $(i{-}1)$-bit $r$.\\\\\nThen $(P_1{+}P_0)\/2$ averages to $p_i$, while $P_{S_i}=P_0{+}(P_1{-}P_0)S_i$\naverages to $p_{i-1}$ and\\\\ $(P_1{-}P_0) (S_i{-}1\/2)$ to $p_{i-1}{-}p_i=d\/n$.\nSo, $P_1{-}P_0$ has the stated correlation with $S_i.\\qed$\n\nIf the above generator was not perfect, one could guess $S_i$ from the sequence\n$S_{i+1},S_{i+2},\\ldots$\\\\ with a polynomial (in $1\/\\|s\\|$) correlation.\n But, $S_{i+1}, S_{i+2}\\ldots$ can be produced from $p,n,x_{i+1}$.\\\\\n So, one could guess $B_p(x_i)$ from $p,n,F(x_i)$ with correlation $d\/n$,\n which cannot be done for hard-core $B$.\n\n\\paragraph {Hard Core.} The key to constructing a pseudorandom generator\nis finding a hard core for a one-way $F$. The following $B$ is hard-core\nfor any one-way $F$, e.g., for Rabin's OWF in sec.~\\ref{crypt}.\\\\\n\\cite{Knuth} has more details and references.\n\nLet $B_p(x)=(x\\cdot p)= (\\sum_ix_ip_i\\bmod2)$. \\cite{gl} converts\nany method $g$ of guessing $B_p(x)$ from $p,n,F(x)$ with correlation\n$\\varepsilon$ into an algorithm of finding $x$, i.e. inverting $F$\n(slower $\\varepsilon^2$ times than $g$).\n\n\\paragraph {Proof.} (Simplified with some ideas of Charles Rackoff.)\nTake $k=\\|x\\|=\\|y\\|$, $j=\\log(2k\/\\varepsilon^2)$, $v_i= 0^i10^{k-i}$.\nLet $B_p(x) =(x\\cdot p)$ and $b(x,p)=(-1)^{B_p(x)}$.\nAssume, for $y=F_n(x)$, $g(y,p,w)\\in\\{\\pm 1\\}$ guesses $B_p(x)$\nwith correlation $\\sum_p2^{-\\|p\\|}b(x,p) g_p >\\varepsilon$, where $g_p$\nabbreviates $g(y,p,w)$, since $w,y$ are fixed throughout the proof.\n\n$(-1)^{(x\\cdot p)}g_p$ averaged over ${>}2k\/\\varepsilon^2$ random pairwise\nindependent $p$ deviates from its mean (over all $p$) by ${<}\\varepsilon$\n(and so is ${>}0$) with probability $>1-1\/2k$. The same for\n$(-1)^{(x\\cdot[p+v_i])} g_{p+v_i}= (-1)^{(x\\cdot p)} g_{p+v_i} (-1)^{x_i}$.\n\nTake a random $k\\times j$ binary matrix $P$. The vectors $Pr$,\n$r{\\in}\\{0,1\\}^j\\setminus\\{0^j\\}$ are pairwise independent. So,\nfor a fraction $\\ge1-1\/2k$ of $P$, sign$(\\sum_r(-1)^{xPr}g_{Pr+v_i})=(-1)^{x_i}$.\nWe could thus find $x_i$ for all $i$ with probability $>1\/2$\nif we knew $z=xP$. But $z$ is short: we can try all its\n$2^j$ possible values and check $y=F_n(x)$ for each !\n\nSo the inverter, for a random $P$ and all $i,r$, computes $G_i(r)=g_{Pr+v_i}$.\nIt uses Fast Fourier on $G_i$ to compute $h_i(z)=\\sum_rb(z,r)G_i(r)$. The sign\nof $h_i(z)$ is the $i$-th bit for the $z$-th member of output list. $\\qed$\n\n\\newpage\\subsection {Cryptography.} \\label{crypt}\n\n\\paragraph {Rabin's One-way Function.} Pick random prime numbers $p,q,\\|p\\|=\n\\|q\\|$ with two last bits ${=}1$, i.e. with odd $(p{-}1)(q{-}1)\/4$. Then\n$n=pq$ is called a Blum number. Its length should make factoring infeasible.\n\n Let $Q_n=(Z^*_n)^2$ be the set of squares,\n i.e. 
\\trm {quadratic residues} (all residues are assumed $\\pmod n$).\n\n\\paragraph {Lemma.} Let $n=pq$ be a Blum number, $F: x\\mapsto x^2\\in Q_n$.\nThen (1) $F$ is a permutation on $Q_n$\\\\ and (2)\nThe ability to invert $F$ on random $x$ is equivalent to that of factoring $n$.\n\n\\vspace{-8pt}\\paragraph {Proof.} (1) $t{=}(p{-}1)(q{-}1)\/4$ is odd, so\n$u{=}(t{+}1)\/2$ is an integer. Let $x{=}F(z)$. Both $p{-}1$ and $q{-}1$\ndivide~$2t$. So, by Fermat's little theorem, both $p$, $q$ (and, thus $n$)\ndivide $x^t{-}1\\equiv z^{2t}{-}1$. Then $F(x)^u\\equiv x^{2u}=xx^t\\equiv x$.\n\n(2) The above $y^u$ inverts $F$. Conversely, let $F(A(y))=y$ for a fraction\n$\\varepsilon$ of $y\\in Q_n$.\\\\ Each $y\\in Q_n$ has $x,x'{\\ne}\\pm x$ with\n$F(x){=}F(x'){=}y$, both with equal chance to be chosen at random.\\\\\nIf $F(x)$ generates $y$ while $A(y)=x'$ the Square Root Test\n(\\ref{prime}) has both $x,x'$ for factoring $n.\\qed$\n\nSuch one-way permutations, called ``trap-door\", have many applications;\nwe look at cryptography below.\n\nPicking random primes is easy: they have density $1\/O(\\|p\\|)$.\nIndeed, one can see that $\\binom{2n}n$ is divisible by every prime\n$p{\\in}[n,2n]$ but by no prime $p{\\in}[\\frac23n,n]$ or prime power $p^i{>}2n$.\nSo, $(\\log\\binom{2n}n)\/ \\log n=2n\/\\log n-O(1)$ is an upper bound on\nthe number of primes in $[n,2n]$ and a lower bound on that in $[1,2n]$\n(and in $[3n,6n]$ as a simple calculation shows).\nAnd fast VLSI exist to multiply long numbers and check primality.\n\n\\paragraph {Public Key Encryption.}\n\nA perfect way to encrypt a message $m$ is to add it $\\bmod2$ bit by bit to a\nrandom string $S$ of the same length $k$. The resulting encryption $m \\oplus S$\nhas the same uniform probability distribution, no matter what $m$ is. So it is\nuseless for the adversary who wants to learn something about $m$, without\nknowing $S$. A disadvantage is that the communicating parties must share a\nsecret $S$ as large as all messages to be exchanged, combined. \\trm {Public\nKey} Cryptosystems use two keys. One key is needed to encrypt the messages and\nmay be completely disclosed to the public. The \\trm {decryption} key must still\nbe kept secret, but need not be sent to the encrypting party. The same keys may\nbe used repeatedly for many messages.\n\nSuch cryptosystem can be obtained \\cite{b-gw} by replacing the above random $S$\nby pseudorandom $S_i= (s_i\\cdot x)$; $s_{i+1} =(s_i^2\\ \\bmod n)$. Here a Blum\nnumber $n=pq$ is chosen by the Decryptor and is public, but $p,q$ are kept\nsecret. The Encryptor chooses $x\\in Z_2^{\\|n\\|},s_0\\in Z_n$ at random and sends\n$x,s_k, m{\\oplus} S$. Assuming factoring is intractable for the adversary, $S$\nshould be indistinguishable from random strings (even with known $x,s_k$).\nThen this scheme is as secure as if $S$ were random. The Decryptor\nknows $p,q$ and can compute $u,t$ (see above) and $v=(u^{k-1}\\bmod t)$.\nSo, he can find $s_1=(s_k^v\\bmod n)$, and then $S$ and $m$.\n\nAnother use of the intractability of factoring is digital signatures\n\\cite{rsa,bb-sg}. Strings $x$ can be released as authorizations\nof $y=(x^2\\bmod n)$. Verifying $x$, is easy but the ability of\nforging it for generic $y$ is equivalent to that of factoring $n$.\n\n\\vfill\\subsubsection* {Go On!}\n\nYou noticed that most of our burning questions are still open. 
Take them on!\n\nStart with reading recent results (FOCS\/STOC is a good source).\nSee where you can improve them.\\\\ Start writing, first notes just for\nyour friends, then the real papers. Here is a little writing advice:\n\nA well written paper has clear components: skeleton, muscles, etc.\\\\\nThe skeleton is an acyclic digraph of basic definitions and statements,\nwith cross-references.\\\\ The meat consists of proofs (muscles) each\n{\\em separately} verifiable by competent graduate students having to read\nno other parts but statements and definitions cited. Intuitive comments,\nexamples and other comfort items are fat and skin: a lack or excess will\nnot make the paper pretty. Proper scholarly references constitute clothing,\nno paper should ever appear in public without! Trains of thought\nwhich led to the discovery are blood and guts: keep them hidden.\nMetaphors for other vital parts, like open problems, I skip out of modesty.\n\n\\vfill\\paragraph {Writing Contributions.} {\\small\n Section~\\ref{models} was originally prepared by Elena Temin,\n Yong Gao and Imre Kifor (BU), others by Berkeley students:\n \\ref{compress} by Mark Sullivan,\n \\ref{win} by Eric Herrmann and Elena Eliashberg,\n \\ref{halt-gm} by Wayne Fenton and Peter Van Roy,\n \\ref{gm-reduce} by Carl Ludewig, Sean Flynn, and Francois Dumas,\n \\ref{invert} by Jeff Makaiwi, Brian Jones and Carl Ludewig,\n \\ref{compl} by David Leech and Peter Van Roy,\n \\ref{tile} by Johnny and Siu-Ling Chan, \\ref{average} by Deborah Kordon,\n \\ref{kolm} by Carl Ludewig, \\ref{pseudor} by Sean Flynn,\n Francois Dumas, Eric Herrmann, \\ref{crypt} by Brian Jones.}\n\n\\newpage\\section {References}\n \\renewcommand\\refname{}\n\n\\vspace*{-1pc}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe massive recent growth of the computational cost of accurate deep learning models, in particular large language models (LLMs), has motivated the development of several advanced {model compression} techniques~\\citep{hoefler2021sparsity, gholami2021survey}, encompassing unstructured and structured pruning, quantization, and knowledge distillation. \nIn this paper, we focus on the unstructured pruning, for which we follow the standard pipeline. Such models are first \\emph{pre-trained} on a large \\emph{upstream} corpus of unlabelled text. Then, they are \\emph{fine-tuned} in a supervised manner on a smaller \\emph{downstream} task, such as question-answering or text classification. \nIn the context of compression, this pipeline led to two paradigms: 1) \\emph{upstream pruning}, followed by fine-tuning of the remaining weights on a downstream task, and 2) \\emph{downstream pruning}, pruning and fine-tuning directly on the downstream task.\n\nA tempting baseline approach in most settings is \\emph{gradual magnitude pruning (GMP)}~\\citep{hagiwara1994, zhu2017prune}, that is, periodically removing the smallest fraction of weights during training, possibly interspersed with fine-tuning steps designed to recover accuracy. \nGMP has been shown to be an extremely strong baseline in the context of computer vision~\\citep{gale2019state, hoefler2021sparsity}.\nHowever, the literature on pruning LLMs, and in particular BERT models~\\cite{sanh2020movement, chen2020lottery, zafrir2021prune}, clearly suggests that GMP \\emph{does not} perform well. 
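Throughout, GMP itself is the textbook procedure: at each pruning step the smallest-magnitude weights are zeroed and masked. A minimal PyTorch-style sketch (ours; the names are illustrative, not our pipeline's API):

\begin{verbatim}
import torch

def magnitude_prune_(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the `sparsity` fraction of smallest-magnitude entries in place;
    return the binary mask, to be re-applied after each optimizer step."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    weight.mul_(mask)
    return mask
\end{verbatim}

In gradual pruning this runs at each pruning step with an increasing sparsity target, and the mask is re-applied between steps so that pruned weights stay at zero.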
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/squad_and_mnli_merged.pdf}\n \\caption{Performance of state-of-the-art unstructured pruning methods relative to the dense $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ model at high sparsities and two tasks, SQuADv1.1 and MNLI.}\n \\label{fig:overview}\n \\vspace{-0.2in}\n\\end{figure}\n\n\\paragraph{Contribution.} In this paper, we re-examine this conclusion and investigate whether GMP can be a competitive baseline, once carefully tuned. \nSpecifically, we show that a well tuned variant which we call GMP$\\scriptstyle\\bigstar$\\,, can produce highly accurate and sparse language models in both upstream and downstream pruning regimes, matching or even outperforming more complex methods. We explore effects of the crucial parameters for gradual pruning, and provide simple and intuitive guidelines on how to integrate them in a principled manner. \n\nOur results are summarized in Figure~\\ref{fig:overview}, which presents performance of state-of-the-art unstructured pruning techniques on two benchmarks. Specifically, we compare GMP$\\scriptstyle\\bigstar$\\, with the Lottery Ticket approach~\\citep{chen2020lottery}, Movement Pruning (MvP)~\\citep{sanh2020movement} (as well as its GMP baseline $\\textnormal{GMP}_{\\textnormal{MvP}}$), upstream Prune OFA~\\citep{zafrir2021prune}, as well as the recently-proposed second-order pruning oBERT~\\citep{kurtic2022optimal}. \nWe observe that: 1) for both benchmarks, GMP$\\scriptstyle\\bigstar$\\, is only second to the more complex oBERT method; 2) GMP$\\scriptstyle\\bigstar$\\, in fact outperforms the highly competitive Prune OFA and MvP methods; and 3) GMP$\\scriptstyle\\bigstar$\\, outperforms both Lottery Tickets and $\\textnormal{GMP}_{\\textnormal{MvP}}$ by extremely wide margins. \n\n\\comment\n}\n\n\\paragraph{Prior Work.} \nFollowing the vast BERT-pruning literature, we focus on the unstructured pruning of the $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ model~\\citep{devlin2018bert}. As previously noted, upstream and downstream pruning paradigms exist, and methods are usually developed and specialized for only one of the two. For example, Movement Pruning (MvP)~\\citep{sanh2020movement, lagunas2021block} for downstream pruning and Prune Once for All (Prune OFA)~\\citep{zafrir2021prune} for upstream pruning. Simplicity and generality of the GMP makes it suitable for both paradigms, without any regime-specific modifications. New and more advanced pruning techniques, which are, contrary to GMP, able to leverage gradients~\\citep{sanh2020movement, lagunas2021block}, loss curvature~\\citep{kurtic2022optimal}, compute-intensive pre-training setup~\\citep{zafrir2021prune} are built on the premise that the simple magnitude-based GMP method falters when applied to BERT-pruning. In this work, contrary to what is currently available in the literature, we present empirical evidence that GMP, when tuned carefully, can produce very accurate sparse models which are competitive or even better than most state-of-the-art pruning techniques across both regimes (upstream and downstream). 
As can be seen from Figure \\ref{fig:overview} and our later results, we massively improve upon existing GMP-based pruning baselines, in some cases by even more than \\textbf{20 accuracy points}.\n\n\\section{Competitive Gradual Magnitude Pruning (GMP$\\scriptstyle\\bigstar$\\,)}\n\\paragraph{Experimental setup.}\nWe focus our attention on the standard $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ model, composed of embedding and encoder layers, which has approximately 110M parameters. All methods focus on pruning among approximately 85M weights of encoder layers and report sparsities with respect to that number. We evaluate models on the validation split of the respective dataset, and to improve confidence in the obtained results we perform multiple runs with different seeds and report mean performance.\n\\subsection{Downstream pruning}\nFollowing the literature, we consider three popular tasks: question-answering SQuADv1.1~\\citep{rajpurkar2016squad}, recognition of textual entailment MNLI ~\\citep{williams2017broad}, and duplicate question detection QQP~\\citep{iyer2017first}. Now, we reflect upon the most important constituents of the gradual pruning framework that enabled us to attain massive improvements.\n\n\\paragraph{Sparsity schedule.}\nIn all of our gradual runs, there is no pruning during the first two and the last two epochs. The former fine-tunes the pre-trained model, and the latter fine-tunes the sparse model with the fixed mask. In between the two, GMP$\\scriptstyle\\bigstar$\\, follows the cubic sparsity scheduler~\\citep{zhu2017prune} and prunes weights with the frequency of ten times per epoch. Motivated by the fact that $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ is heavily overparametrized for downstream tasks, we deviate from the standard cubic schedule by introducing a large first pruning step. This showed to be of a crucial importance when pruning the model to high target sparsities (e.g. 97\\%) as it leaves more time to recover from later pruning steps which are much more difficult. In Table~\\ref{tab:initsparsity_sweep} we report results from an ablation study with respect to the size of the initial step. For convenience, we visualize the sparsity scheduler in Figure~\\ref{fig:lr_and_spars}. Our preliminary experiments showed similar performance between uniform and global sparsity distributions, so we use the former. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/lr_and_sparsity.pdf}\n \\vspace{-0.25in}\n \\caption{Learning rate and sparsity schedules for the proposed gradual pruning framework.}\n \\label{fig:lr_and_spars}\n \\vspace{-0.2in}\n\\end{figure}\n\n\\paragraph{Learning rate schedule.} \nOur goal is to provide a simple baseline setup that works well across wide range of datasets without any additional task-dependent tuning. Currently, papers either report best results following an extensive hyperparameter search for each task, e.g.~\\citet{zafrir2021prune}, or they make use of carefully crafted schedulers for each setup independently which may include warm-up phases with and without rewinds~\\citep{sanh2020movement, kurtic2022optimal}. This may lead to high specialization to the target task\/model, which is undesirable in practice and makes it hard to distinguish benefits from the pruning technique itself. We propose to simply \\textit{replicate} the standard 2-epoch fine-tuning schedule~\\citep{devlin2018bert} by a certain factor and intertwine it with pruning steps. 
For a fair comparison with~\\citet{sanh2020movement} we replicate it by a factor of 5, reproducing their 10-epoch setup. And for a fair comparison with~\\citet{chen2020lottery} we replicate it by a factor of 15, reproducing their 30-epoch setup. For convenience, we visualize the learning rate schedule in Figure~\\ref{fig:lr_and_spars}. In appendix~\\ref{app:failed_lr}, we describe results with other schedulers that didn't work.\n\n\\paragraph{Knowledge Distillation (KD) Hardness.} We leverage KD~\\cite{hinton2015distilling} of outputs from a fine-tuned dense teacher. KD is a standard practice when pruning, e.g.~\\cite{sanh2020movement, zafrir2021prune, xu2021rethinking}. The loss function is formulated as a linear combination of the standard loss associated with the specific task (e.g. cross-entropy for classification $\\mathcal{L}_{CE}$) and the Kullback-Leibler divergence ($\\mathcal{L}_{KL}$) between output distributions of the dense (teacher) model and the sparse (student) model in the form: $\\mathcal{L}= (1-h) \\mathcal{L}_{CE} + h \\mathcal{L}_{KL}$. The ratio between the two is controlled with the \\textit{hardness} hyperparameter $h$. To determine its optimal value at high sparsities we run an ablation study reported in Table \\ref{tab:hardness_sweep}, and adopt the hardness $h=1$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/output_dist_x2.pdf}\n \\vspace{-0.2in}\n \\caption{Teacher's output distribution at commonly used temperatures ${T \\in \\{1.0, 2.0\\}}$ and the proposed $T = 5.5$.}\n \\label{fig:logits_dist}\n \\vspace{-0.25in}\n\\end{figure}\n\n\\paragraph{Knowledge Distillation Temperature.} The temperature \\textit{T} is an additional KD-hyperparameter that requires proper tuning, as it controls the ``softness'' of the output distribution. \nIn the pruning literature, it is standard to use the ``stronger'' $T = 1 $ or $T = 2$ values \\citep{xu2021rethinking, zafrir2021prune, sanh2020movement, lagunas2021block, kurtic2022optimal}; we revisit this by visualizing teacher's output distributions to get an insight into what the sparse student is learning. In Figure~\\ref{fig:logits_dist}, we visualize generated distributions for randomly picked samples from the SQuADv1.1 task softened with three values of the temperature. As can be seen, teacher's high confidence in predicting the correct class at the commonly used temperatures $T \\in \\{1.0 , 2.0 \\}$ makes the knowledge distillation almost obsolete. Motivated by this observation, we run an ablation study for many higher temperatures and report a fraction of results in Table~\\ref{tab:temperature_sweep}. Given the results, we adopt the temperature $T = 5.5$.\n\n\\subsubsection{GMP$\\scriptstyle\\bigstar$\\, vs. other GMP-based baselines}\nDue to space constraints, we aggregate all the previously analyzed improvements in a \\textit{downstream pruning recipe} and present it in detail in Appendix~\\ref{app:down_recipe}. We compare our optimized GMP$\\scriptstyle\\bigstar$\\, with other GMP results reported in the pruning literature. For a fair comparison, we consider both setups, 10 and 30-epoch. In the 10-epoch setup, we compare against the GMP baselines reported in \\citet{sanh2020movement} and refer to them as GMP$_{\\small{\\textrm{MvP}}}$\\,. In the 30-epoch setup, we compare against the best reported results in \\citet{chen2020lottery}, obtained either via GMP or via Lottery Ticket (LTH) approach, and refer to them as GMP$_{\\small{\\textrm{LTH}}}$\\,. 
As can be seen from the Table~\\ref{tab:gmp_downstream}, our GMP$\\scriptstyle\\bigstar$\\, remarkably outperforms all other results; in some cases the improvements are more than \\textbf{20 points}!\n\n\\begin{table}[t]\n \\caption{Downstream pruning comparison of GMP$\\scriptstyle\\bigstar$\\, with other GMP-based baselines.}\n \\label{tab:gmp_downstream}\n \\centering\n \\small{\n \\begin{tabular}{lccccc}\n \\toprule\n \\multirow{2}{*}{Method} & \\multirow{2}{*}{Spars.} & \\multirow{2}{*}{Ep.} & SQuAD & MNLI & QQP \\\\\n & & & F1 & m-acc & acc \\\\\n \\midrule \n $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ & 0\\% & & 88.5 & 84.5 & 91.1 \\\\\n \\midrule\n GMP$_{\\small{\\textrm{MvP}}}$\\, & \\multirow{2}{*}{90\\%} & \\multirow{2}{*}{10} & 80.1 & 78.3 & 79.8 \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & & \\textbf{86.7} & \\textbf{81.9} & \\textbf{90.6} \\\\\n \\midrule \n GMP$_{\\small{\\textrm{MvP}}}$\\, & \\multirow{2}{*}{97\\%} & \\multirow{2}{*}{10} & 59.6 & 69.4 & 72.4 \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & & \\textbf{81.3} & \\textbf{79.1} & \\textbf{89.7} \\\\\n \\midrule[1pt]\n GMP$_{\\small{\\textrm{LTH}}}$\\, & \\multirow{2}{*}{90\\%} & \\multirow{2}{*}{30} & 68.0 & 75.0 & 90.0 \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & & \\textbf{87.9} & \\textbf{82.7} & \\textbf{90.8} \\\\\n \\midrule \n GMP$\\scriptstyle\\bigstar$\\, & 97\\% & 30 & 85.4 & 80.9 & 90.6 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\vspace{-0.15in}\n\\end{table}\n\n\\comment{ \n\\begin{table}\n \\caption{Comparison of GMP$\\scriptstyle\\bigstar$\\, in pruning during fine-tuning (downstream) setup against GMP baselines from Movement Pruning (GMP$_{\\small{\\textrm{MvP}}}$\\,) \\cite{sanh2020movement} and Lottery Tickets (GMP$_{\\small{\\textrm{LTH}}}$\\,) \\cite{chen2020lottery}. For GMP$\\scriptstyle\\bigstar$\\,, we report mean performance from three runs with different seeds.}\n \\label{tab:gmp_downstream}\n \\centering\n \\small{\n \\begin{tabular}{lcccccccc}\n \\toprule\n \\multirow{2}{*}{Method} & \\multirow{2}{*}{Sparsity} & \\multirow{2}{*}{Epochs} & \\multicolumn{2}{c}{SQuAD} & \\multicolumn{2}{c}{MNLI} & \\multicolumn{2}{c}{QQP} \\\\\n & & & F1 & EM & m-acc & mm-acc & acc & F1 \\\\\n \\midrule \n $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ & 0\\% & & 88.5 & 81.4 & 84.5 & 85.0 & 91.1 & 88.0 \\\\\n \\midrule\n GMP$_{\\small{\\textrm{MvP}}}$\\, & \\multirow{2}{*}{90\\%} & \\multirow{2}{*}{10} & 80.1 & 70.2 & 78.3 & 79.3 & 79.8 & 65.0 \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & & \\textbf{86.7} & \\textbf{78.7} & \\textbf{81.9} & \\textbf{82.1} & \\textbf{90.6} & \\textbf{87.4} \\\\\n \\midrule \n GMP$_{\\small{\\textrm{MvP}}}$\\, & \\multirow{2}{*}{97\\%} & \\multirow{2}{*}{10} & 59.6 & 45.5 & 69.4 & 70.6 & 72.4 & 57.8 \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & & \\textbf{81.3} & \\textbf{71.3} & \\textbf{79.1} & \\textbf{79.6} & \\textbf{89.7} & \\textbf{86.1} \\\\\n \\midrule[1pt]\n GMP$_{\\small{\\textrm{LTH}}}$\\, & \\multirow{2}{*}{90\\%} & \\multirow{2}{*}{30} & 68.0 & - & 75.0 & - & 90.0 & - \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & & \\textbf{87.9} & \\textbf{80.4} & \\textbf{82.7} & \\textbf{83.2} & \\textbf{90.8} & \\textbf{87.7} \\\\\n \\midrule \n GMP$\\scriptstyle\\bigstar$\\, & 97\\% & 30 & 85.4 & 77.1 & 80.9 & 81.2 & 90.6 & 87.3 \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n}\n\n\\subsubsection{GMP$\\scriptstyle\\bigstar$\\, vs. 
advanced pruning techniques}\nNow, we wish to compare our GMP$\\scriptstyle\\bigstar$\\, with methods that rely on higher-order information to make pruning decisions, like gradients in MvP \\cite{sanh2020movement} and the loss curvature in oBERT \\cite{kurtic2022optimal}. Both of these impose higher computational overhead compared to the magnitude-based pruning, but we still put our results in the context with respect to theirs to fully grasp the scope of improvements introduced by careful optimizations of GMP. As can be seen from results in Table~\\ref{tab:high_downstream}, GMP$\\scriptstyle\\bigstar$\\, is able to improve upon the performance of Movement Pruning in 4 out of 6 analyzed configurations, but unfortunately can't match the performance of the oBERT method. In addition to these comparisons, we make use of the open-source implementation of oBERT, current state-of-the-art BERT-pruning method, and run it with optimized hyperparameters from GMP$\\scriptstyle\\bigstar$\\, on the SQuADv1.1 task. We refer to these results as oBERT$\\scriptstyle\\bigstar$\\,. As can be seen from the Table \\ref{tab:high_downstream}, even the very competitive oBERT results benefit from the GMP$\\scriptstyle\\bigstar$\\, setup. For all GMP$\\scriptstyle\\bigstar$\\, runs, we report mean performance across three runs with different seeds, and additional metrics in Tables \\ref{tab:gmp_downstream2} and \\ref{tab:high_downstream2}.\n\n\\begin{table}[t]\n \\caption{Downstream pruning comparison of GMP$\\scriptstyle\\bigstar$\\, with advanced pruning techniques.}\n \\label{tab:high_downstream}\n \\centering\n \\small{\n \\begin{tabular}{lccccc}\n \\toprule\n \\multirow{2}{*}{Method} & \\multirow{2}{*}{Spars.} & \\multirow{2}{*}{Ep.} & SQuAD & MNLI & QQP \\\\\n & & & F1 & m-acc & acc \\\\\n \\midrule \n $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ & 0\\% & & 88.5 & 84.5 & 91.1 \\\\\n \\midrule\n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{2}{*}{90\\%} & \\multirow{2}{*}{10} & \\textbf{86.7} & \\textbf{81.9} & \\textbf{90.6} \\\\\n MvP & & & 84.9 & 81.2 & 90.2 \\\\\n \\midrule \n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{2}{*}{97\\%} & \\multirow{2}{*}{10} & 81.3 & 79.1 & \\textbf{89.7} \\\\\n MvP & & & \\textbf{82.3} & \\textbf{79.5} & 89.1 \\\\\n \\midrule[1pt]\n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{3}{*}{90\\%} & \\multirow{3}{*}{30} & 87.9 & 82.7 & 90.8 \\\\\n oBERT & & & 88.3 & \\textbf{83.8} & \\textbf{91.4} \\\\\n oBERT$\\scriptstyle\\bigstar$ & & & \\textbf{88.6} & & \\\\\n \\midrule \n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{3}{*}{97\\%} & \\multirow{3}{*}{30} & 85.4 & 80.9 & 90.6 \\\\\n oBERT & & & 86.0 & \\textbf{81.8} & \\textbf{90.9} \\\\\n oBERT$\\scriptstyle\\bigstar$ & & & \\textbf{86.6} & & \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.15in}\n }\n\\end{table}\n\\comment{\n\\begin{table}\n \\caption{Comparison of GMP$\\scriptstyle\\bigstar$\\, in pruning during fine-tuning (downstream) setup against more advanced techniques, Movement Pruning (MvP) \\cite{sanh2020movement} and The Optimal BERT Surgeon (oBERT) \\cite{kurtic2022optimal}. oBERT$\\scriptstyle\\bigstar$\\, stands for results we obtained by running the open-sourced oBERT implementation in the GMP$\\scriptstyle\\bigstar$\\, setup. 
For GMP$\\scriptstyle\\bigstar$\\,, we report mean performance from three runs with different seeds.}\n \\label{tab:high_downstream}\n \\centering\n \\small{\n \\begin{tabular}{lcccccccc}\n \\toprule\n \\multirow{2}{*}{Method} & \\multirow{2}{*}{Sparsity} & \\multirow{2}{*}{Epochs} & \\multicolumn{2}{c}{SQuAD} & \\multicolumn{2}{c}{MNLI} & \\multicolumn{2}{c}{QQP} \\\\\n & & & F1 & EM & m-acc & mm-acc & acc & F1 \\\\\n \\midrule \n $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ & 0\\% & & 88.5 & 81.4 & 84.5 & 85.0 & 91.1 & 88.0 \\\\\n \\midrule\n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{2}{*}{90\\%} & \\multirow{2}{*}{10} & \\textbf{86.7} & \\textbf{78.7} & \\textbf{81.9} & \\textbf{82.1} & \\textbf{90.6} & \\textbf{87.4} \\\\\n MvP & & & 84.9 & 76.6 & 81.2 & 81.8 & 90.2 & 86.8 \\\\\n \\midrule \n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{2}{*}{97\\%} & \\multirow{2}{*}{10} & 81.3 & 71.3 & 79.1 & 79.6 & \\textbf{89.7} & \\textbf{86.1} \\\\\n MvP & & & \\textbf{82.3} & \\textbf{72.7} & \\textbf{79.5} & \\textbf{80.1} & 89.1 & 85.5 \\\\\n \\midrule[1pt]\n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{3}{*}{90\\%} & \\multirow{3}{*}{30} & 87.9 & 80.4 & 82.7 & 83.2 & 90.8 & 87.7 \\\\\n oBERT & & & 88.3 & 81.1 & \\textbf{83.8} & \\textbf{84.4} & \\textbf{91.4} & \\textbf{88.3} \\\\\n oBERT$\\scriptstyle\\bigstar$ & & & \\textbf{88.6} & \\textbf{81.3} & & & & \\\\\n \\midrule \n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{3}{*}{97\\%} & \\multirow{3}{*}{30} & 85.4 & 77.1 & 80.9 & 81.2 & 90.6 & 87.3 \\\\\n oBERT & & & 86.0 & 78.1 & \\textbf{81.8} & \\textbf{82.0} & \\textbf{90.8} & \\textbf{87.7} \\\\\n oBERT$\\scriptstyle\\bigstar$ & & & \\textbf{86.6} & \\textbf{78.8} & & & & \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n}\n\\subsection{Upstream pruning}\n\nTo validate the optimized GMP$\\scriptstyle\\bigstar$\\, setup introduced in the previous section, we apply it now to the pre-training phase of LLMs. This is a two-stage process. In the first stage, the $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ model is pruned during pre-training and then, in the second stage, the remaining weights are fine-tuned with the fixed mask on a specific downstream task to evaluate performance. Given the high costs of experimenting in the pre-training phase, we use the dense teacher open-sourced by~\\citet{kurtic2022optimal}. Due to the space constraints, we summarize all hyperparameters in an \\textit{upstream pruning recipe} and present it in detail in Appendix~\\ref{app:up_recipe}. In Table \\ref{tab:upstream} we present results obtained in this setup and compare against other methods that are utilizing the same approach. More specifically, we compare against the Lottery Ticket~\\citep{chen2020lottery}, Prune OFA~\\citep{zafrir2021prune}, and The Optimal BERT Surgeon (oBERT)~\\citep{kurtic2022optimal}. In addition to this, we report the GMP baselines obtained in the Prune OFA work and refer to them as GMP$_{\\small{\\textrm{Prune OFA}}}$\\,. As can be seen from the Table \\ref{tab:upstream}, the GMP$\\scriptstyle\\bigstar$\\, significantly outperforms GMP$_{\\small{\\textrm{Prune OFA}}}$\\,, Lottery Tickets and even the Prune OFA, and comes really close to the performance of oBERT. For all GMP$\\scriptstyle\\bigstar$\\, runs, we report mean performance across four runs with different seeds. 
These results confirm findings from the previous section and establish the GMP$\\scriptstyle\\bigstar$\\, as an extremely competitive baseline in all regimes.\n\n\\begin{table}\n \\caption{Upstream pruning comparison of GMP$\\scriptstyle\\bigstar$\\, with other GMP-based baselines and more advanced pruning techniques.}\n \\label{tab:upstream}\n \\centering\n \\small{\n \\begin{tabular}{lcccc}\n \\toprule\n \\multirow{2}{*}{Method} & \\multirow{2}{*}{Sparsity} & SQuAD & MNLI & QQP \\\\\n & & F1 & m-acc & acc \\\\\n \\midrule \n $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ & 0\\% & 88.5 & 84.5 & 91.1 \\\\\n \\midrule\n GMP$_{\\small{\\textrm{Prune OFA}}}$\\, & 85\\% & 86.2 & 82.5 & 90.9 \\\\\n \\midrule\n Lottery Ticket & \\multirow{4}{*}{90\\%} & 68.0 & 75.0 & 90.0 \\\\\n Prune OFA & & 87.3 & 81.5 & 90.9 \\\\\n GMP$\\scriptstyle\\bigstar$\\, & & 88.2 & 83.2 & 90.8 \\\\\n oBERT & & \\textbf{88.5} & \\textbf{83.4} & \\textbf{91.0} \\\\\n \\midrule\n GMP$\\scriptstyle\\bigstar$\\, & \\multirow{2}{*}{97\\%} & 84.7 & 80.3 & 89.8 \\\\\n oBERT & & \\textbf{84.9} & \\textbf{80.9} & \\textbf{90.3} \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-0.15in}\n }\n\\end{table}\n\n\\section{Conclusion}\nIn this work, we presented a set of updates to the standard gradual pruning setup for BERT models which enabled us to achieve very competitive results with the simple magnitude pruner. These results outperformed, by significant margins, all magnitude-based results currently available in the pruning literature which have been used as baselines for development and benchmarking of the new and more advanced pruning techniques. We hope that these \\textit{new baselines} will help the community to start off from a competitive set of results when compressing large language models. Moreover, our GMP$\\scriptstyle\\bigstar$\\, has even outperformed some results obtained with more advanced and computationally heavier pruning techniques. At this point, we would like to {strongly emphasize} that these results should not be interpreted as evidence that magnitude pruning is better than other more advanced methods. Rather, they should be interpreted as evidence that their current results could significantly benefit from updates of the gradual setup presented on the GMP$\\scriptstyle\\bigstar$\\, use-case. To support this claim, we ran the state-of-the-art oBERT pruner with the GMP$\\scriptstyle\\bigstar$\\, setup and managed to improve its results by non-trivial margins.\n\n\\section{Limitations}\nAs any academic study, our work is not without its limitations. Following the literature, our extensive empirical studies were conducted only on the standard $\\textrm{BERT}_{\\tiny{\\textrm{BASE}}}\\,$ model, giving us opportunity to compare against a vast amount of different pruning techniques. Throughout the literature, this model emerged as a consistent benchmark for unstructured pruning methods. However, the current results don't directly imply that our findings will be generally applicable to other language models as well. To partially fill in this uncertainty gap, we conduct a few experiments on the three times larger $\\textrm{BERT}_{\\tiny{\\textrm{LARGE}}}\\,$ model and report results in the Appendix~\\ref{app:additional_models}. Another limitation which we aim to remove in future work is the focus on fine-grained unstructured sparsity type, and explore other variants such as semi-structured and structured pruning. 
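As a companion to the recipes in Appendices~\ref{app:down_recipe} and~\ref{app:up_recipe}, the two task-independent ingredients of our setup have simple closed forms. The sketch below is ours and the names are illustrative, not the exact pipeline code:

\begin{verbatim}
import torch.nn.functional as F

def cubic_sparsity(step, total_steps, s_init=0.70, s_final=0.97):
    # Cubic schedule (Zhu & Gupta, 2017) with a large first step s_init.
    t = min(step / float(total_steps), 1.0)
    return s_final + (s_init - s_final) * (1.0 - t) ** 3

def distillation_loss(student_logits, teacher_logits, labels, h=1.0, T=5.5):
    # L = (1 - h) * CE + h * KL, with logits softened by temperature T.
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean")
    return (1.0 - h) * ce + h * kl
\end{verbatim}

With $h=1$ the cross-entropy term vanishes, matching the hardness we adopt; the conventional $T^2$ gradient rescaling can be folded in if desired.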
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\n\\cite{Broadbent89} identified a feature near G357.2$-$0.2 (G357.1$-$00.2\nin some references) as a candidate supernova remnant (SNR) because\nits $S_{\\rm 60\\,\\mu m}\/S_{\\rm 6\\,cm}$ flux-density ratio is lower\nthan that of Galactic H\\textsc{ii} regions and it is resolved at\n6~cm with the Parkes telescope 4\\arcmin\\ beam. \\cite{Gray94} added\nthe 1\\arcmin\\ resolution 843\\,MHz Molonglo Observatory Synthesis\nTelescope image clearly resolving a sinuous structure for the first\ntime and indicating a nonthermal radio spectrum. There is a diffuse\nhalo surrounding the fine scale structure.\n\n\\cite{Gray96} was the first to present and discuss high-resolution\n(13\\arcsec) Very Large Array images of this nebula. The author\nalso noted the high polarization of the filaments at C band (5\\,GHz)\nand the low polarization at L band (1.5\\,GHz), indicating depolarization\nand rotation measure $\\mbox{RM}\\sim2000$\\,rad\\,m$^{-2}$. On the\nbasis of the unusual morphology, \\cite{Gray96} deprecated the SNR\ninterpretation and mentioned a variety of possibilities, including\na pulsar wind nebula (PWN) and one more example of peculiar nonthermal\nphenomena near the Galactic center \\citep[e.g., the ``Tornado''\nonly $0\\fdg5$ away from G357.2$-$0.2,][]{Gaensler2003}.\n\n\\cite{Gray94} and \\cite{Gray96} note that the pulsar B1736$-$31 is\nin the vicinity of G357.2$-$0.2, in projection. Its location outside\nthe nebula precludes any connection to a PWN interpretation, and\nits spin-down age of 0.5\\,Myr \\citep{Clifton92} also makes it too\nold to still have an associated visible SNR.\n\nH\\textsc{i} observations of G357.2$-$0.2 by \\cite{Roy2002} give a\ndistance of at least 6\\,kpc and place it either in front of, or\npartly embedded in, a cloud believed to be beyond the Galactic\ncenter; they conclude that it is Galactic.\n\nWe observed G357.2$-$0.2 with the MeerKAT radio telescope\\footnote{Operated\nby the South African Radio Astronomy Observatory (SARAO).} in the\nUHF and L bands (0.56--1.68\\,GHz) with 7\\arcsec\\ resolution and\nwith the eROSITA X-ray instrument. The observations and analysis\nare described in Section \\ref{ObsAnalysis}, the imaging results are\npresented in Section \\ref{Results}, and a discussion of these results\nis in Section \\ref{Discussion} followed by a summary in Section\n\\ref{Summary}.\n\n\n\\section{Observations and Data Analysis \\label{ObsAnalysis}}\n\n\\subsection{MeerKAT Observations, Analysis, and Imaging}\n\nWe observed G357.2$-$0.2 in both ``L'' (886--1682\\,MHz) and UHF\n(563--1068\\,MHz) bands with the 64 antenna MeerKAT array pointed\nat J2000 $\\mbox{R.A.} = 17^{\\mathrm h}39^{\\mathrm m}39\\fs82$,\n$\\mbox{Dec.} = -31\\arcdeg27\\arcmin47\\farcs0$ (G357.176$-$0.235).\nThe integration time was 8\\,s, and each band was divided into 4096\nspectral channels.\n\nThe observations were in two sessions, L band on 2020 July 21 for\n8 hours with 59 antennas and UHF on 2020 August 18 for 8 hours with\n53 antennas. PKS~B1934$-$638 was used as the flux density, band-pass\nand delay calibrator, 3C~286 as the polarization calibrator, and\nJ1830$-$3602 as the astrometric calibrator. The observing sequence\ncycled between J1830$-$3602 (2 minutes) and G357.2$-$0.2 (20 minutes)\nwith a flux\/band-pass calibrator (10 minutes) every 2 hours. 
Our\nflux-density scale is based on the \\cite{Reynolds94} spectrum of\nPKS~B1934$-$638: \\begin{equation}\n \\log(S) = -30.7667 + 26.4908 \\bigl(\\log\\nu\\bigr) - 7.0977\n \\bigl(\\log\\nu\\bigr)^2 + 0.605334 \\bigl(\\log\\nu\\bigr)^3,\n\\label{eq:pks1934}\n\\end{equation} where $S$ is the flux density in\nJy and $\\nu$ is the frequency in MHz (evaluated numerically in the\nshort sketch in Section~\\ref{StokesIimaging}).\n\n\n\\subsubsection{Analysis}\n\nData flagging and calibration were performed as described for L-band\ndata in \\cite{DEEP2} and \\cite{XGalaxy}. The UHF session was\ncalibrated independently, and we have adopted the L-band procedure\nfor the UHF data with some band-specific modifications described\nbelow.\n\nFirst, we trimmed 144 channels from each edge of the UHF band to\naccount for the roll-off in receiver response, leaving a frequency\nrange 563--1069\\,MHz. We then used a UHF-specific mask to identify\nfrequency ranges that contain persistent and strong radio frequency\ninterference (RFI). This covers only 934--960\\,MHz, where cellular\ncommunication signals are present. After combining our empirical\nmask with the editing steps described in \\cite{DEEP2} during\ncalibration, $\\sim 10\\%$ of the target data were flagged from the\ntrimmed UHF band.\n\nThe data were split into 8 sub-bands of equal frequency width, and\nthese were calibrated independently. We used a UHF sky model\nextrapolated from the L-band model of the PKS~B1934$-$638 field,\ncontaining the power-law spectra of sources appearing brighter than\n1\\,mJy\\,beam$^{-1}$ at 1.3\\,GHz within $1^\\circ$ of PKS~B1934$-$638.\nThe flux density of PKS~B1934$-$638 in each sub-band was obtained\nfrom equation~(\\ref{eq:pks1934}), and used to derive the amplitude\nspectrum of J1830$-$3602. The amplitudes of the gains measured from\nJ1830$-$3602 were scaled by a smooth model fitted to its measured\nflux densities in each sub-band, and the scaled amplitude and phase\ncorrections were interpolated in time and applied to the target\ndata. The data were reweighted using the root mean square (RMS) of\nthe observed visibilities in 10-minute intervals.\n\nThe above extrapolation does not account for sources towards the\nedge of the wider UHF field of view (FoV). However, we have compared\nthe above analysis to one that uses a preliminary model of the full\nUHF FoV of PKS~B1934$-$638, and find no appreciable difference in\nthe derived flux scales above 700\\,MHz. Below this frequency our\nderived flux densities are somewhat (up to 10--20\\%) overestimated.\n\nImaging used the wide-band, wide-field imager MFImage in the\n\\emph{Obit}\npackage\\footnote{\\url{http:\/\/www.cv.nrao.edu\/~bcotton\/Obit.html}}\n\\citep{Obit} as described in \\cite{DEEP2} and \\cite{XGalaxy}.\nMFImage \\citep[described in detail in][]{SourceSize} uses faceting\nto account for the non-coplanarity of the MeerKAT baselines, and\nmultiple frequency bins which are imaged independently and CLEANed\njointly to account for frequency variations in the sky and the\nantenna pattern. Imaging used Robust weighting ($-1.5$ in\n\\emph{AIPS}\/\\emph{Obit} usage) to down-weight the central condensation of\nantennas in the array and improve the resolution.\n\n\n\\subsubsection{Total-Intensity Imaging}\\label{StokesIimaging}\n\nThe data in the two frequency bands were imaged independently. With\nthe large bandwidth covered by the data, the shortest baseline\nlength in wavelengths varied by a factor of three between the highest\nand lowest frequencies in the two bands. 
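\n\nAs a check on the adopted flux-density scale, equation~(\\ref{eq:pks1934})\nis straightforward to evaluate numerically. The following minimal\nPython sketch (ours, for illustration only; the function name and the\n1000\\,MHz test frequency are arbitrary choices) reproduces the model\nflux density of PKS~B1934$-$638:\n\\begin{verbatim}\nimport math\n\ndef pks1934_flux_jy(nu_mhz):\n    # Reynolds (1994) polynomial: S in Jy, nu in MHz.\n    x = math.log10(nu_mhz)\n    log_s = (-30.7667 + 26.4908 * x\n             - 7.0977 * x**2 + 0.605334 * x**3)\n    return 10.0 ** log_s\n\nprint(pks1934_flux_jy(1000.0))  # ~14.8 Jy at 1000 MHz\n\\end{verbatim}\nThe factor-of-three variation in the shortest baseline length noted\nabove has direct consequences for how extended emission is recovered\nacross the band.\n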
Due to the large-scale\nemission in the field, if uncorrected, this will lead to a variable\nfraction of the total intensity being recovered as a function of\nfrequency and a frequency-dependent negative bowl around the extended\nemission. This will artificially cause the spectrum to appear steeper\nthan it actually is. In order to counteract this, an inverted Gaussian\ntaper centered at the origin, with a Gaussian $\\sigma$ of 500\nwavelengths, was applied to the weights of the shortest baselines in\nboth the UHF and L-band data. \nThis suppresses emission on scales larger than $\\sim$200\\arcsec,\nsimilar to the spectral index analysis in \\cite{XGalaxy}. \nA multi-resolution CLEAN was used to help recover the very extended\nemission in the field. \n\nThe L-band total-intensity data were imaged to a radius of 1$^\\circ$,\nplus outlier facets to a distance of $1\\fdg5$ to cover sources\nexpected to appear brighter than 1\\,mJy\\,beam$^{-1}$ based on the\nSUMSS catalog at 843\\,MHz \\citep{SUMSS}. Three iterations of\nphase-only self-calibration were applied. The total band-pass was\ndivided into 14 $\\times$ 5\\% fractional bandwidth bands, giving\nunequal widths in frequency. L-band total-intensity imaging used\n366,886 components, stopping at a depth of 45\\,$\\mu$Jy\\,beam$^{-1}$\nwith a total flux density of 23.7\\,Jy; the off-source RMS noise is\n20\\,$\\mu$Jy\\,beam$^{-1}$. The CLEAN restoring beam was an elliptical\nGaussian with FWHM axes $7\\farcs0 \\times 6\\farcs8$ at position angle\n0$^\\circ$.\n\nAt UHF, a field of view with radius $2\\fdg5$ was imaged in 14 $\\times$ 5\\%\nfractional bands with phase self-calibration, using 419,484 components\nto a minimum of 200\\,$\\mu$Jy\\,beam$^{-1}$ and a total flux density\nof 60.9\\,Jy. Outliers were added up to $3\\fdg5$ from the pointing.\nThe off-source RMS was 89\\,$\\mu$Jy\\,beam$^{-1}$. The CLEAN restoring\nbeam was $11\\farcs6 \\times 10\\farcs3$ at position angle $-20^\\circ$.\n\nFor both L band and UHF, the 8\\,s integrations and sub-bands used\nintroduce negligible time and bandwidth smearing ($<2\\arcsec$)\nacross the full imaged FoVs.\n\n\n\\subsubsection{Deconvolution of Stokes Q and U}\n\nOnly the L-band data had adequate polarization calibration and were\nimaged in Stokes Q and U. In order to recover the polarimetry in\nthe presence of the large Faraday rotation of polarized emission,\na relatively high spectral resolution was used for Stokes Q and U\nimaging --- a 1\\% fractional bandwidth, resulting in 68 sub-bands\nacross the band. The deconvolution also used the joint polarization\nCLEAN described in \\cite{Condon2021}. Linear polarization imaging\nused 50,000 CLEAN components to a depth of 54\\,$\\mu$Jy\\,beam$^{-1}$,\nresulting in an off-source RMS of 10\\,$\\mu$Jy\\,beam$^{-1}$.\n\n\n\\subsection{eROSITA Observations and Analysis} \\label{sec:eROSITA}\n\nThe X-ray instrument eROSITA \\citep[extended R\\\"ontgen Survey Imaging\nTelescope Array,][]{Predehl2020a} is one of two instruments on the\nSpectrum R\\\"ontgen-Gamma observatory \\citep{Sunyaev2021}. It consists\nof seven aligned X-ray telescopes (TM1--TM7), which share an FoV of\n1\\degr. All telescopes observe the same sky region simultaneously\nin the 0.2--8\\,keV band-pass. In survey mode, the instrument's\nangular resolution is $26\\arcsec$. 
eROSITA started its first all-sky\nsurvey on 2019 December 13, with eight such surveys planned over 4\nyears \\citep[see][]{Predehl2020a}.\n\nThe X-ray data we report here were taken during the first four\neROSITA surveys, eRASS:4. By the end of 2021, the position of\nG357.2$-$0.2 had been observed with a total of 27 telescope passages\nduring four epochs, 2020 March 27--28, 2020 September 28--30, 2021\nMarch 24--25, and 2021 September 24--25, resulting in an unvignetted\naverage exposure time of 1048\\,s.\n\nThe data used in our analysis were processed by the eROSITA Standard\nAnalysis Software System (\\emph{eSASS}) pipeline and have the\nprocessing number $\\#946$. For the data analysis we used \\emph{eSASS}\nversion 201009\\footnote{See \\url{https:\/\/erosita.mpe.mpg.de\/}}.\nWithin the \\emph{eSASS} pipeline, X-ray data of the eRASS sky are\ndivided into 4700 partly overlapping sky tiles of $3\\fdg6 \\times\n3\\fdg6$ each. These are numbered using six digits, three each for\nR.A. and Dec., encoding the sky tile center position in degrees.\nThe majority of G357.2$-$0.2 falls into the eRASS tiles 266120 and\n266123, with the surrounding tile 263123 also required for complete\ncoverage of G357.2$-$0.2.\n\n\n\\section{Results\\label{Results}}\n\nThe MeerKAT L-band total-intensity image of G357.2$-$0.2 is shown\nin Figure~\\ref{Heartworm_L}. \nThe imaged region most prominently contains a complex of filamentary\n(worm-like) structures spanning $\\sim 8\\arcmin$, some of which appear\nto terminate in brighter knots; for the first time, some of these\nfilaments are resolved into striking double tails\n(Figure~\\ref{Worm_L}). \nNo overall organization is apparent, and this fine-scale structure,\nat least in projection, is embedded in larger-scale, low-brightness\nemission which contains a large amount of flux density. \n\n\\chg{Since the images in Figures~\\ref{Heartworm_L} and \\ref{Worm_L}\nused only the L-band data and explicitly removed the shorter\nbaselines, the most extended emission is attenuated.\nIn order to bring out this extended emission, the UHF data were\nreimaged with enhanced brightness sensitivity ($\\mbox{Robust}=-0.75$) and\nincluding the shorter baselines. \nThis is shown in Figure~\\ref{Heartworm_LoRes}, emphasizing the\nlower-brightness regions.\n}\n\nSome of \\chg{the} larger-scale emission appears to be organized in a\npartial shell-like heart-shaped feature spanning $\\sim 18\\arcmin$,\nreported here for the first time. \nOn the basis of this combined morphology, we have nicknamed these\nfeatures the ``Heartworm'' Nebula. \n\n\\begin{figure*}\n\\plotone{fig01.eps}\n\\caption{Reverse gray-scale of the L-band \\chg{(886--1681\\,MHz)} Stokes~I image of G357.2$-$0.2\n(the Heartworm) in double log stretch with a scale-bar at the top\nlabeled in mJy\\,beam$^{-1}$. The resolution is shown in the box\nat lower left. This rendering optimizes the display of the larger-scale,\nlow-brightness emission, including the shell-like heart-shaped\nfeature spanning $\\sim 18\\arcmin$ northwards from (R.A., Dec.)\n$\\approx$ ($17^{\\mathrm h}39^{\\mathrm m}15^{\\mathrm s}$,\n$-31\\arcdeg38\\arcmin$). 
The central fine-scale features, considerably\nsaturated in this view, are best discerned in Figure~\\ref{Worm_L}.\n} \n\\label{Heartworm_L}\n\\end{figure*}\n\n\\begin{figure*}\n\\plotone{fig02.eps}\n\\caption{Zoom in on Figure~\\ref{Heartworm_L}, with a different\ncontrast (reverse gray-scale in double log stretch with scale-bar\nat the top labeled in mJy\\,beam$^{-1}$), to highlight the fine-scale\nfeatures of G357.2$-$0.2. The resolution is shown in the box at\nlower left. Prominent knots of emission are labeled (see\nTable~\\ref{tab:knots}). The bright point source at (R.A., Dec.) =\n($17^{\\mathrm h}39^{\\mathrm m}24^{\\mathrm s}$,\n$-31\\arcdeg31\\arcmin12\\arcsec$) is the pulsar PSR~B1736$-$31 =\nJ1739$-$3131.\n} \n\\label{Worm_L}\n\\end{figure*}\n\n\\begin{figure*}\n\\plotone{fig03.eps}\n\\caption{\\chg{The UHF-band (563--1068\\,MHz) enhanced-brightness-sensitivity\nimage of the Heartworm in reverse \ngray-scale with double log stretch; a scale-bar is shown at the top labeled in\nmJy\\,beam$^{-1}$.\nThe resolution is $12\\farcs4\\times11\\farcs8$ and is shown in the box at\nlower left.} \n}\n\\label{Heartworm_LoRes}\n\\end{figure*}\n\n\n\\subsection{Spectral Index}\n\nThe individual total-intensity frequency-bin images of the UHF and\nL bands were convolved to a common resolution (that of the\nUHF image \\chg{described in Section~\\ref{StokesIimaging}}) and\ninterpolated to the grid of the L-band image. After \nprimary beam correction using the frequency-dependent antenna beam\nshape of \\cite{DEEP2}, a power-law spectrum was fitted in each pixel,\nparameterized by the flux density at 1000\\,MHz, $S_{\\rm 1\\,GHz}$,\nand the spectral index $\\alpha$. The spectral index image is displayed\nin Figure~\\ref{Heartworm_SI}. The northern and western rims of the\nheart are shown in more detail in Figure~\\ref{Heart_SI}.\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig04.eps}\n}\n\\caption{The spectral index of the Heartworm (G357.2$-$0.2). Intensity\nis flux density at 1000\\,MHz with a square-root stretch, and color is\nspectral index as given by the scale-bar at the top. PSR~B1736$-$31,\nwith a typical steep pulsar spectrum, corresponds to the prominent\nred point. See Figure~\\ref{Heartworm_SI_err} for the corresponding\nerror map.\n} \n\\label{Heartworm_SI}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig05.eps}\n}\n\\caption{Like Figure~\\ref{Heartworm_SI} but emphasizing the northern\nand western rims of the heart.\n} \n\\label{Heart_SI}\n\\end{figure}\n\nThe uncertainty in the spectral index depends on both the signal-to-noise\nratio of a feature across the observed band and any systematics\nsuch as the frequency-dependent ``missing'' flux density from\nstrongly resolved extended emission (see Section~\\ref{StokesIimaging}).\nThe spectral index error image, based only on the statistical\nuncertainty, is displayed in Figure~\\ref{Heartworm_SI_err}.\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig06.eps}\n}\n\\caption{The error map for the spectral index of the Heartworm\n(G357.2$-$0.2) shown in Figure~\\ref{Heartworm_SI}. Color represents\nthe statistical uncertainty on the spectral index as given by the\nscale-bar at the top. 
This is based only on RMS noise, and does\nnot account for systematic errors related to missing flux (see\nSection~\\ref{StokesIimaging}) or calibration.\n} \n\\label{Heartworm_SI_err}\n\\end{figure}\n\n\n\\subsection{Polarimetry}\n\nThe imaging in Stokes Q and U used 68 $\\times$ 1\\% fractional\nband-pass image planes, although many were completely blanked due\nto the editing of RFI. A rotation measure (RM) fit was performed in\neach pixel by a direct search in Faraday space. The test Faraday\nrotation that gives the highest averaged, unwrapped polarized intensity\nwas taken as the Faraday rotation at that pixel, the unwrapped\npolarization angle extrapolated to zero wavelength was taken as the\nintrinsic polarization angle, and the maximum polarized intensity\nwas taken as the polarized intensity in that pixel. This is essentially\ntaking the peak of the Faraday synthesis spectrum \\citep{RMSynthesis}.\n\nFractional polarization ``B'' vectors in the worm are shown in\nFigure~\\ref{Heartworm_PolVec} and the RMs in Figure~\\ref{Heartworm_RM}.\nPolarization was detected only in limited areas, but with moderately\nhigh fractional polarization (20--30\\%), with the magnetic field\nlargely along the linear features, and with large and variable Faraday\nrotation.\nThe rotation measures shown in Figure~\\ref{Heartworm_RM} are much less\nthan the 2000\\,rad\\,m$^{-2}$ at $\\lambda = 6$\\,cm found by \\cite{Gray96},\nsupporting the suggestion in the \\chg{Figure~\\ref{Heartworm_RM}}\ncaption that at L band and \nUHF we are seeing only through gaps in the dense foreground screen. \n\n\\begin{figure}\n\\centerline{\n\\includegraphics[height=3.25in]{fig07.eps}\n}\n\\caption{Total intensity contours of the worm in G357.2$-$0.2, with\nsuperposed red fractional polarization ``B'' vectors from the L-band\ndata. Contours are at 2, 4, 8, 12 and 16 $\\times$ 0.2\\,mJy\\,beam$^{-1}$,\nand a vector length of $10''$ corresponds to 28\\% polarization.\nThe resolution is shown in the box in the lower left corner.\n} \n\\label{Heartworm_PolVec}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig08.eps}\n}\n\\caption{Total intensity contours of the worm in G357.2$-$0.2, with\nsuperposed RM in color with scale-bar at the top in rad\\,m$^{-2}$.\nThe spotty and highly variable nature of the detected Faraday\nrotation suggests that the foreground screen is quite dense and we\nare seeing through gaps. Contours are at 2, 4, 8, 12 and 16 $\\times$\n0.2\\,mJy\\,beam$^{-1}$. The resolution is shown in the box in the\nlower left corner.\n} \n\\label{Heartworm_RM}\n\\end{figure}\n\n\n\\subsection{X-ray Image} \\label{sec:X-ray-image}\n\nFigure~\\ref{fig:eROSITA} depicts a three-color image of G357.2$-$0.2,\ncolor-coded according to the energy of the detected X-ray photons.\nTo produce it, we first created images for the three energy\nbands 0.2--0.7\\,keV, 0.7--1.2\\,keV, and 1.2--2.4\\,keV, using data\nfrom all seven telescopes. The spatial binning in these images was\nset to $26\\arcsec$ to match eROSITA's FoV-averaged FWHM angular\nresolution during survey mode. 
In order to enhance the visibility\nof diffuse emission in the three-color image while leaving point\nsources unsmoothed to the greatest possible extent, we applied the\nadaptive kernel smoothing algorithm of \\cite{2006MNRAS.368...65E}\nwith a Gaussian smoothing kernel of $1.5\\,\\sigma$.\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig09.eps}\n}\n\\caption{Three-color image of G357.2$-$0.2 as seen in the eROSITA\nall-sky surveys eRASS:4. Photons to produce the image were color-coded\naccording to their energy (red for energies 0.2--0.7\\,keV,\ngreen for 0.7--1.2\\,keV, blue for 1.2--2.4\\,keV). An adaptive\nkernel smoothing algorithm was applied to the images in each energy\nband. Radio contour lines (yellow) from the image in\nFigure~\\ref{Heartworm_L} are overlaid to outline G357.2$-$0.2. The\ngreen circle with radius 240\\arcsec\\ encompasses the worm, with a\nfaint unrelated soft point source located towards the southwest,\nindicated by a circle of radius 60\\arcsec.\n}\n\\label{fig:eROSITA}\n\\end{figure}\n\nAs can be seen from Figure~\\ref{fig:eROSITA}, no significant diffuse\nemission was detected from G357.2$-$0.2 during eRASS:4. There is\nsome mixture of very faint soft- (red) to medium-band (green)\nemission overlapping with the radio contour lines within the large\ngreen circle, but its significance is estimated to be only at the\n$\\sim 2.5$--3\\,$\\sigma$ level. Such low-level emission is seen at\nvarious other locations in the wider image of all the merged sky\ntiles, making it very speculative to associate this faint emission\nwith G357.2$-$0.2. The small circle in Figure~\\ref{fig:eROSITA}\nindicates the position of a weak soft point source, which seems\nunrelated to the radio features.\n\n\n\\section{Discussion\\label{Discussion}}\n\nThe H\\textsc{i} observations of \\cite{Roy2002} indicate that the\nworm in G357.2$-$0.2 is at a distance of at least 6\\,kpc, possibly\nbeyond the Galactic center, and likely of Galactic origin. Hereafter,\nfor the purposes of discussion, we assume a distance $d = 8.5$\\,kpc.\nHowever, the worm is quite unlike any known class of Galactic object,\nwith the possible exception of PWNe. The worm has a diameter of\n$\\sim8\\farcm3$, which at the assumed distance is equivalent to\n$\\sim20$\\,pc.\n\n\\subsection{(Not) Star Formation}\n\nInfrared observations of the Heartworm indicate that the bulk of\nthe radio features are unlikely to be related to current star\nformation.\n\nThere are no extended far-infrared (FIR) features visible near the\nworm (Figure~\\ref{3colour}) that could be indicative of thermal\ndust emission. However, the brightest portion of the heart coincides\nwith strong FIR emission and may be an H\\textsc{ii} region unrelated\nto the rest of G357.2$-$0.2 (and hence of unconstrained distance).\nThis interpretation is supported by the flat radio spectrum of this\nregion seen in Figure~\\ref{Heartworm_SI}. A second, smaller clump\nof FIR\/sub-mm emission may likewise be an unrelated H\\textsc{ii}\nregion (Figure~\\ref{3colour}).\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig10.eps}\n}\n\\caption{\nThree-color image of the dust emission around the Heartworm Nebula,\nin Galactic coordinates and with arbitrary units. Red and green in\nthe image are respectively coded to PACS 70\\,$\\mu$m and SPIRE\n250\\,$\\mu$m emission from the \\emph{Herschel} Hi-GAL survey\n\\citep{molinari2010}. Blue is coded to 850\\,$\\mu$m emission from\nthe SCUBA-2 Galactic center survey \\citep{parsons2018}. 
Contours\ntrace the MeerKAT L-band emission from the Heartworm, with levels\nchosen using a power-law fitting scheme to emphasize both low-level\nand bright emission \\citep{thompson2006}. The image shows that there\nis little thermal dust emission associated with the worm, although\nthere is a compact warm dust clump positionally coincident with the\nnorthern end of the heart, indicating a candidate H\\textsc{ii}\nregion, and another such clump and possible H\\textsc{ii} region to\nits west.\n}\n\\label{3colour}\n\\end{figure}\n\nThe strongest argument that the knots in the worm are not H\\textsc{ii}\nregions is based on the observation that they are fairly strong\nradio sources ($S_{\\rm 1\\,GHz}\\sim7$\\,mJy according to\nTable~\\ref{tab:knots}) but are not visible at all ($S_{\\rm 24\\,\\mu\nm} \\ll 5\\,\\sigma$) in the deep \\emph{Spitzer} Enhanced Data Products\n$24\\,\\mu$m image (Figure~\\ref{hworm24}) made with $6\\arcsec$ FWHM\nresolution. The $24\\,\\mu$m flux densities of Galactic H\\textsc{ii}\nregions are typically $30\\times$ their 1.4\\,GHz flux densities\n\\citep{Anderson2014}, and the $5\\,\\sigma$ upper limits for sources\nsmaller than $10\\arcsec$ FWHM at the knot positions are $S_{\\rm\n24\\,\\mu m} \\le 1$\\,mJy. Even $A_V = 50$\\,mag of extinction would\nlower $S_{\\rm 24\\,\\mu m}$ by only a factor of 10 \\citep{Anderson2014},\nso $< 5\\%$ of the knot radio emission is likely to be thermal.\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig11.eps}\n}\n\\caption{\nThe \\emph{Spitzer} Enhanced Data Products $24\\,\\mu$m MIPS image\ncovering the Heartworm, with circles centered on the knot positions\nfrom Table~\\ref{tab:knots} (see also Figure~\\ref{Worm_L}). The\nintensity scale on the right has units MJy\\,sr$^{-1} \\approx$\nmJy\\,beam$^{-1}$.\n}\n\\label{hworm24}\n\\end{figure}\n\nThere is also scant indication of correspondence between the compact\nradio features in Figure~\\ref{Worm_L} and infrared emission at\nshorter wavelengths. Knot \\#3 is the closest to a near-\/mid-infrared\n(NIR\/MIR) source, with its peak 1\\farcs3\\ away from a 3.6 and\n8\\,$\\mu$m GLIMPSE-II source \\citep{Churchwell2009}. This source\nis also detected in the VVV K$_{\\rm s}$ survey but not, as noted\nabove, in MIPSGAL 24\\,$\\mu$m. The \\citet{Downes1986} P-statistic\nfor the possible association of this 8\\,$\\mu$m, 9.259\\,mag\nsource (the probability of finding a brighter IR source closer to\nthe radio peak) is $2.3\\times10^{-3}$. Nominally, we might thus\nexclude a chance association at the 3\\,$\\sigma$ level. However, this\ndoes not account for MeerKAT astrometric errors, which may contribute\nat the $\\sim 1\\arcsec$ level \\citep{Knowles2022,Heywood2022}. As\nfor the remaining six radio knots, there are no plausible NIR-MIR\ncounterparts.\n\n\n\\subsection{The Worm and the Heart}\n\nBoth the spectrum and polarized emission suggest that the worm emits\nby a nonthermal process, likely synchrotron. However, the spectrum\nof the emission in much of the worm is relatively flat for synchrotron\nemission, suggesting that the radiating electrons have been recently\naccelerated.\n\\chg{Furthermore}, ionization losses can flatten the spectrum by up to\n$\\Delta\\alpha = +0.5$.\n\nDue to the extended size of the heart, much larger than the $\\sim$200\\arcsec\\\nscale filtering in the imaging, much of the emission may be\nresolved out.\nThe rim of this structure survives the filtering of the interferometer\narray. 
\nThe spectrum of the bulk of the heart,\nat least in the parts of the rim which are well imaged, is relatively\nsteep (Figures~\\ref{Heartworm_SI}--\\ref{Heartworm_SI_err}), indicating\nan aged relativistic electron population. This excludes the brightest\nand flattest-spectrum portion of the heart, which as noted above\nmay be an unrelated H\\textsc{ii} region (see Figure~\\ref{3colour}).\nOther than positional coincidence, there is no evidence that the\nheart and the worm are physically related.\n\nThe worm also shares the heart with the pulsar B1736$-$31 (bright\nred point in Figure~\\ref{Heartworm_SI}), although, as already alluded\nto in Section~\\ref{sec:intro}, there is no physical connection between\nthis pulsar and any of the nearby features. This is further supported\nby the RM of the pulsar --- we measure $43.5\\pm0.2$\\,rad\\,m$^{-2}$\n\\citep[compared to $32\\pm8$\\,rad\\,m$^{-2}$ in][]{Rand94} --- which\nis far smaller than that over most of the worm (Figure~\\ref{Heartworm_RM}).\n\n\n\\subsection{The Loopy and Knotty Worm}\n\nThe worm is remarkably complex. Much of its emission seen in\nFigure~\\ref{Worm_L} consists of filaments. Many of these are either\npaired and connected to a flatter-spectrum knot\n(Figure~\\ref{Heartworm_SI_Close}) or are loops. Where the polarization\nwas detectable, the magnetic field appears to be along the filaments\n(Figure~\\ref{Heartworm_PolVec}), suggesting that they are magnetically\nconfined structures which have been dragged into their current\nconfiguration, possibly by what is causing the bright knots. The\nflatter spectra near the knots (an example spectrum together with\na least-squares fit is given in Figure~\\ref{Heartworm_SI_Point})\nsuggest that these are the locations at which electrons are\naccelerated. The identified knots all have very nearly the same\nflux densities and nonthermal spectra (Table~\\ref{tab:knots}), with\nno hint of a break or turnover in the frequency range observed.\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[width=3.25in]{fig12.eps}\n}\n\\caption{Like Figure~\\ref{Heartworm_SI} but a close-up with a tighter\nrange of spectral index. Note that the region immediately surrounding\nthe worm appears to have a very steep spectrum ($\\alpha \\sim -1$),\nbut this may be affected by the negative bowl due to missing flux\n(see Section~\\ref{StokesIimaging} and also Figure~\\ref{Heartworm_SI_err}).\n} \n\\label{Heartworm_SI_Close}\n\\end{figure}\n\n\\begin{figure}\n\\centerline{\n\\includegraphics[height=3.25in,angle=-90]{fig13.eps}\n}\n\\caption{Spectrum of knot \\#6 in Figure~\\ref{Worm_L}; see also\nFigure~\\ref{Heartworm_SI_Close} and Table~\\ref{tab:knots}. UHF\ndata are displayed as ``+'' and L band as ``*''. The line is the\nfitted spectrum given in the figure, with the flux density provided\nfor a frequency of 1000\\,MHz. 
Note the match in flux densities\nindependently determined in the overlapping range $\\approx\n900$--1050\\,MHz.\n} \n\\label{Heartworm_SI_Point}\n\\end{figure}\n\n\\begin{deluxetable}{lrrcc}\n\\tablewidth{0pt}\n\\tablecolumns{5}\n\\tablecaption{Seven knots within the G357.2$-$0.2 nebula}\n\\tablehead{\n \\colhead{\\#\\tablenotemark{a}} & \\colhead{R.A.} & \\colhead{Dec.} &\n \\colhead{$S_{\\rm 1\\,GHz}$\\tablenotemark{b}} & \n\t\\colhead{$\\alpha$\\tablenotemark{b}} \\\\\n\t\\colhead{} & \\colhead{($^{\\mathrm h}~^{\\mathrm m}~^{\\mathrm s}$)} & \n\t\\colhead{$(\\arcdeg\\:\\arcmin\\:\\arcsec)$} & \\colhead{(mJy)} & \\colhead{} \n}\n\\startdata\n1 & 17 39 33.95 & $-31$ 30 29.9 & 7.2 & $-0.37$ \\\\\n2 & 17 39 34.95 & $-31$ 30 16.9 & 7.0 & $-0.36$ \\\\\n3 & 17 39 37.15 & $-31$ 24 38.3 & 6.1 & $-0.39$ \\\\\n4 & 17 39 39.56 & $-31$ 27 44.6 & 7.2 & $-0.43$ \\\\\n5 & 17 39 40.18 & $-31$ 28 01.2 & 7.5 & $-0.39$ \\\\\n6 & 17 39 43.03 & $-31$ 27 55.1 & 7.1 & $-0.37$ \\\\\n7 & 17 39 54.87 & $-31$ 28 21.7 & 5.9 & $-0.33$ \\\\\n\\enddata\n\\tablenotetext{a}{Knots are labeled as in Figure~\\ref{Worm_L}.}\n\\tablenotetext{b}{Flux density values at 1\\,GHz and spectral index\n$\\alpha$ are obtained from pixel-by-pixel fitting over the UHF and\nL bands. Uncertainties in $S_{\\rm 1\\,GHz}$ and $\\alpha$ (noise\ncomponents only) are $35\\,\\mu$Jy and 0.02 respectively for each\nknot.}\n\\label{tab:knots}\n\\end{deluxetable}\n\nThere is also a long, relatively straight filament appearing to\nconnect the center of the worm to the southwestern part of the\nheart, at least in projection (see \\chg{Figures~\\ref{Heartworm_L}--\\ref{Heartworm_LoRes}}). \nIt is unclear what connection, if any, this\nfilament might have to the overall features.\n\nThe spotty but high RMs seen in Figure~\\ref{Heartworm_RM} and the\nstrong depolarization reported by \\cite{Gray96} indicate that the\nemission is behind a relatively dense plasma. \\cite{Gray96} shows\npolarized emission at 5\\,GHz over most of the worm (their Fig.~2) but\nreports little polarization at 1.5\\,GHz. The author infers\n$\\mbox{RM}\\sim2000$\\,rad\\,m$^{-2}$. This value is substantially\nhigher than those seen in Figure~\\ref{Heartworm_RM}; however, our\nresolution is higher than that of \\cite{Gray96} at 1.5\\,GHz, and we\nmay just be seeing through gaps in an otherwise dense Faraday screen.\nNearby sources, presumed to be background AGNs, have RMs ranging\nfrom $-120$ to $+160$\\,rad\\,m$^{-2}$, which is outside most of the range\nshown in Figure~\\ref{Heartworm_RM}, indicating that the bulk of the\nFaraday rotation in front of the worm is local to it.\n\nThe filamentary and tangled structure of the worm bears resemblance\nto some known PWNe. For instance, the PWN in the composite SNR~G0.9+0.1\n(Figure~\\ref{G0.9+0.1PWN}) displays a complex web of twisted filaments\n(without reported polarization measurements). In contrast to the\nworm, however, no prominent knots of emission are seen in G0.9+0.1.\nConversely, its compact PWN is known to be powered by one of the\nmost energetic pulsars in the Galaxy \\citep{Camilo2009a}, while no\nsuch powering source has been identified for the worm.\n\n\\begin{figure}\n\\includegraphics[width=3.25in]{fig14.eps}\n\\caption{\nMeerKAT image at 1.28\\,GHz showing the PWN at the center of the\nSNR~G0.9+0.1. 
The torus and jet structure inferred from X-ray\nobservations \\citep{gaensler01}, and subsequently reported in radio\nimaging by \\citet{dubner09}, is revealed here to be a more complex\nweb of tangled filamentary structures surrounding a prominent central\npoint-like source \\citep[presumably the pulsar discovered\nby][]{Camilo2009a}. Compare to the G357.2$-$0.2 worm in\nFigure~\\ref{Worm_L}. The angular resolution is 4\\arcsec, shown in\nthe lower left. The reverse gray-scale is linear with scale-bar at\nthe top labeled in mJy\\,beam$^{-1}$. Adapted from \\cite{Heywood2022}.\n} \n\\label{G0.9+0.1PWN}\n\\end{figure}\n\n\n\\subsection{Pulsar Wind Nebula?}\n\n\\subsubsection{The Heartworm as a Composite SNR} \\label{sec:g327}\n\nComposite SNRs manifest as a shell (possibly partial and\/or distorted)\nresulting from the supernova explosion shockwave interacting with\nthe interstellar medium, together with an interior PWN powered by\na suitably energetic pulsar. The PWNe in middle-aged or older\ncomposite SNRs are often complex in structure because\nthey have been disrupted by the SNR reverse shock (RS). Particularly\nin cases in which the shockwave has evolved in a nonuniform medium,\nthis disruption can result in a complex structure in which the relic\nPWN becomes highly distorted \\citep{Blondin2001,Kolb2017}, and in\nwhich freshly injected particles and magnetic flux create a new\nextended structure near the pulsar. The worm in G357.2$-$0.2, while\nunique in some ways, shares several properties with the comparatively\nbright PWN in G327.1$-$1.1, which appears to be an example of a\nsystem that has undergone an interaction between the PWN and an\nasymmetric RS \\citep{Temim2009,Temim2015}.\n\nAustralia Telescope Compact Array images of G327.1$-$1.1 taken at\n3~cm show diffuse emission from the PWN along with a network of\nfilamentary structures accompanied by bright knots \\citep{Ma2016}.\nAccompanying polarization measurements at 6~cm show that the magnetic\nfield is largely aligned with the filaments. G327.1$-$1.1 also has\na dense and variable Faraday screen, with RMs up to $-600$\\,rad\\,m$^{-2}$\nand an average of $-380$\\,rad\\,m$^{-2}$ \\citep{Ma2016}. These\nfeatures are similar to what is seen in G357.2$-$0.2 in\nFigures~\\ref{Heartworm_PolVec} and \\ref{Heartworm_RM}.\n\nAn elongated structure in G327.1$-$1.1 also extends from the putative pulsar ---\nidentified as an X-ray source with spectral properties consistent\nwith a neutron star --- back into the relic nebula. Hydrodynamical\nstudies show that this appears to be associated with the current\noutflow from the pulsar, swept into a tail-like structure by the\nRS. More detailed MHD studies are required to assess whether finer\nfilamentary structures such as those seen in the worm might be formed\nin this type of RS\/PWN interaction.\n\nIf the larger heart structure in G357.2$-$0.2 is considered to be\nthe shell of an SNR, then assuming a Sedov solution \\citep[see,\ne.g.,][]{Matthews98} yields an age of about $21\\,d_{8.5}^{5\/2}\n(n_0\/E_{51})^{1\/2}$\\,kyr, where $d_{8.5}$ is the distance in units\nof 8.5\\,kpc, $n_0$ the ambient density in cm$^{-3}$, and $E_{51}$\nthe explosion energy in units of $10^{51}$\\,erg. For such a solution,\nthe RS would have already propagated back to the central regions of\nthe SNR. This is similar to the age estimate for G327.1$-$1.1\n($\\sim 17$\\,kyr) at a distance of 9\\,kpc. 
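\n\nThe dependence of this age estimate on the assumed parameters can be\nmade explicit; the following minimal sketch (ours, for illustration\nonly; the function and variable names are arbitrary) simply evaluates\nthe scaling relation quoted above:\n\\begin{verbatim}\nimport math\n\ndef sedov_age_kyr(d_ratio=1.0, n0_over_e51=1.0):\n    # Age estimate ~ 21 kyr * d_ratio**2.5 * sqrt(n0_over_e51),\n    # with d_ratio the distance in units of 8.5 kpc and\n    # n0_over_e51 the ratio of ambient density to E_51.\n    return 21.0 * d_ratio**2.5 * math.sqrt(n0_over_e51)\n\nprint(sedov_age_kyr())          # 21 kyr for d = 8.5 kpc, n_0 = E_51\nprint(sedov_age_kyr(1.0, 4.0))  # 42 kyr for n_0 = 4 E_51\n\\end{verbatim}\nThe age is thus only weakly constrained without independent knowledge\nof $n_0$ and $E_{51}$.\n\n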
The radio spectral index for the entire nebula\nin G327.1$-$1.1 is $\\alpha \\sim -0.3$, typical of PWNe, although\nthe tail-like structure extending from the pulsar has a steeper\nspectrum with $\\alpha \\sim -0.6$, similar to the filamentary\nstructures in the worm.\n\n\n\\subsubsection{X-ray Limits}\n\nPulsars that power appreciable PWNe convert a fraction of their\nspin-down luminosity $\\dot E$ into nonthermal X-rays. Here we\ninvestigate whether the limits on X-ray emission obtained from the\neROSITA image presented in Section~\\ref{sec:X-ray-image} are\nconsistent with a PWN interpretation for the worm in G357.2$-$0.2.\nIn what follows we assume that the absorbing hydrogen column to\nG357.2$-$0.2 is $N_{\\rm{H}}= 10^{22}$\\,cm$^{-2}$. This is the total\naverage column in the direction of the worm \\citep{HI4PI2016}, which\nwe use in the absence of other constraints.\n\nWe calculate limits separately for a point source (the putative\npulsar powering the PWN) and for extended emission from the candidate\nPWN. In what follows we always report unabsorbed flux and luminosity\nlimits, i.e., intrinsic to the source after correction for the assumed\nabsorbing column. All limits are reported at the $3\\,\\sigma$ level.\n\nNo X-ray point source is detected in eRASS:4 within the bounds of\nthe presumed PWN, indicated by radio contours inside the large green\ncircle in Figure~\\ref{fig:eROSITA}. We considered two different\nemission-free spots within this region and obtained a mean cumulative\nTM1--TM7 count rate for a putative point source of $<0.059$\\,cts\\,s$^{-1}$\nin the 0.2--8\\,keV band.\n\nX-ray-detected pulsars that power PWNe have power-law spectra\nwith photon index $\\Gamma_{\\rm psr}$ in the range 1.0--2.7 \\citep[see,\ne.g.,][]{Becker09}. Here we assume $\\Gamma_{\\rm psr} = 1.7$\n\\citep[e.g., applicable to PSR~J2021+3651 with $\\dot E =\n3\\times10^{36}$\\,erg\\,s$^{-1}$,][]{Hessels2004}. For this spectrum,\nthe above count-rate limit yields $f_x(0.2-8\\,\\mbox{keV}) <\n1.3\\times10^{-13}$\\,erg\\,s$^{-1}$\\,cm$^{-2}$ for the unabsorbed\nenergy flux of an undetected point source. For comparison with a\nmore commonly referenced band, $f_x(0.2-2.4\\,\\mbox{keV}) <\n7.9\\times10^{-14}$\\,erg\\,s$^{-1}$\\,cm$^{-2}$. Using the assumed\n$d=8.5$\\,kpc for G357.2$-$0.2, we estimate that the isotropic X-ray\nluminosity of the undetected putative neutron star is $L_{x, \\rm\npsr} = 4 \\pi d^2 f_x < 6.9 \\times 10^{32}$\\,erg\\,s$^{-1}$ within\nthe 0.2--2.4\\,keV band.\n\nThe observed nonthermal X-ray efficiency of rotation-powered pulsars\n($\\eta_{x, \\rm psr} \\equiv L_{x, \\rm psr} \/ \\dot E$) clusters around\n$10^{-3}$ in the 0.1--2.4\\,keV band\n\\citep[see][]{1997A&A...326..682B,Becker09}. The above point-source\nlimit therefore nominally implies $\\dot{E} < 6.9\\times\n10^{35}$\\,erg\\,s$^{-1}$. Given the scatter in the $\\eta_{x, \\rm\npsr}$ relation, and the uncertainties in $N_{\\rm H}$ and $d$, this\nlimit does not exclude the existence of a pulsar of intermediate\n$\\dot E \\sim 10^{36}$\\,erg\\,s$^{-1}$ powering G357.2$-$0.2 and\nbeaming towards the Earth. Also, it is always possible that an\nunfavorable beaming geometry would preclude direct detection of\nnonthermal emission from a pulsar regardless of $\\dot E$ and\nsensitivity. 
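\n\nThe conversions behind these numbers are elementary; the following\nminimal sketch (ours, for illustration only; constants rounded and\nnames arbitrary) reproduces the point-source luminosity and $\\dot E$\nlimits quoted above:\n\\begin{verbatim}\nimport math\n\nKPC_IN_CM = 3.086e21\n\ndef lx_erg_s(flux_cgs, d_kpc):\n    # Isotropic luminosity from an unabsorbed energy flux\n    # (erg s^-1 cm^-2) at a distance of d_kpc kiloparsecs.\n    d_cm = d_kpc * KPC_IN_CM\n    return 4.0 * math.pi * d_cm**2 * flux_cgs\n\nl_psr = lx_erg_s(7.9e-14, 8.5)  # ~6.9e32 erg s^-1 (0.2-2.4 keV)\ne_dot = l_psr * 1.0e3           # eta_x ~ 1e-3: ~6.9e35 erg s^-1\n\\end{verbatim}\nThe same conversion applied to the extended-emission flux limit below\nyields the corresponding PWN luminosity limit.\n\n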
However, regardless of beaming geometry, a suitably energetic\npulsar should still manifest itself via a diffuse PWN.\n\nTo constrain extended X-ray emission from G357.2$-$0.2, we derived\nthe count-rate limit within the circle of radius 240\\arcsec\\ in\nFigure~\\ref{fig:eROSITA}, which encompasses most of the putative\nradio PWN, after subtracting the contribution from the faint\nsouthwestern point source. We obtained a cumulative count rate\n$<0.18$\\,cts\\,s$^{-1}$ in the 0.2--8\\,keV band.\n\nPWNe detected in X-rays have power-law spectra with $\\Gamma_{\\rm\npwn}$ in the range 1.0--2.2 \\citep[see, e.g.,][]{2008AIPC..983..171K}.\nHere we assume $\\Gamma_{\\rm pwn} = 2.0$ (e.g., applicable to the\nG327.1$-$1.1 PWN discussed in Section~\\ref{sec:g327}). For this\nspectrum, the above count-rate limit gives $f_x(0.2-8\\,\\mbox{keV})\n< 4.1\\times10^{-13}$\\,erg\\,s$^{-1}$\\,cm$^{-2}$. In turn, with\n$d=8.5$\\,kpc we obtain $L_{x, \\rm pwn} = 4 \\pi d^2 f_x < 3.6 \\times\n10^{33}$\\,erg\\,s$^{-1}$ for the putative PWN in G357.2$-$0.2.\n\nThe observed X-ray efficiency of PWNe spans a wide range, with the\nbulk within $10^{-5} < \\eta_{x, \\rm pwn} < 10^{-2}$\n\\citep{2008AIPC..983..171K}. In any case, there are many instances\nof X-ray PWNe powered by pulsars with $\\dot E = 10^{36-37}$\\,erg\\,s$^{-1}$\n(e.g., PSR~J2021+3651 and Vela) that have $L_{x, \\rm pwn}$ below\nour limit for G357.2$-$0.2, and a few such instances powered by\npulsars with even higher $\\dot E$ \\citep[e.g.,\nPSR~J2229+6114,][]{Halpern2001}.\n\nTherefore, the current X-ray limits\\footnote{We have also analyzed\n\\emph{Swift} X-Ray Telescope observations of this region, resulting\nin the concatenated image available at\n\\url{https:\/\/www.swift.ac.uk\/2SXPS\/Fields\/10000013359}. No sources\nare detected, and the limits at the location of G357.2$-$0.2 are 5\ntimes poorer than those from the eROSITA observations.} do not rule\nout that G357.2$-$0.2 may be powered by a pulsar of intermediate\n$\\dot E$, like many that power a variety of PWNe.\n\nFor completeness, we also searched the \\emph{Fermi}-LAT 4FGL catalog\n\\citep{Fermi4FGL} for a source coincident with G357.2$-$0.2 but\nfound none. This is not constraining: while many energetic\npulsars emit in GeV $\\gamma$-rays, their $\\dot E\/d^2$ flux needs\nto be large \\citep{Fermi2PC}.\n\n\n\\section{Summary\\label{Summary}}\n\nG357.2$-$0.2 consists of two possibly related components: the\n``worm'', a series of filaments; and the ``heart'', an extended\nheart-shaped feature of which we may see only the rim.\nH\\textsc{i} observations of \\cite{Roy2002} show the worm to be of\nGalactic origin. The pulsar B1736$-$31 appears inside the heart,\nbut this is a chance positional coincidence. Part of the rim of the\nheart appears to be an unrelated H\\textsc{ii} region.\n\nThe spectrum and polarization of the emission indicate that the\nbulk of the emission from both the worm and the heart is nonthermal\nsynchrotron. There is a dense plasma, possibly associated with the\nheart, that results in a large Faraday rotation and some depolarization\nof the emission from the filaments of the worm. These appear to\nbe magnetic structures lit up by particle acceleration in knots\nwhich are associated with the filaments and which appear to be\ndragging the magnetic field tubes. The nature of these knots is\nuncertain.\n\nThe structure of the worm at least superficially resembles that of\nsome PWNe, with much of the emission appearing in the form of tangled\nfilaments. 
More sensitive X-ray observations are of particular\ninterest to further understand the nature of this source. MeerKAT\nobservations at S band, with higher angular resolution and less\nsusceptibility to depolarization, may also be instructive. In addition,\ndetailed hydrodynamical studies could be revealing. An ultra-deep\nradio pulsar search might also be illuminating \\citep[see][]{Camilo2009b}.\nNevertheless, if close to the Galactic center, this $\\sim20$\\,pc\nstructure would be a very large PWN. The possibility remains that\nthis is a more exotic object, perhaps sculpted in part by interaction\nwith outflows from the Galactic center region.\n\nThe radio imaging products presented here are made available with\nthis article\\footnote{\\url{https:\/\/doi.org\/10.48479\/q20r-hb79}},\n\\chg{including Stokes~I (L band, UHF+L band, UHF enhanced surface\nbrightness sensitivity),\nspectral index (UHF+L band), and Stokes Q and U L-band cubes.} Raw\nvisibility products are available from the MeerKAT data\narchive\\footnote{\\url{https:\/\/archive.sarao.ac.za}} under project\ncode SSV-20200720-SA-01.\n\n\\acknowledgments\n\\chg{We would like to thank the anonymous reviewer for helpful comments.}\nThe MeerKAT telescope is operated by the South African Radio Astronomy\nObservatory, which is a facility of the National Research Foundation,\nan agency of the Department of Science and Innovation.\nThe National Radio Astronomy Observatory is a facility of the National\nScience Foundation, operated under a cooperative agreement by Associated\nUniversities, Inc.\nMAT acknowledges support from the UK's Science \\& Technology Facilities\nCouncil [grant number ST\/R000905\/1].\neROSITA is the primary instrument aboard SRG, a joint Russian-German\nscience mission supported by the Russian Space Agency (Roskosmos),\nin the interests of the Russian Academy of Sciences represented by\nits Space Research Institute (IKI), and the Deutsches Zentrum f\\\"ur\nLuft- und Raumfahrt (DLR). The SRG spacecraft was built by Lavochkin\nAssociation (NPOL) and its subcontractors, and is operated by NPOL\nwith support from IKI and the Max Planck Institute for Extraterrestrial\nPhysics (MPE). The development and construction of the eROSITA\nX-ray instrument was led by MPE, with contributions from the Dr.~Karl\nRemeis Observatory Bamberg \\& ECAP (FAU Erlangen-N\\\"urnberg), the\nUniversity of Hamburg Observatory, the Leibniz Institute for\nAstrophysics Potsdam (AIP), and the Institute for Astronomy and\nAstrophysics of the University of T\\\"ubingen, with the support of\nDLR and the Max Planck Society. The Argelander Institute for\nAstronomy of the University of Bonn and the Ludwig Maximilians\nUniversit\\\"at Munich also participated in the science preparation\nfor eROSITA. The eROSITA data shown here were processed using the\neSASS\/NRTA software system developed by the German eROSITA consortium.\n\n\\facilities{MeerKAT, eROSITA}\n\n\n\\software{\\emph{Obit} \\citep{Obit}}\n\n\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}