\section{Introduction}
\label{sec:introduction}

Image classification and segmentation are fundamental tasks in many visual recognition applications involving natural and medical images. Given a large image dataset annotated with global image-level labels for classification or with pixel-level labels for segmentation, deep learning (DL) models achieve state-of-the-art performance for these tasks \citep{dolz20183d,Goodfellow-et-al-2016,krizhevsky12,LITJENS201760,LongSDcvpr15,Ronneberger-unet-2015}. However, the impressive accuracy of such fully-supervised learning models comes at the expense of a considerable cost for collecting and annotating large image data sets. While the acquisition of global image-level annotation can be relatively inexpensive, pixel-wise annotation involves a laborious process, a difficulty further compounded by the requirement of domain expertise, as in medical imaging, which increases the annotation costs.
\begin{figure}[t!]
\centering
  \includegraphics[scale=0.49]{proposal}
  \caption{Proposed AL framework with weak annotator.}
  \label{fig:fig-proposal}
\end{figure}

Weakly-supervised learning (WSL) has recently emerged as a paradigm that relaxes the need for dense pixel-wise annotations \citep{rony2019weak-loc-histo-survey,zhou2017brief}. WSL techniques depend on the type of application scenario and annotation, such as global image-level labels \citep{belharbi2019weakly,kim2017two,pathak2015constrained,teh2016attention,wei2017object}, scribbles \citep{Lin2016,ncloss:cvpr18}, points \citep{bearman2016}, bounding boxes \citep{dai2015boxsup,Khoreva2017} or global image statistics such as the target-region size \citep{bateson2019constrained,jia2017constrained,kervadec2019curriculum,kervadec2019constrained}. This paper focuses on learning using only image-level labels, which enables classifying an image while yielding pixel-wise scores (i.e., segmentations), thereby localizing the regions of interest linked to the image-class predictions.


Several CNN visualization and interpretation methods have recently been proposed, based on either perturbation, propagation or activation approaches, and allowing localization of the salient image regions responsible for a CNN's predictions \citep{fu2020axiom}. In particular, WSL techniques \citep{rony2019weak-loc-histo-survey} rely on activation-based methods like CAM and, more recently, Gradient-weighted Class Activation Mapping (Grad-CAM), Grad-CAM++, Ablation-CAM and Axiom-based Grad-CAM \citep{fu2020axiom, lin2013network, PinheiroC15cvpr}. Trained with only global image-level annotations, these methods locate the regions of interest (ROIs) of the corresponding class in a relatively inexpensive and accurate way.
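To make this activation-based localization concrete, the snippet below gives a minimal sketch of the original CAM computation for a ResNet-style classifier: the class-specific map is a weighted sum of the last convolutional feature maps, using the final classifier weights of the class of interest. The tensor names, shapes and the bilinear upsampling step are illustrative assumptions, and do not correspond to the exact pooling used later in this paper.

\begin{verbatim}
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    # features : (1, C, h, w) last conv feature maps; for ResNet,
    #            (h, w) is roughly 1/32 of the input resolution.
    # fc_weight: (num_classes, C) weights of the final linear classifier.
    # class_idx: class of interest (e.g., the predicted class).
    # out_size : (H, W) input resolution, used to upsample the coarse map.
    w_c = fc_weight[class_idx].view(1, -1, 1, 1)        # (1, C, 1, 1)
    cam = (features * w_c).sum(dim=1, keepdim=True)     # (1, 1, h, w)
    cam = F.relu(cam)                                   # keep positive evidence
    cam = F.interpolate(cam, size=out_size,
                        mode='bilinear', align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam                                          # (1, 1, H, W) in [0, 1]
\end{verbatim}

Even after interpolation, such maps remain coarse, which contributes to the high false-positive rates discussed next.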
However, while these WSL techniques can provide satisfying results in natural images, they typically yield poor segmentations in relatively more challenging scenarios, for instance, histology data in medical image analysis \\citep{rony2019weak-loc-histo-survey}.\nWe note two limitations associated with CAMs: (1) they are obtained in an unsupervised way (\\emph{i.e.}\\@\\xspace without pixel-level supervision under an ill-posed learning problem \\citep{choe2020evaluating}); and (2) they have low resolution. For instance, CAMs obtained from ResNet models \\citep{heZRS16} have a resolution of ${1\/32}$ of the input image. Interpolation is required to restore full image resolution. Both of these limitation with CAM-based methods lead to high false-positive rates, which may render them impractical \\citep{rony2019weak-loc-histo-survey}.\n\nEnhancing deep WSL models with pixel-wise annotation, as supported by a recent study in weakly-supervised object localization \\citep{choe2020evaluating}, can improve localization and segmentation accuracy, which is the central goal of this paper. To do so, we introduce a deep WSL model that allows supervised learning for classification, and active learning for segmentation, with the latter providing more accurate and high-resolution masks. We assume that the images in the training set are globally annotated with image-class labels. Relevant images are \\emph{gradually} labeled at the pixel level through an oracle that respects a low annotation-budget constraint. Consequently, this leads us to an active learning (AL) paradigm \\citep{settles2009active}, where an oracle is requested to annotate pixels in a subset of samples.\n\nDifferent sample-acquisition techniques have been successfully applied to deep AL for classification based on, e.g., certainty \\citep{ducoffe2015qbdc,gal2017deep,kirsch2019batchbald} or representativeness \\citep{kim2020task,sinha2019variational}. However, very few deep AL techniques were investigated in the context of segmentation \\citep{gaur2016membrane,gorriz2017active,lubrano2019deep}. Most AL techniques focus mainly on the sampling criterion (Fig.\\ref{fig:fig-proposal}, left) to populate the labeled pool using an oracle. During training, only the labeled pool is used, while the unlabeled pool is left dormant. Such an AL protocol may limit the accuracy of DL models under constrained oracle-supervision budget in real-world applications for multiple reasons:\n\n\\textbf{(1)} Standard AL protocols may be relevant to small\/shallow models that can learn and provide reliable queries using a few training samples. Since training accurate DL models typically depends on large training sets, large numbers of queries may be needed to build reliable DL models, which may incur a high annotation cost.\n\n\\textbf{(2)} In most AL work, the experimental protocol starts with a large labeled pool, and acquires a large number of queries for sufficient supervision, neglecting the workload placed on the oracle.\nThis typically reaches a plateau-performance of a DL quickly, hampering a reliable study of the impact of different AL selection techniques. Moreover, model-based sampling techniques are inconsistent \\citep{gaur2016membrane} in the sense that the model is used to query samples while it is still in an early learning stage.\n\n\\textbf{(3)} Segmentation and classification problems are associated with different properties and challenges, such as decision boundaries and uncertainty, which provide additional challenges to AL. 
For instance, the class boundaries defined by different classification methods \\citep{ducoffe2018adversarial,settles2009active,tong2001support} are not defined in the context of segmentation, making such a branch of methods inadequate for segmentation.\n\n\\textbf{(4)}\nIn critical fields such as medical imaging, acquiring a sample itself can be very expensive\\footnote{\nFor instance, prior to a diagnosis of breast cancer from a histological sample, a patient undergoes a bilateral diagnostic mammogram and breast ultrasound that are interpreted by a radiologist, one to several needle biopsies (with low risks under ${1\\%}$ of hematoma and wound infection) to further assess areas of concern, surgical consultation and pre-operative blood work, and surgical excision of the positive tissue for breast cancer cells. The biopsy and surgical tissues are processed (fixation, embedding in parraffin, H\\&E staining) and interpreted by a pathologist. Depending on the cancer stage, the patient may undergo additional procedures. Therefore, accounting for all the steps required for breast cancer diagnosis from histological samples, a rough estimation of the final cost associated with obtaining a Whole Slide Image (WSI) is about \\$400 (Canadian dollars, by 1999) \\citep{will1999diagnostic}. Moreover, some cancer types are rare \\citep{will1999diagnostic}, adding to the values of these samples. All these procedures are conducted by highly trained experts, with each procedure taking from a few minutes to an hour and requiring expensive specialized medical equipment.}.\nThe time and cost associated with each sample makes them valuable items. Such considerations may be overlooked for large-scale data sets with almost-free samples, as in the case of natural images. Given this high cost, keeping the unlabeled pool dormant during learning may be ineffective.\n\n\nBased on the aforementioned arguments, we advocate that focusing solely on the sample acquisition and supervision pool may not be an efficient way to build\nhigh-performing DL models in an AL framework for segmentation. Therefore, we consider augmenting the labeled pool using the model as a second source of annotation, in a self-learning fashion \\citep{mao2020survey} (Fig.\\ref{fig:fig-proposal}, right). This additional annotation might be less accurate (\\emph{i.e.}\\@\\xspace, weak\\footnote{Not to be confused with the weak annotation of data in weakly supervised learning frameworks.}) compared to the oracle that provides strong but expensive annotations. However, it is expected to fast-improve the model's performance \\citep{mao2020survey}, while using a few oracle-annotated samples, reducing the annotation cost.\n\nOur main contributions are the following.\n\\textbf{(1) Architecture design}: As an alternative to CAMs, we propose using a segmentation mask trained with pixel-level annotations, which yields more accurate and high-resolution ROIs. 
This is achieved through a convolutional architecture capable of simultaneously classifying and segmenting images, with the segmentation task trained using annotations acquired using an AL framework.\nAs illustrated in Fig.\\ref{fig:fig-archi}, the architecture combines well-known DL models for classification (ResNet \\citep{heZRS16}) and segmentation (U-Net \\citep{Ronneberger-unet-2015}), although other architectures could also be used.\n\\textbf{(2) Active learning}: We augment the size of the labeled pool by weak-annotating a large number of unlabeled samples based on predictions of the DL model itself, providing a second source of annotation (Fig.\\ref{fig:fig-proposal}). This enables rapid improvements of the segmentation accuracy, with less oracle-based annotation. Moreover, our method can be integrated on top of any sample-acquisition method.\n\\textbf{(3) Experimental study}: We conducted comprehensive experiments over two challenging benchmarks -- high-resolution medical images (histology GlaS data for colon cancer) and natural images (CUB-200-2011 for bird species). Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of the-art CAMs and AL methods, with an identical oracle-supervision budget.\n\n\n\\section{Related work}\n\\label{sec:related-work}\n\n\\noindent \\textbf{Deep active learning:}\nAL has been studied for a long time in machine learning, mainly for classification and regression, using linear models in particular \\citep{settles2009active}. Recently, there has been an effort to transfer such techniques to DL models for classification tasks by mimicking their intuition or by adapting them, taking into consideration model specificity and complexity. Such methods include, for instance, different mechanisms for uncertainty \\citep{beluch2018power,ducoffe2015qbdc,ducoffe2018adversarial,gal2017deep,kim2020task,kirsch2019batchbald,lakshminarayanan2017simple,wang2016cost,yoo2019learning} and representativeness estimation \\citep{kim2020task,sener2018coreset,sinha2019variational}. However, most deep AL techniques are validated on synthetic, simple or tiny data, which does not explore their full potential in real applications.\n\nWhile deep AL for classification is rapidly growing, deep AL models for segmentation are uncommon in the literature. In fact, the very few methods in the literature mostly focused on the direct application of deep AL classification methods. The limited research in this area may be explained by the fact that segmentation tasks bring challenges to AL, such as the additional spatial information and the fact that a segmentation mask lies in a much larger dimension than a classification prediction. In classification, AL often deals with one output that is used to drive queries \\citep{huang2010active}. The spatial information in segmentation does not naturally provide a direct scoring function that can indicate the overall quality or certainty of the output. Most of deep AL methods for segmentation consider pixels as classification instances, and apply standard AL techniques to each pixel.\n\nFor instance, the authors of \\citep{gaur2016membrane} exploit a variant of entropy-based acquisition at the pixel level, combined with a distribution-based term that encodes diversity using a complex hierarchical clustering algorithm over sliding windows, with application to microscopic membrane segmentation. 
Similarly, \\citep{gorriz2017active,lubrano2019deep} apply Monte-Carlo dropout uncertainty \\citep{gal2017deep} at the pixel level, with application to myelin segmentation using spinal cord and brain microscopic histology images. In \\citep{roels2019cost}, the authors experiment with five acquisition functions of classification for a segmentation task, including entropy-based, core-set \\citep{sener2018coreset}, k-mean and Bayesian \\citep{gal2017deep} sampling, with application to electron microscopy segmentation. Entropy-based methods seem to be dominant over multiple datasets. In \\citep{yang2017suggestive}, the authors combine two sampling terms for histology image segmentation. The first employs bootstrapping over fully convolutional networks (FCN) to estimate uncertainty, where a set of FCNs are trained on different subsets of samples. The second term is a representation-based term that selects the most representative samples. This is achieved by solving an optimization of a generalization version of the maximum cover set problem \\citep{feige1998threshold} using sample description extracted from an FCN. Despite the obtained promising results, this approach remains complex and impractical due to the use of bootstrapping over DL models and an optimization step. Moreover, the authors of \\citep{yang2017suggestive} do not provide a comparison to other acquisition functions. The work in \\citep{casanova2020} considers a specific case of AL using reinforcement learning for \\emph{region-based} AL for segmentation, where only a selected region of the image is labeled. This method is adequate for data sets with large and unbalanced classes, such as street-view images. While the method in \\citep{casanova2020} outperforms random and Bayesian \\citep{gal2017deep} selection, surprisingly, it performs close to entropy-based selection.\n\n\n\\noindent \\textbf{Weak annotators:}\nThe AL paradigm does not prohibit the use of unlabelled data for training \\citep{settles2009active}, but it mainly constrains the oracle-labeling budget. The standard AL experimental protocol (Fig.\\ref{fig:fig-proposal}, left) was inherited from AL of simple\/linear ML models, and adopted in subsequent works. Budget-constrained oracle annotation may not be sufficient to build effective DL models, due to the lack of labeled samples. Furthermore, several studies in AL for classification have successfully leveraged the unlabelled data to provide additional supervision \\citep{lin2017active,long2008graph,vijayanarasimhan2012active,wang2016cost,zhou2010active,zhualsslharmo2003}.\n\nFor instance, the authors of \\citep{lin2017active,wang2016cost} propose to pseudo-label a part of the unlabeled pool. The latter is selected using dynamic thresholding based on confidence, through the model itself, so as to learn a better embedding. Furthermore, a theoretical framework for AL using strong and weak annotators for classification task is introduced in \\citep{zhang2015active}. Their results suggest that using multiple annotators can reduce the cost of oracle annotation, in comparison to one annotator. 
Multiple sources of annotations that include both strong and weak annotators were used in AL, crowd-sourcing, self-paced learning and other interactive learning scenarios for classification, to help reduce the number of requests to the strong annotator \citep{kremer2018robust,malago2014online,mattsson2017active,murugesan2017active,urner2012learning,yan2016active,zhang2015active}.
Using the model itself for pseudo-annotation is motivated mainly by the success of deep self-supervised learning \citep{mao2020survey}.


\begin{wrapfigure}{R}{0.4\textwidth}
\centering
\includegraphics[scale=0.65]{knn}
\caption{The $k$-nn method for selecting the subset $\mathbb{U}^{\prime\prime}$ to be pseudo-labeled. Assumption behind the selection of $\mathbb{U}^{\prime\prime}$: since $\mathbb{U}^{\prime\prime}$ lies near supervised samples, it is more likely to be assigned good segmentations by the model.
We measure the $k$-nn for each \textbf{unlabeled} sample. In this example, using $k=4$ yields ${\abs{\mathbb{U}^{\prime\prime}} = 14}$. If the $k$-nn is instead considered for each \textbf{labeled} sample: $\abs{\mathbb{U}^{\prime\prime}} = 8$. ${\abs{\cdot}}$ denotes the set cardinality. Note that the ${k}$-nn is only considered between samples of the \emph{same class}.}
 \label{fig:fig-knn}
\vspace*{-0.1in}
\end{wrapfigure}


\noindent \textbf{Label Propagation (LP):}
Our approach is also related to LP methods \citep{bengiolabelprop2010,zhou2004learning,zhulp2002} for classification, which aim to label unlabeled samples using knowledge from the labeled ones (Fig.\ref{fig:fig-knn}). However, while LP propagates labels to unlabeled samples through an iterative process, our approach bypasses this using the model itself. In our case, the propagation is limited to the neighbors of labeled samples, defined through $k$-nearest neighbors ($k$-nn) (Fig.\ref{fig:fig-knn}). Using $k$-nn has also been studied to combine AL and domain adaptation \citep{berlind2015active}, where the goal is to query samples from the target domain. Such an approach is connected to the recently developed core-set method for deep AL \citep{sener2018coreset}. Our method intersects with \citep{berlind2015active} only in the sense of predicting the labels of queried samples using their labeled neighbors.

In contrast to state-of-the-art DL models for AL segmentation, we consider increasing the labeled pool with pseudo-annotated samples (Fig.\ref{fig:fig-proposal}, right). To this end, the model is used to pseudo-label samples within the neighborhood of samples with strong supervision (Fig.\ref{fig:fig-knn}). From a self-learning perspective, the works in \citep{lin2017active,wang2016cost} on face recognition are the closest to ours. While both rely on pseudo-labeling, they mainly differ from ours in the sample selection for pseudo-annotation. In \citep{lin2017active,wang2016cost}, the authors considered model confidence, where samples with high confidence are pseudo-labeled, while low-confidence samples are queried. While this yields good results, it makes the overall method strongly dependent on model confidence. As we consider segmentation tasks, model confidence is not well-defined, and averaging pixel-wise confidence values may poorly reflect the overall confidence of a predicted mask.

Our approach relies instead on the spatial assumption in Fig.\ref{fig:fig-knn}: the samples to pseudo-label are selected to be near the labeled samples, and are therefore expected to receive good pseudo-segmentations (a minimal sketch of this selection is given below).
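The sketch below follows one plausible reading of this selection rule and of the color-histogram similarity detailed later in Sec.~\ref{sec:proposal}: images are described by per-plane normalized color histograms, the distance between two images is the sum of per-plane Jensen-Shannon divergences, and an unlabeled image enters $\mathbb{U}^{\prime\prime}$ when at least one labeled image of the same class appears among its $k$ nearest neighbors. All function and variable names are our own illustrative assumptions, not the exact implementation.

\begin{verbatim}
import numpy as np

def color_histograms(image, bins=32):
    # Per-plane normalized color histograms; image is (H, W, planes) in [0, 255].
    hists = []
    for p in range(image.shape[-1]):
        h, _ = np.histogram(image[..., p], bins=bins, range=(0, 255))
        hists.append(h / max(h.sum(), 1))
    return hists

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two normalized histograms.
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def image_distance(hists_i, hists_j):
    # Distance between two images = sum of per-plane divergences.
    return sum(js_divergence(hi, hj) for hi, hj in zip(hists_i, hists_j))

def select_unlabeled_for_pseudo_labeling(unlabeled_ids, labeled_ids,
                                         hists, cls, k=40):
    # hists: id -> list of per-plane histograms; cls: id -> image-class label.
    # An unlabeled image is selected if a labeled image of the same class
    # appears among its k nearest neighbors (neighbors searched within the class).
    labeled = set(labeled_ids)
    selected = []
    for i in unlabeled_ids:
        same_class = [j for j in list(unlabeled_ids) + list(labeled_ids)
                      if j != i and cls[j] == cls[i]]
        same_class.sort(key=lambda j: image_distance(hists[i], hists[j]))
        if any(j in labeled for j in same_class[:k]):
            selected.append(i)   # candidate for pseudo-labeling by the model
    return selected
\end{verbatim}

In the experiments, ${k=40}$ (see the supplementary material); the selected images are then pseudo-labeled by the model and used for training alongside the oracle-labeled samples.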
Such a neighborhood-based selection makes the oracle-querying technique independent of the pseudo-labeling method, giving more flexibility to the user. Our pseudo-labeled samples are added to the labeled pool, along with the samples annotated by the oracle. The underlying assumption is that, given a sample labeled by an oracle, the model is more likely to produce good segmentations of images located near that sample. This assumption is verified empirically in the experimental section of this paper. This simple procedure rapidly increases the number of pseudo-labeled samples, and helps improve segmentation performance under a limited amount of oracle-based supervision.




\section{Proposed approach}
\label{sec:proposal}


\begin{wrapfigure}{R}{0.4\textwidth}
\centering
 \includegraphics[scale=0.75]{archi}
 \caption{Our proposed DL architecture for classification and segmentation, composed of: (1) a shared \textbf{backbone} for feature extraction; (2) a \textbf{classification head} for the classification task; and (3) a \textbf{segmentation head} for the segmentation task, in a U-Net style \citep{Ronneberger-unet-2015}. The latter merges representations from the backbone, while gradually upscaling the feature maps to reach the full image resolution for the predicted mask, similarly to the U-Net model.}
 \label{fig:fig-archi}
\vspace*{-0.1in}
\end{wrapfigure}


We consider an AL framework for training deep WSL models, where all the training images have class-level annotations, but no pixel-level annotations. Due to their high cost, pixel annotations are gradually acquired for training through oracle queries, and the pixel-wise knowledge encoded in the model through the labeled images is propagated to unlabeled images via pseudo-labeling.

Active learning training consists of sequential training rounds. At each round $r$, the total training set ${\mathbb{D}}$, which contains $n$ samples, is divided into unlabeled and labeled subsets (Fig.\ref{fig:fig-proposal}).
\textbf{(1) Unlabeled subset:} contains samples without pixel-wise annotation (unlabeled samples) ${\mathbb{U} = \{\bm{x}_i, y_i, -\}_{i=1}^u}$, where ${\bm{x} \in \mathcal{X}}$ is the input image, ${y}$ is its global label, and the pixel-wise label is missing.
\textbf{(2) Labeled subset:} contains samples with full supervision ${\mathbb{L}=\{\bm{x}_i, y_i, \bm{m}_i\}_{i=1}^l}$, where ${\bm{m}}$ is the pixel-wise annotation of the sample. ${\mathbb{L}}$ is initially empty. It is gradually populated from ${\mathbb{U}}$ by querying the oracle using an acquisition function.
Let ${f(\cdot; \bm{\theta})}$ denote a DL model that is able to classify and segment an image ${\bm{x}}$ (Fig.\ref{fig:fig-archi}). For clarity, and since we focus on the segmentation task, we omit the notation for the classification task (to simplify the presentation). Therefore, ${f(\bm{x})}$ refers to the predicted segmentation mask.
Let ${\mathbb{U}^{\prime} \subseteq \mathbb{U}}$ and ${\mathbb{U}^{\prime\prime} \subseteq \mathbb{U}}$ denote two subsets (Fig.\ref{fig:fig-proposal}), with ${ \mathbb{U}^{\prime} \cap \mathbb{U}^{\prime\prime} = \varnothing}$.
In our method, we introduce ${\mathbb{P}}$ as a subset holder for pseudo-labeled samples, which is initially empty and gradually replenished (Fig.\ref{fig:fig-proposal}, right).
To express the dependency of each subset on round ${r}$, we introduce the following notations: ${\mathbb{U}_r, \mathbb{L}_r, \mathbb{P}_r, \mathbb{U}^{\prime}_r, \mathbb{U}^{\prime\prime}_r}$.
The samples in ${\\mathbb{P}_r}$ are denoted as ${\\{\\bm{x}_i, y_i, \\hat{\\bm{m}}_i\\}}$. The following holds: ${\\forall r: \\mathbb{D} = \\mathbb{L}_r \\cup \\mathbb{U}_r \\cup \\mathbb{P}_r}$.\n\n\n\nAlg.\\ref{alg:alg-0} describes the overall AL process with our pseudo-annotation method.\nFirst, ${\\mathbb{U}^{\\prime}_r}$ is queried, then labeled by the oracle, and added to ${\\mathbb{L}_r}$.\nUsing $k$-nn, ${\\mathbb{U}^{\\prime\\prime}_r}$ is selected based on their proximity to ${\\mathbb{L}_r}$ (Fig.\\ref{fig:fig-knn}); and pseudo-labeled by the model, then added to ${\\mathbb{P}_r}$. To fast-increase the size of ${\\mathbb{L}_r}$, ${\\mathbb{P}_r}$ is protected from being queried for the oracle until it is inevitable. In the \\emph{latter case}, queried samples from ${\\mathbb{P}_r}$ are used to fill ${\\mathbb{U}^{\\prime}}$; and they are no longer considered pseudo-labeled since they will be assigned the oracle annotation.\n\nTo measure image similarity for the $k$-nn method, we used the color distribution to describe image content. This can be a flexible descriptor for highly unstructured images such as histology images. Note that the ${k}$-nn method is considered \\emph{only} for pairs of samples of the \\emph{same class}. The underlying assumption is that samples of the same class, with similar color distributions, are likely to contain relatively similar objects. Consequently, labeling representative samples could be a proxy for supervised learning based on the underlying data distribution. This can increase the likelihood of the model to provide relatively good segmentations of the other samples. The proximity between two images ${(\\bm{x}_i, \\bm{x}_j)}$ is measured using the Jensen-Shannon divergence between their respective color distributions (measured as normalized histograms). For an image with multiple color planes, the similarity is formulated as the sum of similarities, one for each plane.\n\nAt round $r$, the queried and pseudo-labeled samples are both used in training by optimizing the following loss function:\n\\begin{equation}\n\\label{eq:eq-1}\n \\min_{\\bm{\\theta}} \\sum_{\\bm{x}_i \\in \\mathbb{L}_{r-1}} \\mathcal{L}(f(\\bm{x}_i), \\bm{m}_i) + \\lambda \\sum_{\\bm{x}_i \\in \\mathbb{P}_{r-1}} \\mathcal{L}(f(\\bm{x}_i), \\hat{\\bm{m}}_i),\n\\end{equation}\nwhere ${\\mathcal{L}(\\cdot, \\cdot)}$ is a segmentation loss, and ${\\lambda}$ a positive scalar. Eq.(\\ref{eq:eq-1}) corresponds to training the model (Fig.\\ref{fig:fig-archi}) solely for the segmentation task. Simultaneous training for classification and segmentation in this AL setup is avoided due to the unbalance between the number of samples that are labeled globally and at the pixel level. Therefore, we consider training the model for classification first. Then, we freeze the classifier parameters. Training for the segmentation tasks is resumed later. 
This yields the best classification performance, and allows a better study of the impact of the queried samples on the segmentation task.\n\nConsidering the relation of our method and label propagation algorithm \\citep{bengiolabelprop2010,zhou2004learning,zhulp2002}, we refer to our proposal as Label\\_prop.\n\n\n\\begin{center}\n\\begin{minipage}{0.6\\linewidth}\n\\IncMargin{0.04in}\n\\begin{algorithm}[H]\n \\SetKwInOut{Input}{Input}\n \n \\Input{\n ${\\mathbb{P}_0 = \\mathbb{L}_0 = \\varnothing}$\n \\\\\n ${\\bm{\\theta}^0}$: Initial parameters of ${f}$ trained on the classification task.\n \\\\\n ${\\texttt{maxr}: \\textrm{Maximum number of AL rounds}}$.\n }\n \\vspace{0.1in}\n Select ${\\mathbb{U}^{\\prime}_0}$ randomly and label them by an oracle. \\\\\n \\vspace{0.025in}\n $ \\mathbb{L}_0 \\ \\leftarrow\\ \\mathbb{U}^{\\prime}_0$. \\\\\n \\For{${r \\in 1 \\cdots \\texttt{maxr}}$} \n {\n \n ${\\bm{\\theta} \\ \\leftarrow\\ \\bm{\\theta}^0}$. \\\\\n \\vspace{0.03in}\n Train $f$ using ${\\mathbb{L}_{r-1}}$ \\colorbox{mybluelight}{${\\cup\\; \\mathbb{P}_{r-1}}$} and the loss in Eq. (\\ref{eq:eq-1}). \\\\\n \\vspace{0.03in}\n Select ${\\mathbb{U}^{\\prime}_r}$ and label them by an oracle. \\\\\n \\vspace{0.03in}\n $ \\mathbb{L}_r \\ \\leftarrow\\ \\mathbb{L}_{r-1} \\cup \\mathbb{U}^{\\prime}_r$. \\\\\n \\vspace{0.03in}\n \\colorbox{mybluelight}{Select ${\\mathbb{U}^{\\prime\\prime}_r}$.} \\\\\n \\vspace{0.03in}\n \\colorbox{mybluelight}{$ \\mathbb{P}_r \\ \\leftarrow\\ \\mathbb{P}_{r-1} \\cup \\mathbb{U}^{\\prime\\prime}_r$.} \\\\ \n \\vspace{0.03in}\n \\colorbox{mybluelight}{Pseudo-label ${\\mathbb{P}_r}$.}\n }\n \\vspace{0.1in}\n \\caption{Standard AL procedure and our proposal. The extra instructions associated with our method are indicated with a \\colorbox{mybluelight}{blue background}.\n }\n \\label{alg:alg-0}\n\\end{algorithm}\n\\DecMargin{0.04in}\n\\end{minipage}\n\\end{center}\n\n\n\n\\section{Results and discussion}\n\\label{sec:experiments}\n\n\n\n \\subsection{Experimental methodology:}\n\n \\begin{figure}[!b]\n \\centering\n \\includegraphics[width=0.99\\linewidth]{samples-datasets}\n \\caption{\\textbf{Top row}: GlaS dataset \\citep{sirinukunwattana2017gland}. \\textbf{Bottom row}: CUB dataset \\citep{WahCUB2002011}. (Best visualized in color.)}\n \\label{fig:fig-datasets}\n\\end{figure}\n\n\\noindent \\textbf{a) Datasets.}\nFor evaluation, datasets should have global and pixel-wise annotation. We consider two public datasets including both medical (histology) and natural images (Fig.\\ref{fig:fig-datasets}).\n\\begin{inparaenum}[(1)]\n\n \\item \\textbf{GlaS dataset}: This dataset contains histology images for colon cancer diagnosis\\footnote{GlaS: \\href{https:\/\/warwick.ac.uk\/fac\/sci\/dcs\/research\/tia\/glascontest}{warwick.ac.uk\/fac\/sci\/dcs\/research\/tia\/glascontest}.} \\citep{sirinukunwattana2017gland}. It includes 165 images derived from 16 Hematoxylin and Eosin (H\\&E) histology sections of two grades (classes): benign and malignant. It is divided into 84 samples for training and 80 samples for testing.\n The ROIs to be segmented are the glandes.\n \n \n \\item \\textbf{CUB-200-2011 dataset (CUB)}\\footnote{CUB: \\href{http:\/\/www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}{www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}} \\citep{WahCUB2002011} is a dataset for bird species with ${11,788}$ samples ($5,994$ for training and $5,794$ for testing) and ${200}$ species. 
The ROIs to be segmented are the birds.\n \n \\end{inparaenum}\n %\n In GlaS and CUB datasets, we randomly select $80\\%$ of the training samples for effective training, and $20\\%$ for validation (with full supervision) to perform early stopping. The splits are identical to the ones used in \\citep{belharbi2019unimoconstraints,rony2019weak-loc-histo-survey} (split 0, fold 0), and are publicly available.\n\n\\noindent \\textbf{b) Active learning setup.}\nTo assess the performance of different AL acquisition methods, we consider a realistic scenario with respect to the number of samples to be labeled at each AL round, accounting for the load imposed on the oracle.\nTherefore, only a few samples are selected at each round for oracle annotation, and ${\\mathbb{L}}$ is slowly replenished. This allows better comparison between AL selection techniques since we spend more time\nin a phase where ${\\mathbb{L}}$ holds a few samples. Such a phase allows to better measure the impact of the selected samples. Filling ${\\mathbb{L}}$ quickly brings the model's performance to a plateau that hides the impact of newly selected samples.\nThe initial replenishment ($r=1$) is achieved by randomly selecting a few samples. The same samples are used for all AL techniques at round $r=1$ for a fair comparison. To avoid any bias from selecting unbalanced classes that can directly affect the segmentation performance, and hinder AL evaluation, the same number of samples is selected from each class (since the global annotation is known beforehand for all the samples). Note that the oracle is used only to provide pixel-wise annotation. Tab.\\ref{tab:tab0} provides the selection details.\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Number of samples selected for the oracle per round.}\n\\label{tab:tab0}\n\\centering\n \\resizebox{0.7\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\makecell[c]{\\#selected samples \\\\ \\emph{per-class} ($r=1$)} & \\makecell[c]{\\#selected samples \\\\ \\emph{per-class} ($r > 1$)} & \\makecell[c]{Max AL rounds \\\\ (\\texttt{maxr} in Alg.\\ref{alg:alg-0})}\\\\\n \\toprule\n GlaS & $4$ & $1$ & $25$ \\\\\n CUB & $1$ & $1$ & $20$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\\noindent \\textbf{c) Evaluation.}\nWe report the classification accuracy obtained by the classification head (Fig.\\ref{fig:fig-archi}). Average Dice index is used to measure the segmentation quality at each AL round forming a Dice index curve over all the rounds. To better assess the \\emph{dominance} of each method \\citep{settles2009active}, the Area Under the Dice index Curve is used (AUC). This provides a fair indicator of the dominant curve, but contrasts with standard AL works, where one or multiple specific operating points in the curve are selected (leading to a biased and less accurate protocol). The average and standard deviation of Dice index curve and AUC metric are reported based on 5 replications of a complete AL session, using a different seed for each session. An AL session across different methods uses the same seed.\n\nWhile our approach, referred to as (Label\\_prop), can operate on top of any AL selection criterion, we demonstrate its efficiency using simple random selection, which is often a baseline for AL experiments. Note that our pseudo-annotations are obtained from the segmentation head shown in Fig.\\ref{fig:fig-archi}. 
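As a concrete illustration of the AUC evaluation described above, the short sketch below computes the area under a Dice-index curve sampled once per AL round; the trapezoidal rule and the normalization by the number of rounds are our assumptions, since the exact numerical convention is not stated here.

\begin{verbatim}
import numpy as np

def dice_auc(dice_per_round):
    # AUC of a Dice-index curve with one value per AL round (rounds on the
    # x-axis), normalized so that a constant Dice of d gives an AUC of d.
    d = np.asarray(dice_per_round, dtype=float)
    return np.trapz(d, np.arange(len(d))) / (len(d) - 1)

# Hypothetical curve growing from 70.3% to 78.4% Dice over five rounds:
print(dice_auc([70.3, 71.6, 74.1, 77.6, 78.4]))  # ~74.4
\end{verbatim}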
Our method is compared to three different AL selection approaches for segmentation:\n\\textbf{(1) random selection (Random)}: the samples are randomly selected;\n\\textbf{(2) entropy-based selection (Entropy)}: the scoring function per sample is the average entropy at the pixel level \\citep{gaur2016membrane}. Samples with high entropy are selected; and\n\\textbf{(3) Monte-Carlo dropout uncertainty (MC\\_Dropout)}: we use Monte-Carlo dropout \\citep{gorriz2017active,lubrano2019deep} at the pixel level to compute the uncertainty score per sample. Samples are forwarded ${50}$ times in the model, where dropout is set to ${0.2}$ \\citep{gorriz2017active,lubrano2019deep}. Then, the pixel-wise variance is estimated. Samples with high mean variance are selected.\n\n\n\\noindent \\textbf{Lower bound performance (WSL)}: We consider the segmentation performance obtained by WSL method as a lower bound. It is trained using only global annotation. CAMs are used to extract the segmentation mask. WILDCAT method is considered \\citep{durand2017wildcat} (Fig.\\ref{fig:fig-archi}) at the classification head to obtain the CAMs. For WSL method, a pre-trained model over ImageNet \\citep{imagenetcvpr09} is used to initialize the weights of the backbone, which is then fined-tuned.\nThe model is trained over the entire dataset, where samples are labeled globally only.\nThe obtained classifier using seed=0 is frozen and used as a backbone for \\emph{all} the other methods.\n\n\\noindent \\textbf{Upper bound performance (Full\\_sup)}: Fully supervised segmentation is considered as an upper bound on performance. The model in Fig.\\ref{fig:fig-archi} is trained for segmentation only using the entire dataset, where samples are labeled at the pixel level.\n\nFor a fair comparison, all the methods are trained using the same hyper-parameters over the same dataset. WSL and Full\\_sup methods have minor differences. Due to space limitation, all the hyper-parameters are presented in the supplementary material.\nIn Alg.\\ref{alg:alg-0}, notice that for our method, ${\\mathbb{P}_r}$ is not used at the current round $r$ but until the next round ${r+1}$. 
To take advantage of ${\\mathbb{P}_r}$ at round $r$, instructions from line-4 to line-10 are repeated twice in the provided results.\n\n\n\n\n\n\n\n\n\\subsection{Results}\n\\label{sub:results}\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.39]{glas-dice-idx-test}\n \\caption{}\n \\label{fig:fig-al-0}\n \\end{subfigure}%\n \\begin{subfigure}[b]{.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.39]{Caltech-UCSD-Birds-200-2011-dice-idx-test}\n \\caption{}\n \\label{fig:fig-al-1}\n \\end{subfigure}%\n\\caption{Average Dice index of the proposed and baseline methods over test sets.\n(\\protect\\subref{fig:fig-al-0}) GlaS.\n(\\protect\\subref{fig:fig-al-1}) CUB.\n}\n\\label{fig:fig-al-results}\n\\end{figure}\n\n\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Classification accuracy over of the proposed deep WSL model on GlaS and CUB test datasets.}\n\\label{tab:tab-cl-acc}\n\\centering\n \\resizebox{.5\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\multicolumn{1}{c}{GlaS} & \\multicolumn{1}{c}{CUB}\\\\\n \\toprule\n \\makecell{Classification \\\\ accuracy (\\%)} & $99.50 \\pm 0.61$\n & $73.22 \\pm 0.19$\n \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\begin{table}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Average AUC and standard deviation (Fig.\\ref{fig:fig-al-results}) for Dice index performance over GlaS and CUB test sets.}\n\\label{tab:tab-seg-perf}\n\\centering\n \\resizebox{.6\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\multicolumn{1}{c}{GlaS} & \\multicolumn{1}{c}{CUB} \\\\\n \\toprule\n \\toprule\n WSL & $66.44 \\pm 0.20$ & $39.22 \\pm 0.19$ \\\\\n \\midrule\n Random & $78.57 \\pm 0.93$ & $68.15 \\pm 0.61$ \\\\\n Entropy & $79.13 \\pm 0.26$ & $68.25 \\pm 0.29$ \\\\\n MC\\_Dropout & $77.92 \\pm 0.49$ & $67.69 \\pm 0.27$ \\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $\\bm{81.48 \\pm 1.03}$ & $\\bm{71.73 \\pm 0.67}$ \\\\\n \\midrule\n Full\\_sup & $86.53 \\pm 0.31$ & $75.29 \\pm 1.50$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\\begin{table*}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Readings of Dice index (mean $\\pm$ standard deviation) from Fig.\\ref{fig:fig-al-results} over test set for the \\textbf{first 5 queries} formed by each method. 
We start from the second query since the first query is random but identical for all methods.}\n\\label{tab:tab-dice-q}\n\\centering\n \\resizebox{0.9\\linewidth}{!}{\n\\begin{tabular}{l|cccccc}\n \\textbf{Queries} & \\textbf{q2} & \\textbf{q3} & \\textbf{q4} & \\textbf{q5} & \\textbf{q6} \\\\\n \\toprule\n \\toprule\n \\multicolumn{6}{c}{GlaS} \\\\\n \\toprule\n WSL & \\multicolumn{4}{c}{$66.44 \\pm 0.20$} \\\\\n \\midrule\n Random & $70.26 \\pm 3.02$ & $71.58 \\pm 3.14$ & $71.43 \\pm 1.83$ & $74.05 \\pm 3.14$ & $75.36 \\pm 3.45$\\\\\n Entropy & $\\bm{72.75 \\pm 2.96}$ & $70.93 \\pm 3.58$ & $72.60 \\pm 1.44$ & $73.44 \\pm 1.38$ & $75.15 \\pm 1.63$\\\\\n MC\\_Dropout & $68.44 \\pm 2.89$ & $69.70 \\pm 1.96$ & $69.97 \\pm 1.95$ & $72.71 \\pm 2.21$ & $73.00 \\pm 1.04$\\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $71.02 \\pm 4.19$ & $\\bm{74.07 \\pm 3.93}$ & $ \\bm{76.52 \\pm 3.49}$ & $\\bm{77.63 \\pm 2.73}$ & $\\bm{78.41 \\pm 1.23}$\\\\\n \n Full\\_sup & \\multicolumn{4}{c}{$86.53 \\pm 0.31$}\\\\\n \\toprule \\toprule\n \\multicolumn{6}{c}{CUB} \\\\\n \\toprule\n WSL & \\multicolumn{4}{c}{$39.22 \\pm 0.18$} \\\\\n \\midrule\n Random & $56.86 \\pm 2.07$ & $61.39 \\pm 1.85$ & $62.97 \\pm 1.13$ & $63.56 \\pm 4.02$ & $66.56 \\pm 2.50$\\\\\n Entropy & $53.37 \\pm 2.06$ & $59.11 \\pm 2.50$ & $60.48 \\pm 3.56$ & $63.81 \\pm 2.75$ & $63.59 \\pm 2.34$\\\\\n MC\\_Dropout & $57.13 \\pm 0.83$ & $59.98 \\pm 2.06$ & $63.52 \\pm 2.26$ & $63.02 \\pm 2.68$ & $64.68 \\pm 1.41$\\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $\\bm{62.58 \\pm 2.15}$ & $\\bm{66.32 \\pm 2.34}$ & $ \\bm{67.01 \\pm 2.85}$ & $\\bm{69.40 \\pm 3.40}$ & $\\bm{68.28 \\pm 1.60}$\\\\\n \\midrule\n Full\\_sup & \\multicolumn{4}{c}{$75.29 \\pm 1.50$}\\\\\n \\bottomrule\n \\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\nWe report the classification and segmentation performances following the training the proposed deep WSL model in Fig.\\ref{fig:fig-archi}. Tab.\\ref{tab:tab-cl-acc} reports the Classification accuracy of the classification head using WSL, which is\nclose to the results reported in \\citep{belharbi2019weakly,rony2019weak-loc-histo-survey}. The results of GlaS suggest that it is an easy dataset for classification.\n\nThe segmentation results are reported in Tabs. \\ref{tab:tab-dice-q} and \\ref{tab:tab-seg-perf}, and in Fig \\ref{fig:fig-al-results}.\n\nFig. \\ref{fig:fig-al-0} compares Dice accuracy on the \\textbf{GlaS dataset}. On the latter, we observe that adding more labels increases Dice index for all AL methods, yielding, as expected, better performance than the WSL method. Reading from Tab.\\ref{tab:tab-dice-q}, randomly labeling only 4 samples per class enables to easily outperform WSL. This means that using our approach in Fig.\\ref{fig:fig-archi}, with limited supervision, can lead to more accurate masks compared to using CAMs in the WSL method. From Fig.\\ref{fig:fig-al-0}, one can also observe that Random, Entropy, and MC\\_Dropout methods grow relatively in the same way, leading to the same overall performance, with the Entropy method slightly ahead. Considering the overall behavior of the curves, one may conclude that using advanced selection techniques such as MC\\_Dropout and Entropy provides an accuracy similar to simple random selection. On the one hand, since both methods have shown substantial improvements in AL for classification, and based on the results in Fig.\\ref{fig:fig-al-0}, one may conclude that all samples are equivalently informative for the model. 
Therefore, there is no better order to acquire them. On the other hand, using simply random selection and pseudo-labeled samples allowed our method to substantially improve the overall performance, demonstrating the benefits of self-learning.\n\nFig.\\ref{fig:fig-al-1} and Tab.\\ref{tab:tab-dice-q} compare Dice accuracy on the \\textbf{CUB dataset}, where labeling only one sample per class yielded a large improvement in Dice index, in comparison to WSL. Adding more samples increases the performance of all the methods. One can observe similar pattern as for GlaS: Random, Entropy and MC\\_Dropout methods yield similar curves, while the AUC performances of Random and Entropy methods are similar, and slightly ahead of MC\\_Dropout.\nSimilar to GlaS analysis, and based on the results of these three methods, one can conclude that none of the methods for ordering the samples is better than simple random selection. Using self-labeled samples in our method shows again its benefits. Simple random selection combined with self-annotation yields an overall best performance. Using two datasets, our empirical results suggest that self-learning, under limited oracle-annotation, has the potential to provide a reliable second source of annotation, which can efficiently enhance model performance, while using simple sample acquisition techniques.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=.7\\linewidth]{dice-pseudo-labeled}\n \\caption{Average Dice index over the pseudo-labeled samples of our method in \\textbf{each} AL round.}\n \\label{fig:fig-dice-pseudo-labeled}\n\\end{figure}\n\n\\noindent \\textbf{Pseudo-annotation performance}. Furthermore, the proposed approach is assessed on the pseudo-labeled samples at each AL round. Fig.\\ref{fig:fig-dice-pseudo-labeled} shows that the model provides good segmentations at the initial rounds. Then, the more supervision, the more accurate the pseudo-segmentation, as expected. This figure shows the interest and potential of self-learning in segmentation, and confirms our assumption that samples near the labeled ones are likely to achieve accurate pseudo-segmentation by the model.\n\n\\noindent \\textbf{Hyper-parameters}. Our approach requires two main hyper-parameters: ${ k \\ \\text{and } \\lambda}$. We conducted an ablation study over ${k}$ on GlaS dataset, and over ${\\lambda}$ on both datasets. Results, which are presented in the supplementary material, suggest that our method is less sensitive to ${k}$. ${\\lambda}$ plays an important role, and\nbased on our study, we recommend using small values of this weighting parameter. In our experiments, we used $\\lambda=0.1$ for Glas and $\\lambda = 0.001$ for CUB. We set ${k=40}$. We note that hyper-parameter tuning in AL is challenging due to the change of the size of the data set, which in turn changes the training dynamics. In all the experiments, we used fixed hyper-parameters across the AL rounds.\nFig.\\ref{fig:fig-dice-pseudo-labeled} suggests that a dynamic ${\\lambda(r)}$ that is increased through AL rounds could be more beneficial. However, this requires a principled update protocol for ${\\lambda}$, which was not explored in this work. Nonetheless, using a fixed value seems to yield promising results overall.\n\n\\noindent \\textbf{Supplementary material}. 
Due to space limitation, we deferred the hyper-parameters used in the experiments, results of the ablation study, visual results for the similarity measure and examples of predicted masks to the supplementary materials.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nDeep WSL models trained with global image-level annotations can play an important role in CNN visualization and interpretability. However, they are prone to high false-positive rates, especially for challenging images, leading to poor segmentations.\nTo alleviate this issue, we considered using pixel-wise supervision provided gradually through an AL framework. This annotation is integrated into training using an adequate deep convolutional model that allows supervised learning for both tasks: classification and segmentation. Through a few pixel-supervised samples, such a design is intended to provide full-resolution and more accurate masks compared to standard CAMs, which are trained without pixel supervision and often provide coarse resolution. Therefore, it enables a better CNN visualization and interpretation of CNN predictions.\nFurthermore, and unlike standard deep AL methods that focus solely on the acquisition function, we considered using self-learning as a second source of supervision to fast-improve the model segmentation.\nEvaluating our method using a realistic AL protocol over two challenging benchmarks, our results indicate that:\n(1) using a \\emph{few} supervised samples, the proposed architecture yielded more accurate segmentations compared to CAMs, with a large margin using different AL methods. Thus, it provides a solution to enhance pixel-wise predictions\nin real-world visual recognition applications.\n(2) using self-learning with random selection yielded substantial improvements. 
Self-learning under a limited oracle-budget can, therefore, provide a cost-effective alternative to standard AL protocols, where most of the effort is spent on the acquisition function.\n\n\\section*{Acknowledgment}\nThis research was supported in part by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, Compute Canada, MITACS, and the Ericsson Global AI Accelerator Montreal.\n\n\\clearpage\n\\newpage\n\n\\appendices\n\n\n\n\\section{Supplementary material for the experiments}\nDue to space limitation, we provide in this supplementary material detailed hyper-parameters used in the experiments, results of the ablation study, visual results to the similarity measure, and examples of predicted masks.\n\n\\subsection{Training hyper-parameters}\n\\label{subsec:hyper-params}\nTab.\\ref{tab:tabx-tr-hyper-params} shows the used hyper-parameters in all our experiments.\n\n\n\\begin{table}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Training hyper-parameters.}\n\\label{tab:tabx-tr-hyper-params}\n\\centering\n\\resizebox{0.7\\linewidth}{!}{\n\\begin{tabular}{lccc}\n Hyper-parameter & GlaS & CUB\\\\\n \\toprule\n Model backbone & \\multicolumn{2}{c}{ResNet-18 \\citep{heZRS16}}\\\\\n WILDCAT \\citep{durand2017wildcat}: && \\\\\n ${\\alpha}$ & \\multicolumn{2}{c}{${0.6}$} \\\\\n ${kmin}$ & \\multicolumn{2}{c}{${0.1}$} \\\\\n ${kmax}$ & \\multicolumn{2}{c}{${0.1}$} \\\\\n modalities & \\multicolumn{2}{c}{${5}$} \\\\\n \\midrule\n Optimizer & \\multicolumn{2}{c}{SGD}\\\\\n Nesterov acceleration & \\multicolumn{2}{c}{True}\\\\\n Momentum & \\multicolumn{2}{c}{$0.9$} \\\\\n Weight decay & \\multicolumn{2}{c}{$0.0001$}\\\\\n Learning rate (LR) & ${0.1}$ (WSL: $10^{-4}$) & ${0.1}$ (WSL: $10^{-2}$)\\\\\n LR decay & ${0.9}$ & ${0.95}$ (WSL: ${0.9}$) \\\\\n LR frequency decay & $100$ epochs & $10$ epochs \\\\\n Mini-batch size & ${20}$ & ${8}$ \\\\\n Learning epochs & ${1000}$ & ${30}$ (WSL: ${90}$) \\\\\n \\midrule\n Horizontal random flip & \\multicolumn{2}{c}{True} \\\\\n Vertical random flip & True & False \\\\\n \n Crop size & \\multicolumn{2}{c}{${416 \\times 416}$}\\\\\n \\midrule\n ${k}$ & \\multicolumn{2}{c}{${40}$}\\\\\n ${\\lambda}$ & ${0.1}$ & ${0.001}$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{knn-ablation-glas-Label_prop-best-k-40}\n \\caption{Ablation study over GlaS dataset (test set) over the hyper-parameter $k$ (x-axis). y-axis: AUC of Dice index (\\%) of \\textbf{25 queries for one trial}.\n AUC average $\\pm$ standard deviation: ${81.49 \\pm 0.59}$.\n Best performance in red dot: $k=40, AUC=82.41\\%$.\n }\n \\label{fig:abl-glas-knn}\n\\end{figure}\n\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{lambda-ablation-glas-Label_prop-best-lambda-0-dot-1}\n \\caption{Ablation study over GlaS dataset (test set) over the hyper-parameter ${\\lambda}$ (x-axis). y-axis: AUC of Dice index (\\%) of \\textbf{15 queries for one trial}.\n Best performance in red dot: $\\lambda=0.1, AUC=79.15\\%$.\n }\n \\label{fig:abl-glas-lambda}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{lambda-ablation-Caltech-UCSD-Birds-200-2011-Label_prop-best-lambda-0-dot-001}\n \\caption{Ablation study over CUB dataset (test set) over the hyper-parameter ${\\lambda}$ (x-axis). 
y-axis: AUC of the Dice index (\%) for \textbf{5 queries in one trial}.
 Best performance (red dot): $\lambda=0.001$, AUC $=66.94\%$.
 }
 \label{fig:abl-cub-lambda}
\end{figure}


\subsection{Ablation study}
\label{sub:ablation}

We study the impact of $k$ and ${\lambda}$ on our method. Results are presented in Fig.\ref{fig:abl-glas-knn} and Fig.\ref{fig:abl-glas-lambda} for GlaS over ${k}$ and ${\lambda}$, and in Fig.\ref{fig:abl-cub-lambda} for CUB over ${\lambda}$. Due to the expensive computation time required to perform AL experiments, we limited the scope of this study (in terms of ${k}$, ${\lambda}$, number of trials, and \texttt{maxr}).
The results of this study show that our method is less sensitive to ${k}$ (standard deviation of ${0.59}$ in Fig.\ref{fig:abl-glas-knn}).
On the other hand, the method is sensitive to ${\lambda}$, as expected from penalty-based methods \citep{bertsekas1999nonlinear}.
The method is, however, more sensitive to ${\lambda}$ on CUB than on GlaS. The CUB dataset is more challenging, leading to potentially more erroneous pseudo-annotations. Using a large ${\lambda}$ systematically pushes the model to learn from this wrong annotation (Fig.\ref{fig:abl-cub-lambda}), which leads to poor results. On GlaS, in contrast, good segmentations are generally obtained, and large values of ${\lambda}$ do not hinder the performance as quickly (Fig.\ref{fig:abl-glas-lambda}).
Overall, these results recommend using small values of ${\lambda}$, which lead to better and more stable performance. High values, combined with pseudo-annotation errors, push the network to learn erroneous annotations, degrading the overall performance.




\subsection{Similarity measure}
\label{subsec:similarity}
In this section, we present some samples with their nearest neighbors, although it is difficult to quantitatively evaluate the quality of such a measure.
Fig.\ref{fig:glas-sim} shows the case of GlaS. Overall, the similarity captures the general stain of the image, which is what was intended, since the structure of such histology images is subject to high variation. Since stain variation is one of the challenging aspects of histology images \citep{rony2019weak-loc-histo-survey}, labeling a sample with a common stain can help the model segment other samples with a similar stain.
The case of CUB, presented in Fig.\ref{fig:cub-sim}, is more difficult to judge, since the images always contain the same species within its natural habitat. Often, the similarity succeeds in capturing the overall color and background, which can help in segmenting both the object and the background in the neighboring images. In some cases, the similarity captures samples with a large zoom-in, where the bird color dominates the image.


\begin{figure*}[hbt!]
\centering
 \includegraphics[width=1.\linewidth]{glas}
 \caption{Examples of ${k}$-nn over the GlaS dataset. The images represent the 10 nearest images to the first image on the extreme left, ordered from the nearest.}
 \label{fig:glas-sim}
\end{figure*}


\begin{figure*}[hbt!]
\centering
 \includegraphics[width=0.9\linewidth]{Caltech-UCSD-Birds-200-2011}
 \caption{Examples of ${k}$-nn over the CUB dataset. The images represent the 10 nearest images to the first image on the extreme left, ordered from the nearest.}
 \label{fig:cub-sim}
\end{figure*}
\subsection{Predicted mask visualization}
\label{subsec:mask-vis}
Fig.\ref{fig:cub-results-visu} shows several test examples of the masks predicted by the different methods over the CUB test set at the first AL round (${r=1}$), where only one sample per class has been labeled by the oracle. This operating point is interesting: by labeling only one sample per class, the average Dice index goes from ${39.08 \pm 08}$ for the WSL method up to ${62.58 \pm 2.15}$ for Label\_prop, with the other AL methods in between. The figure shows that WSL tends to spot only a small part of the object, in addition to the background, leading to high false-positive rates. Using few supervised samples in combination with the proposed architecture, better segmentations are achieved, covering a larger part of the object with less confusion with the background.

\begin{figure*}[h!]
 \centering
 \includegraphics[width=0.95\linewidth]{sample-test-Caltech-UCSD-Birds-200-2011} \\
 \includegraphics[width=0.95\linewidth]{code}
 \caption{Qualitative results (on several CUB test images) of the predicted binary mask for each method after being trained in the first round ${r=1}$ (\emph{i.e.}\@\xspace after labeling 1 sample per class) using seed=0. The average Dice index over the test set of each method is: ${40.16\%}$ (WSL), ${55.32\%}$ (Random), ${55.41\%}$ (Entropy), ${55.52\%}$ (MC\_Dropout), ${59.00\%}$ (Label\_prop), and ${75.29\%}$ (Full\_sup). (Best visualized in color.)}
 \label{fig:cub-results-visu}
\end{figure*}

\FloatBarrier


\bibliographystyle{apalike}
Such an AL protocol may limit the accuracy of DL models under constrained oracle-supervision budget in real-world applications for multiple reasons:\n\n\\textbf{(1)} Standard AL protocols may be relevant to small\/shallow models that can learn and provide reliable queries using a few training samples. Since training accurate DL models typically depends on large training sets, large numbers of queries may be needed to build reliable DL models, which may incur a high annotation cost.\n\n\\textbf{(2)} In most AL work, the experimental protocol starts with a large labeled pool, and acquires a large number of queries for sufficient supervision, neglecting the workload placed on the oracle.\nThis typically reaches a plateau-performance of a DL quickly, hampering a reliable study of the impact of different AL selection techniques. Moreover, model-based sampling techniques are inconsistent \\citep{gaur2016membrane} in the sense that the model is used to query samples while it is still in an early learning stage.\n\n\\textbf{(3)} Segmentation and classification problems are associated with different properties and challenges, such as decision boundaries and uncertainty, which provide additional challenges to AL. For instance, the class boundaries defined by different classification methods \\citep{ducoffe2018adversarial,settles2009active,tong2001support} are not defined in the context of segmentation, making such a branch of methods inadequate for segmentation.\n\n\\textbf{(4)}\nIn critical fields such as medical imaging, acquiring a sample itself can be very expensive\\footnote{\nFor instance, prior to a diagnosis of breast cancer from a histological sample, a patient undergoes a bilateral diagnostic mammogram and breast ultrasound that are interpreted by a radiologist, one to several needle biopsies (with low risks under ${1\\%}$ of hematoma and wound infection) to further assess areas of concern, surgical consultation and pre-operative blood work, and surgical excision of the positive tissue for breast cancer cells. The biopsy and surgical tissues are processed (fixation, embedding in parraffin, H\\&E staining) and interpreted by a pathologist. Depending on the cancer stage, the patient may undergo additional procedures. Therefore, accounting for all the steps required for breast cancer diagnosis from histological samples, a rough estimation of the final cost associated with obtaining a Whole Slide Image (WSI) is about \\$400 (Canadian dollars, by 1999) \\citep{will1999diagnostic}. Moreover, some cancer types are rare \\citep{will1999diagnostic}, adding to the values of these samples. All these procedures are conducted by highly trained experts, with each procedure taking from a few minutes to an hour and requiring expensive specialized medical equipment.}.\nThe time and cost associated with each sample makes them valuable items. Such considerations may be overlooked for large-scale data sets with almost-free samples, as in the case of natural images. Given this high cost, keeping the unlabeled pool dormant during learning may be ineffective.\n\n\nBased on the aforementioned arguments, we advocate that focusing solely on the sample acquisition and supervision pool may not be an efficient way to build\nhigh-performing DL models in an AL framework for segmentation. Therefore, we consider augmenting the labeled pool using the model as a second source of annotation, in a self-learning fashion \\citep{mao2020survey} (Fig.\\ref{fig:fig-proposal}, right). 
This additional annotation might be less accurate (\\emph{i.e.}\\@\\xspace, weak\\footnote{Not to be confused with the weak annotation of data in weakly supervised learning frameworks.}) compared to the oracle that provides strong but expensive annotations. However, it is expected to fast-improve the model's performance \\citep{mao2020survey}, while using a few oracle-annotated samples, reducing the annotation cost.\n\nOur main contributions are the following.\n\\textbf{(1) Architecture design}: As an alternative to CAMs, we propose using a segmentation mask trained with pixel-level annotations, which yields more accurate and high-resolution ROIs. This is achieved through a convolutional architecture capable of simultaneously classifying and segmenting images, with the segmentation task trained using annotations acquired using an AL framework.\nAs illustrated in Fig.\\ref{fig:fig-archi}, the architecture combines well-known DL models for classification (ResNet \\citep{heZRS16}) and segmentation (U-Net \\citep{Ronneberger-unet-2015}), although other architectures could also be used.\n\\textbf{(2) Active learning}: We augment the size of the labeled pool by weak-annotating a large number of unlabeled samples based on predictions of the DL model itself, providing a second source of annotation (Fig.\\ref{fig:fig-proposal}). This enables rapid improvements of the segmentation accuracy, with less oracle-based annotation. Moreover, our method can be integrated on top of any sample-acquisition method.\n\\textbf{(3) Experimental study}: We conducted comprehensive experiments over two challenging benchmarks -- high-resolution medical images (histology GlaS data for colon cancer) and natural images (CUB-200-2011 for bird species). Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of the-art CAMs and AL methods, with an identical oracle-supervision budget.\n\n\n\\section{Related work}\n\\label{sec:related-work}\n\n\\noindent \\textbf{Deep active learning:}\nAL has been studied for a long time in machine learning, mainly for classification and regression, using linear models in particular \\citep{settles2009active}. Recently, there has been an effort to transfer such techniques to DL models for classification tasks by mimicking their intuition or by adapting them, taking into consideration model specificity and complexity. Such methods include, for instance, different mechanisms for uncertainty \\citep{beluch2018power,ducoffe2015qbdc,ducoffe2018adversarial,gal2017deep,kim2020task,kirsch2019batchbald,lakshminarayanan2017simple,wang2016cost,yoo2019learning} and representativeness estimation \\citep{kim2020task,sener2018coreset,sinha2019variational}. However, most deep AL techniques are validated on synthetic, simple or tiny data, which does not explore their full potential in real applications.\n\nWhile deep AL for classification is rapidly growing, deep AL models for segmentation are uncommon in the literature. In fact, the very few methods in the literature mostly focused on the direct application of deep AL classification methods. The limited research in this area may be explained by the fact that segmentation tasks bring challenges to AL, such as the additional spatial information and the fact that a segmentation mask lies in a much larger dimension than a classification prediction. In classification, AL often deals with one output that is used to drive queries \\citep{huang2010active}. 
The spatial information in segmentation does not naturally provide a direct scoring function that can indicate the overall quality or certainty of the output. Most of deep AL methods for segmentation consider pixels as classification instances, and apply standard AL techniques to each pixel.\n\nFor instance, the authors of \\citep{gaur2016membrane} exploit a variant of entropy-based acquisition at the pixel level, combined with a distribution-based term that encodes diversity using a complex hierarchical clustering algorithm over sliding windows, with application to microscopic membrane segmentation. Similarly, \\citep{gorriz2017active,lubrano2019deep} apply Monte-Carlo dropout uncertainty \\citep{gal2017deep} at the pixel level, with application to myelin segmentation using spinal cord and brain microscopic histology images. In \\citep{roels2019cost}, the authors experiment with five acquisition functions of classification for a segmentation task, including entropy-based, core-set \\citep{sener2018coreset}, k-mean and Bayesian \\citep{gal2017deep} sampling, with application to electron microscopy segmentation. Entropy-based methods seem to be dominant over multiple datasets. In \\citep{yang2017suggestive}, the authors combine two sampling terms for histology image segmentation. The first employs bootstrapping over fully convolutional networks (FCN) to estimate uncertainty, where a set of FCNs are trained on different subsets of samples. The second term is a representation-based term that selects the most representative samples. This is achieved by solving an optimization of a generalization version of the maximum cover set problem \\citep{feige1998threshold} using sample description extracted from an FCN. Despite the obtained promising results, this approach remains complex and impractical due to the use of bootstrapping over DL models and an optimization step. Moreover, the authors of \\citep{yang2017suggestive} do not provide a comparison to other acquisition functions. The work in \\citep{casanova2020} considers a specific case of AL using reinforcement learning for \\emph{region-based} AL for segmentation, where only a selected region of the image is labeled. This method is adequate for data sets with large and unbalanced classes, such as street-view images. While the method in \\citep{casanova2020} outperforms random and Bayesian \\citep{gal2017deep} selection, surprisingly, it performs close to entropy-based selection.\n\n\n\\noindent \\textbf{Weak annotators:}\nThe AL paradigm does not prohibit the use of unlabelled data for training \\citep{settles2009active}, but it mainly constrains the oracle-labeling budget. The standard AL experimental protocol (Fig.\\ref{fig:fig-proposal}, left) was inherited from AL of simple\/linear ML models, and adopted in subsequent works. Budget-constrained oracle annotation may not be sufficient to build effective DL models, due to the lack of labeled samples. Furthermore, several studies in AL for classification have successfully leveraged the unlabelled data to provide additional supervision \\citep{lin2017active,long2008graph,vijayanarasimhan2012active,wang2016cost,zhou2010active,zhualsslharmo2003}.\n\nFor instance, the authors of \\citep{lin2017active,wang2016cost} propose to pseudo-label a part of the unlabeled pool. The latter is selected using dynamic thresholding based on confidence, through the model itself, so as to learn a better embedding. 
Furthermore, a theoretical framework for AL using strong and weak annotators for classification task is introduced in \\citep{zhang2015active}. Their results suggest that using multiple annotators can reduce the cost of oracle annotation, in comparison to one annotator. Multiple sources of annotations that include both strong and weak annotators were used in AL, crowd-sourcing, self-paced learning and other interactive learning scenarios for classification to help reducing the number of requests for the strong annotator \\citep{kremer2018robust,malago2014online,mattsson2017active,murugesan2017active,urner2012learning,yan2016active,zhang2015active}.\nUsing the model itself for pseudo-annotation is motivated mainly by the success of deep self-supervised learning \\citep{mao2020survey}.\n\n\n\\begin{wrapfigure}{R}{0.4\\textwidth}\n\\centering\n\\includegraphics[scale=0.65]{knn}\n\\caption{The $k$-nn method for selecting $\\mathbb{U}^{\\prime\\prime}$ subset to be speudo-labeled. Assumption to select $\\mathbb{U}^{\\prime\\prime}$: since $\\mathbb{U}^{\\prime\\prime}$ lives nearby supervised samples, it is more likely to be assigned good segmentation by the model.\nWe consider measuring $k$-nn for each \\textbf{unlabeled} sample. In this example, using $k=4$ allows ${\\abs{\\mathbb{U}^{\\prime\\prime}} = 14}$. If $k$-nn is considered for each \\textbf{labeled} sample: $\\abs{\\mathbb{U}^{\\prime\\prime}} = 8$. ${\\abs{\\cdot}}$ is the set cardinal. Note that ${k}$-nn is only considered between samples of the \\emph{same class}.}\n \\label{fig:fig-knn}\n\\vspace*{-0.1in}\n\\end{wrapfigure}\n\n\n\\noindent \\textbf{Label Propagation (LP):}\nOur approach is also related to LP methods \\citep{bengiolabelprop2010,zhou2004learning,zhulp2002} for classification, which aim to label unlabeled samples using knowledge from the labeled ones (Fig.\\ref{fig:fig-knn}). However, while LP propagates labels to unlabeled samples through an iterative process, our approach bypasses this using the model itself. In our case, the propagation is limited to the neighbors of labeled samples defined through $k$-nearest neighbors ($k$-nn) (Fig.\\ref{fig:fig-knn}). Using $k$-nn has been also studied to combine AL and domain adaptation \\citep{berlind2015active}, where the goal is to query samples from the target domain. Such an approach is connected to the recently developed core-set method for deep AL \\citep{sener2018coreset}. Our method intersects with \\citep{berlind2015active} only in the sense of predicting the labels to query samples using their labeled neighbors.\n\nIn contrast to state-of-the-art DL models for AL segmentation, we consider increasing the unlabeled pool through pseudo-annotated samples (Fig.\\ref{fig:fig-proposal}, right). To this end, the model is used for pseudo-labeling samples within the neighborhood of samples with strong supervision (Fig.\\ref{fig:fig-knn}). From a self-learning perspective, the works in \\citep{lin2017active,wang2016cost} on face recognition are the closest to ours. While both rely on pseudo-labeling, they mainly differ in the sample selection for pseudo-annotation. In \\citep{lin2017active,wang2016cost}, the authors considered model confidence, where samples with high confidence are pseudo-labeled, while low-confidence samples are queried. While this yields good results, it makes the overall method strongly dependent on model confidence. As we consider segmentation tasks, model-confidence is not well-defined. 
Moreover, using the expected pixel-wise values can be less representative for model confidence.\n\nOur approach relies on the spatial assumption in Fig.\\ref{fig:fig-knn}, where the samples to pseudo-label are selected to be near the labeled samples, and expected to have good pseudo-segmentations. This makes the oracle-querying technique independent from the pseudo-labeling method, giving more flexibility to the user. Our pseudo-labeled samples are added to the labeled pool, along with the samples annotated by the oracle. The underlying assumption is that, given a sample labeled by an oracle, the model is more likely to produce good segmentations of images located nearby that sample. Our assumption is verified empirically in the experimental section of this paper. This simple procedure enables to rapidly increase the number of pseudo-labeled samples, and helps improving segmentation performance under a limited amount of oracle-based supervision.\n\n\n\n\n\n\\section{Proposed approach}\n\\label{sec:proposal}\n\n\n\\begin{wrapfigure}{R}{0.4\\textwidth}\n\\centering\n \\includegraphics[scale=0.75]{archi}\n \\caption{Our proposed DL architecture for classification and segmentation composed of: (1) a shared \\textbf{backbone} for feature extraction; (2) a \\textbf{classification head} for the classification task; (3) and a \\textbf{segmentation head} for the segmentation task with a U-Net style \\citep{Ronneberger-unet-2015}. The latter merges representations from the backbone, while gradually upscaling the feature maps to reach the full image resolution for the predicted mask, similarly to the U-Net model.}\n \\label{fig:fig-archi}\n\\vspace*{-0.1in}\n\\end{wrapfigure}\n\n\nWe consider an AL framework for training deep WSL models, where all the training images have class-level annotations, but no pixel-level annotations. Due to their high cost, pixel annotations are gradually acquired for training through oracle queries. It propagate pixel-wise knowledge encoded in the model though the labeled images.\n\nActive learning training consists of sequential training rounds. At each round $r$, the total training set ${\\mathbb{D}}$ that contains $n$ samples with unlabeled and labeled subsets (Fig.\\ref{fig:fig-proposal}).\n\\textbf{(1) Unlabeled subset:} contains samples without pixel-wise annotation (unlabeled samples) ${\\mathbb{U} = \\{\\bm{x}_i, y_i, -\\-\\}_{i=1}^u}$ where ${\\bm{x} \\in \\mathcal{X}}$ is the input image, ${y}$ is its global label; and the pixel label is missing.\n\\textbf{(2) Labeled subset:} contains samples with full supervision ${\\mathbb{L}=\\{\\bm{x}_i, y_i, \\bm{m}_i\\}_{i=1}^l}$ where ${\\bm{m}}$ is the pixel-wise annotation of the sample. ${\\mathbb{L}}$ is initially empty. It is gradually populated from ${\\mathbb{U}}$ by querying the oracle using an acquisition function.\nLet ${f(\\cdot: \\bm{\\theta})}$ denotes a DL model that is able to classify and segment an image ${\\bm{x}}$ (Fig.\\ref{fig:fig-archi}). For clarity, and since we focus on the segmentation task, we omit the notation for the classification task (to simplify the presentation). 
Therefore, ${f(\\bm{x})}$ refers to the predicted segmentation mask.\nLet ${\\mathbb{U}^{\\prime} \\subseteq \\mathbb{U}}$ and ${\\mathbb{U}^{\\prime\\prime} \\subseteq \\mathbb{U}}$ denote two subsets (Fig.\\ref{fig:fig-proposal}), with ${ \\mathbb{U}^{\\prime} \\cap \\mathbb{U^{\\prime\\prime}} = \\varnothing}$.\nIn our method, we introduce ${\\mathbb{P}}$ as a subset holder for pseudo-labeled samples, which is initially empty and gradually replenished (Fig.\\ref{fig:fig-proposal}, right).\nTo express the dependency of each subset on round ${r}$, we introduce the following notations: ${\\mathbb{U}_r, \\mathbb{L}_r, \\mathbb{P}_r, \\mathbb{U}^{\\prime}_r, \\mathbb{U}^{\\prime\\prime}_r}$. The samples in ${\\mathbb{P}_r}$ are denoted as ${\\{\\bm{x}_i, y_i, \\hat{\\bm{m}}_i\\}}$. The following holds: ${\\forall r: \\mathbb{D} = \\mathbb{L}_r \\cup \\mathbb{U}_r \\cup \\mathbb{P}_r}$.\n\n\n\nAlg.\\ref{alg:alg-0} describes the overall AL process with our pseudo-annotation method.\nFirst, ${\\mathbb{U}^{\\prime}_r}$ is queried, then labeled by the oracle, and added to ${\\mathbb{L}_r}$.\nUsing $k$-nn, ${\\mathbb{U}^{\\prime\\prime}_r}$ is selected based on their proximity to ${\\mathbb{L}_r}$ (Fig.\\ref{fig:fig-knn}); and pseudo-labeled by the model, then added to ${\\mathbb{P}_r}$. To fast-increase the size of ${\\mathbb{L}_r}$, ${\\mathbb{P}_r}$ is protected from being queried for the oracle until it is inevitable. In the \\emph{latter case}, queried samples from ${\\mathbb{P}_r}$ are used to fill ${\\mathbb{U}^{\\prime}}$; and they are no longer considered pseudo-labeled since they will be assigned the oracle annotation.\n\nTo measure image similarity for the $k$-nn method, we used the color distribution to describe image content. This can be a flexible descriptor for highly unstructured images such as histology images. Note that the ${k}$-nn method is considered \\emph{only} for pairs of samples of the \\emph{same class}. The underlying assumption is that samples of the same class, with similar color distributions, are likely to contain relatively similar objects. Consequently, labeling representative samples could be a proxy for supervised learning based on the underlying data distribution. This can increase the likelihood of the model to provide relatively good segmentations of the other samples. The proximity between two images ${(\\bm{x}_i, \\bm{x}_j)}$ is measured using the Jensen-Shannon divergence between their respective color distributions (measured as normalized histograms). For an image with multiple color planes, the similarity is formulated as the sum of similarities, one for each plane.\n\nAt round $r$, the queried and pseudo-labeled samples are both used in training by optimizing the following loss function:\n\\begin{equation}\n\\label{eq:eq-1}\n \\min_{\\bm{\\theta}} \\sum_{\\bm{x}_i \\in \\mathbb{L}_{r-1}} \\mathcal{L}(f(\\bm{x}_i), \\bm{m}_i) + \\lambda \\sum_{\\bm{x}_i \\in \\mathbb{P}_{r-1}} \\mathcal{L}(f(\\bm{x}_i), \\hat{\\bm{m}}_i),\n\\end{equation}\nwhere ${\\mathcal{L}(\\cdot, \\cdot)}$ is a segmentation loss, and ${\\lambda}$ a positive scalar. Eq.(\\ref{eq:eq-1}) corresponds to training the model (Fig.\\ref{fig:fig-archi}) solely for the segmentation task. Simultaneous training for classification and segmentation in this AL setup is avoided due to the unbalance between the number of samples that are labeled globally and at the pixel level. Therefore, we consider training the model for classification first. Then, we freeze the classifier parameters. 
Training for the segmentation tasks is resumed later. This yields the best classification performance, and allows a better study of the impact of the queried samples on the segmentation task.\n\nConsidering the relation of our method and label propagation algorithm \\citep{bengiolabelprop2010,zhou2004learning,zhulp2002}, we refer to our proposal as Label\\_prop.\n\n\n\\begin{center}\n\\begin{minipage}{0.6\\linewidth}\n\\IncMargin{0.04in}\n\\begin{algorithm}[H]\n \\SetKwInOut{Input}{Input}\n \n \\Input{\n ${\\mathbb{P}_0 = \\mathbb{L}_0 = \\varnothing}$\n \\\\\n ${\\bm{\\theta}^0}$: Initial parameters of ${f}$ trained on the classification task.\n \\\\\n ${\\texttt{maxr}: \\textrm{Maximum number of AL rounds}}$.\n }\n \\vspace{0.1in}\n Select ${\\mathbb{U}^{\\prime}_0}$ randomly and label them by an oracle. \\\\\n \\vspace{0.025in}\n $ \\mathbb{L}_0 \\ \\leftarrow\\ \\mathbb{U}^{\\prime}_0$. \\\\\n \\For{${r \\in 1 \\cdots \\texttt{maxr}}$} \n {\n \n ${\\bm{\\theta} \\ \\leftarrow\\ \\bm{\\theta}^0}$. \\\\\n \\vspace{0.03in}\n Train $f$ using ${\\mathbb{L}_{r-1}}$ \\colorbox{mybluelight}{${\\cup\\; \\mathbb{P}_{r-1}}$} and the loss in Eq. (\\ref{eq:eq-1}). \\\\\n \\vspace{0.03in}\n Select ${\\mathbb{U}^{\\prime}_r}$ and label them by an oracle. \\\\\n \\vspace{0.03in}\n $ \\mathbb{L}_r \\ \\leftarrow\\ \\mathbb{L}_{r-1} \\cup \\mathbb{U}^{\\prime}_r$. \\\\\n \\vspace{0.03in}\n \\colorbox{mybluelight}{Select ${\\mathbb{U}^{\\prime\\prime}_r}$.} \\\\\n \\vspace{0.03in}\n \\colorbox{mybluelight}{$ \\mathbb{P}_r \\ \\leftarrow\\ \\mathbb{P}_{r-1} \\cup \\mathbb{U}^{\\prime\\prime}_r$.} \\\\ \n \\vspace{0.03in}\n \\colorbox{mybluelight}{Pseudo-label ${\\mathbb{P}_r}$.}\n }\n \\vspace{0.1in}\n \\caption{Standard AL procedure and our proposal. The extra instructions associated with our method are indicated with a \\colorbox{mybluelight}{blue background}.\n }\n \\label{alg:alg-0}\n\\end{algorithm}\n\\DecMargin{0.04in}\n\\end{minipage}\n\\end{center}\n\n\n\n\\section{Results and discussion}\n\\label{sec:experiments}\n\n\n\n \\subsection{Experimental methodology:}\n\n \\begin{figure}[!b]\n \\centering\n \\includegraphics[width=0.99\\linewidth]{samples-datasets}\n \\caption{\\textbf{Top row}: GlaS dataset \\citep{sirinukunwattana2017gland}. \\textbf{Bottom row}: CUB dataset \\citep{WahCUB2002011}. (Best visualized in color.)}\n \\label{fig:fig-datasets}\n\\end{figure}\n\n\\noindent \\textbf{a) Datasets.}\nFor evaluation, datasets should have global and pixel-wise annotation. We consider two public datasets including both medical (histology) and natural images (Fig.\\ref{fig:fig-datasets}).\n\\begin{inparaenum}[(1)]\n\n \\item \\textbf{GlaS dataset}: This dataset contains histology images for colon cancer diagnosis\\footnote{GlaS: \\href{https:\/\/warwick.ac.uk\/fac\/sci\/dcs\/research\/tia\/glascontest}{warwick.ac.uk\/fac\/sci\/dcs\/research\/tia\/glascontest}.} \\citep{sirinukunwattana2017gland}. It includes 165 images derived from 16 Hematoxylin and Eosin (H\\&E) histology sections of two grades (classes): benign and malignant. It is divided into 84 samples for training and 80 samples for testing.\n The ROIs to be segmented are the glandes.\n \n \n \\item \\textbf{CUB-200-2011 dataset (CUB)}\\footnote{CUB: \\href{http:\/\/www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}{www.vision.caltech.edu\/visipedia\/CUB-200-2011.html}} \\citep{WahCUB2002011} is a dataset for bird species with ${11,788}$ samples ($5,994$ for training and $5,794$ for testing) and ${200}$ species. 
The ROIs to be segmented are the birds.\n \n \\end{inparaenum}\n %\n In GlaS and CUB datasets, we randomly select $80\\%$ of the training samples for effective training, and $20\\%$ for validation (with full supervision) to perform early stopping. The splits are identical to the ones used in \\citep{belharbi2019unimoconstraints,rony2019weak-loc-histo-survey} (split 0, fold 0), and are publicly available.\n\n\\noindent \\textbf{b) Active learning setup.}\nTo assess the performance of different AL acquisition methods, we consider a realistic scenario with respect to the number of samples to be labeled at each AL round, accounting for the load imposed on the oracle.\nTherefore, only a few samples are selected at each round for oracle annotation, and ${\\mathbb{L}}$ is slowly replenished. This allows better comparison between AL selection techniques since we spend more time\nin a phase where ${\\mathbb{L}}$ holds a few samples. Such a phase allows to better measure the impact of the selected samples. Filling ${\\mathbb{L}}$ quickly brings the model's performance to a plateau that hides the impact of newly selected samples.\nThe initial replenishment ($r=1$) is achieved by randomly selecting a few samples. The same samples are used for all AL techniques at round $r=1$ for a fair comparison. To avoid any bias from selecting unbalanced classes that can directly affect the segmentation performance, and hinder AL evaluation, the same number of samples is selected from each class (since the global annotation is known beforehand for all the samples). Note that the oracle is used only to provide pixel-wise annotation. Tab.\\ref{tab:tab0} provides the selection details.\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Number of samples selected for the oracle per round.}\n\\label{tab:tab0}\n\\centering\n \\resizebox{0.7\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\makecell[c]{\\#selected samples \\\\ \\emph{per-class} ($r=1$)} & \\makecell[c]{\\#selected samples \\\\ \\emph{per-class} ($r > 1$)} & \\makecell[c]{Max AL rounds \\\\ (\\texttt{maxr} in Alg.\\ref{alg:alg-0})}\\\\\n \\toprule\n GlaS & $4$ & $1$ & $25$ \\\\\n CUB & $1$ & $1$ & $20$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\\noindent \\textbf{c) Evaluation.}\nWe report the classification accuracy obtained by the classification head (Fig.\\ref{fig:fig-archi}). Average Dice index is used to measure the segmentation quality at each AL round forming a Dice index curve over all the rounds. To better assess the \\emph{dominance} of each method \\citep{settles2009active}, the Area Under the Dice index Curve is used (AUC). This provides a fair indicator of the dominant curve, but contrasts with standard AL works, where one or multiple specific operating points in the curve are selected (leading to a biased and less accurate protocol). The average and standard deviation of Dice index curve and AUC metric are reported based on 5 replications of a complete AL session, using a different seed for each session. An AL session across different methods uses the same seed.\n\nWhile our approach, referred to as (Label\\_prop), can operate on top of any AL selection criterion, we demonstrate its efficiency using simple random selection, which is often a baseline for AL experiments. Note that our pseudo-annotations are obtained from the segmentation head shown in Fig.\\ref{fig:fig-archi}. 
Our method is compared to three different AL selection approaches for segmentation:\n\\textbf{(1) random selection (Random)}: the samples are randomly selected;\n\\textbf{(2) entropy-based selection (Entropy)}: the scoring function per sample is the average entropy at the pixel level \\citep{gaur2016membrane}. Samples with high entropy are selected; and\n\\textbf{(3) Monte-Carlo dropout uncertainty (MC\\_Dropout)}: we use Monte-Carlo dropout \\citep{gorriz2017active,lubrano2019deep} at the pixel level to compute the uncertainty score per sample. Samples are forwarded ${50}$ times in the model, where dropout is set to ${0.2}$ \\citep{gorriz2017active,lubrano2019deep}. Then, the pixel-wise variance is estimated. Samples with high mean variance are selected.\n\n\n\\noindent \\textbf{Lower bound performance (WSL)}: We consider the segmentation performance obtained by WSL method as a lower bound. It is trained using only global annotation. CAMs are used to extract the segmentation mask. WILDCAT method is considered \\citep{durand2017wildcat} (Fig.\\ref{fig:fig-archi}) at the classification head to obtain the CAMs. For WSL method, a pre-trained model over ImageNet \\citep{imagenetcvpr09} is used to initialize the weights of the backbone, which is then fined-tuned.\nThe model is trained over the entire dataset, where samples are labeled globally only.\nThe obtained classifier using seed=0 is frozen and used as a backbone for \\emph{all} the other methods.\n\n\\noindent \\textbf{Upper bound performance (Full\\_sup)}: Fully supervised segmentation is considered as an upper bound on performance. The model in Fig.\\ref{fig:fig-archi} is trained for segmentation only using the entire dataset, where samples are labeled at the pixel level.\n\nFor a fair comparison, all the methods are trained using the same hyper-parameters over the same dataset. WSL and Full\\_sup methods have minor differences. Due to space limitation, all the hyper-parameters are presented in the supplementary material.\nIn Alg.\\ref{alg:alg-0}, notice that for our method, ${\\mathbb{P}_r}$ is not used at the current round $r$ but until the next round ${r+1}$. 
To take advantage of ${\\mathbb{P}_r}$ at round $r$, instructions from line-4 to line-10 are repeated twice in the provided results.\n\n\n\n\n\n\n\n\n\\subsection{Results}\n\\label{sub:results}\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}[b]{.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.39]{glas-dice-idx-test}\n \\caption{}\n \\label{fig:fig-al-0}\n \\end{subfigure}%\n \\begin{subfigure}[b]{.5\\linewidth}\n \\centering\n \\includegraphics[scale=0.39]{Caltech-UCSD-Birds-200-2011-dice-idx-test}\n \\caption{}\n \\label{fig:fig-al-1}\n \\end{subfigure}%\n\\caption{Average Dice index of the proposed and baseline methods over test sets.\n(\\protect\\subref{fig:fig-al-0}) GlaS.\n(\\protect\\subref{fig:fig-al-1}) CUB.\n}\n\\label{fig:fig-al-results}\n\\end{figure}\n\n\n\\begin{table}[t!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Classification accuracy over of the proposed deep WSL model on GlaS and CUB test datasets.}\n\\label{tab:tab-cl-acc}\n\\centering\n \\resizebox{.5\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\multicolumn{1}{c}{GlaS} & \\multicolumn{1}{c}{CUB}\\\\\n \\toprule\n \\makecell{Classification \\\\ accuracy (\\%)} & $99.50 \\pm 0.61$\n & $73.22 \\pm 0.19$\n \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\begin{table}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Average AUC and standard deviation (Fig.\\ref{fig:fig-al-results}) for Dice index performance over GlaS and CUB test sets.}\n\\label{tab:tab-seg-perf}\n\\centering\n \\resizebox{.6\\linewidth}{!}{\n\\begin{tabular}{l|ccc}\n Dataset & \\multicolumn{1}{c}{GlaS} & \\multicolumn{1}{c}{CUB} \\\\\n \\toprule\n \\toprule\n WSL & $66.44 \\pm 0.20$ & $39.22 \\pm 0.19$ \\\\\n \\midrule\n Random & $78.57 \\pm 0.93$ & $68.15 \\pm 0.61$ \\\\\n Entropy & $79.13 \\pm 0.26$ & $68.25 \\pm 0.29$ \\\\\n MC\\_Dropout & $77.92 \\pm 0.49$ & $67.69 \\pm 0.27$ \\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $\\bm{81.48 \\pm 1.03}$ & $\\bm{71.73 \\pm 0.67}$ \\\\\n \\midrule\n Full\\_sup & $86.53 \\pm 0.31$ & $75.29 \\pm 1.50$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\n\n\\begin{table*}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Readings of Dice index (mean $\\pm$ standard deviation) from Fig.\\ref{fig:fig-al-results} over test set for the \\textbf{first 5 queries} formed by each method. 
We start from the second query since the first query is random but identical for all methods.}\n\\label{tab:tab-dice-q}\n\\centering\n \\resizebox{0.9\\linewidth}{!}{\n\\begin{tabular}{l|cccccc}\n \\textbf{Queries} & \\textbf{q2} & \\textbf{q3} & \\textbf{q4} & \\textbf{q5} & \\textbf{q6} \\\\\n \\toprule\n \\toprule\n \\multicolumn{6}{c}{GlaS} \\\\\n \\toprule\n WSL & \\multicolumn{4}{c}{$66.44 \\pm 0.20$} \\\\\n \\midrule\n Random & $70.26 \\pm 3.02$ & $71.58 \\pm 3.14$ & $71.43 \\pm 1.83$ & $74.05 \\pm 3.14$ & $75.36 \\pm 3.45$\\\\\n Entropy & $\\bm{72.75 \\pm 2.96}$ & $70.93 \\pm 3.58$ & $72.60 \\pm 1.44$ & $73.44 \\pm 1.38$ & $75.15 \\pm 1.63$\\\\\n MC\\_Dropout & $68.44 \\pm 2.89$ & $69.70 \\pm 1.96$ & $69.97 \\pm 1.95$ & $72.71 \\pm 2.21$ & $73.00 \\pm 1.04$\\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $71.02 \\pm 4.19$ & $\\bm{74.07 \\pm 3.93}$ & $ \\bm{76.52 \\pm 3.49}$ & $\\bm{77.63 \\pm 2.73}$ & $\\bm{78.41 \\pm 1.23}$\\\\\n \n Full\\_sup & \\multicolumn{4}{c}{$86.53 \\pm 0.31$}\\\\\n \\toprule \\toprule\n \\multicolumn{6}{c}{CUB} \\\\\n \\toprule\n WSL & \\multicolumn{4}{c}{$39.22 \\pm 0.18$} \\\\\n \\midrule\n Random & $56.86 \\pm 2.07$ & $61.39 \\pm 1.85$ & $62.97 \\pm 1.13$ & $63.56 \\pm 4.02$ & $66.56 \\pm 2.50$\\\\\n Entropy & $53.37 \\pm 2.06$ & $59.11 \\pm 2.50$ & $60.48 \\pm 3.56$ & $63.81 \\pm 2.75$ & $63.59 \\pm 2.34$\\\\\n MC\\_Dropout & $57.13 \\pm 0.83$ & $59.98 \\pm 2.06$ & $63.52 \\pm 2.26$ & $63.02 \\pm 2.68$ & $64.68 \\pm 1.41$\\\\\n \\rowcolor{mybluelight}\n \\textbf{Label\\_prop (ours)} & $\\bm{62.58 \\pm 2.15}$ & $\\bm{66.32 \\pm 2.34}$ & $ \\bm{67.01 \\pm 2.85}$ & $\\bm{69.40 \\pm 3.40}$ & $\\bm{68.28 \\pm 1.60}$\\\\\n \\midrule\n Full\\_sup & \\multicolumn{4}{c}{$75.29 \\pm 1.50$}\\\\\n \\bottomrule\n \\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\nWe report the classification and segmentation performances following the training the proposed deep WSL model in Fig.\\ref{fig:fig-archi}. Tab.\\ref{tab:tab-cl-acc} reports the Classification accuracy of the classification head using WSL, which is\nclose to the results reported in \\citep{belharbi2019weakly,rony2019weak-loc-histo-survey}. The results of GlaS suggest that it is an easy dataset for classification.\n\nThe segmentation results are reported in Tabs. \\ref{tab:tab-dice-q} and \\ref{tab:tab-seg-perf}, and in Fig \\ref{fig:fig-al-results}.\n\nFig. \\ref{fig:fig-al-0} compares Dice accuracy on the \\textbf{GlaS dataset}. On the latter, we observe that adding more labels increases Dice index for all AL methods, yielding, as expected, better performance than the WSL method. Reading from Tab.\\ref{tab:tab-dice-q}, randomly labeling only 4 samples per class enables to easily outperform WSL. This means that using our approach in Fig.\\ref{fig:fig-archi}, with limited supervision, can lead to more accurate masks compared to using CAMs in the WSL method. From Fig.\\ref{fig:fig-al-0}, one can also observe that Random, Entropy, and MC\\_Dropout methods grow relatively in the same way, leading to the same overall performance, with the Entropy method slightly ahead. Considering the overall behavior of the curves, one may conclude that using advanced selection techniques such as MC\\_Dropout and Entropy provides an accuracy similar to simple random selection. On the one hand, since both methods have shown substantial improvements in AL for classification, and based on the results in Fig.\\ref{fig:fig-al-0}, one may conclude that all samples are equivalently informative for the model. 
Therefore, there is no better order to acquire them. On the other hand, using simply random selection and pseudo-labeled samples allowed our method to substantially improve the overall performance, demonstrating the benefits of self-learning.\n\nFig.\\ref{fig:fig-al-1} and Tab.\\ref{tab:tab-dice-q} compare Dice accuracy on the \\textbf{CUB dataset}, where labeling only one sample per class yielded a large improvement in Dice index, in comparison to WSL. Adding more samples increases the performance of all the methods. One can observe similar pattern as for GlaS: Random, Entropy and MC\\_Dropout methods yield similar curves, while the AUC performances of Random and Entropy methods are similar, and slightly ahead of MC\\_Dropout.\nSimilar to GlaS analysis, and based on the results of these three methods, one can conclude that none of the methods for ordering the samples is better than simple random selection. Using self-labeled samples in our method shows again its benefits. Simple random selection combined with self-annotation yields an overall best performance. Using two datasets, our empirical results suggest that self-learning, under limited oracle-annotation, has the potential to provide a reliable second source of annotation, which can efficiently enhance model performance, while using simple sample acquisition techniques.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=.7\\linewidth]{dice-pseudo-labeled}\n \\caption{Average Dice index over the pseudo-labeled samples of our method in \\textbf{each} AL round.}\n \\label{fig:fig-dice-pseudo-labeled}\n\\end{figure}\n\n\\noindent \\textbf{Pseudo-annotation performance}. Furthermore, the proposed approach is assessed on the pseudo-labeled samples at each AL round. Fig.\\ref{fig:fig-dice-pseudo-labeled} shows that the model provides good segmentations at the initial rounds. Then, the more supervision, the more accurate the pseudo-segmentation, as expected. This figure shows the interest and potential of self-learning in segmentation, and confirms our assumption that samples near the labeled ones are likely to achieve accurate pseudo-segmentation by the model.\n\n\\noindent \\textbf{Hyper-parameters}. Our approach requires two main hyper-parameters: ${ k \\ \\text{and } \\lambda}$. We conducted an ablation study over ${k}$ on GlaS dataset, and over ${\\lambda}$ on both datasets. Results, which are presented in the supplementary material, suggest that our method is less sensitive to ${k}$. ${\\lambda}$ plays an important role, and\nbased on our study, we recommend using small values of this weighting parameter. In our experiments, we used $\\lambda=0.1$ for Glas and $\\lambda = 0.001$ for CUB. We set ${k=40}$. We note that hyper-parameter tuning in AL is challenging due to the change of the size of the data set, which in turn changes the training dynamics. In all the experiments, we used fixed hyper-parameters across the AL rounds.\nFig.\\ref{fig:fig-dice-pseudo-labeled} suggests that a dynamic ${\\lambda(r)}$ that is increased through AL rounds could be more beneficial. However, this requires a principled update protocol for ${\\lambda}$, which was not explored in this work. Nonetheless, using a fixed value seems to yield promising results overall.\n\n\\noindent \\textbf{Supplementary material}. 
Due to space limitation, we deferred the hyper-parameters used in the experiments, results of the ablation study, visual results for the similarity measure and examples of predicted masks to the supplementary materials.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nDeep WSL models trained with global image-level annotations can play an important role in CNN visualization and interpretability. However, they are prone to high false-positive rates, especially for challenging images, leading to poor segmentations.\nTo alleviate this issue, we considered using pixel-wise supervision provided gradually through an AL framework. This annotation is integrated into training using an adequate deep convolutional model that allows supervised learning for both tasks: classification and segmentation. Through a few pixel-supervised samples, such a design is intended to provide full-resolution and more accurate masks compared to standard CAMs, which are trained without pixel supervision and often provide coarse resolution. Therefore, it enables a better CNN visualization and interpretation of CNN predictions.\nFurthermore, and unlike standard deep AL methods that focus solely on the acquisition function, we considered using self-learning as a second source of supervision to fast-improve the model segmentation.\nEvaluating our method using a realistic AL protocol over two challenging benchmarks, our results indicate that:\n(1) using a \\emph{few} supervised samples, the proposed architecture yielded more accurate segmentations compared to CAMs, with a large margin using different AL methods. Thus, it provides a solution to enhance pixel-wise predictions\nin real-world visual recognition applications.\n(2) using self-learning with random selection yielded substantial improvements. 
Self-learning under a limited oracle-budget can, therefore, provide a cost-effective alternative to standard AL protocols, where most of the effort is spent on the acquisition function.\n\n\\section*{Acknowledgment}\nThis research was supported in part by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, Compute Canada, MITACS, and the Ericsson Global AI Accelerator Montreal.\n\n\\clearpage\n\\newpage\n\n\\appendices\n\n\n\n\\section{Supplementary material for the experiments}\nDue to space limitation, we provide in this supplementary material detailed hyper-parameters used in the experiments, results of the ablation study, visual results to the similarity measure, and examples of predicted masks.\n\n\\subsection{Training hyper-parameters}\n\\label{subsec:hyper-params}\nTab.\\ref{tab:tabx-tr-hyper-params} shows the used hyper-parameters in all our experiments.\n\n\n\\begin{table}[h!]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Training hyper-parameters.}\n\\label{tab:tabx-tr-hyper-params}\n\\centering\n\\resizebox{0.7\\linewidth}{!}{\n\\begin{tabular}{lccc}\n Hyper-parameter & GlaS & CUB\\\\\n \\toprule\n Model backbone & \\multicolumn{2}{c}{ResNet-18 \\citep{heZRS16}}\\\\\n WILDCAT \\citep{durand2017wildcat}: && \\\\\n ${\\alpha}$ & \\multicolumn{2}{c}{${0.6}$} \\\\\n ${kmin}$ & \\multicolumn{2}{c}{${0.1}$} \\\\\n ${kmax}$ & \\multicolumn{2}{c}{${0.1}$} \\\\\n modalities & \\multicolumn{2}{c}{${5}$} \\\\\n \\midrule\n Optimizer & \\multicolumn{2}{c}{SGD}\\\\\n Nesterov acceleration & \\multicolumn{2}{c}{True}\\\\\n Momentum & \\multicolumn{2}{c}{$0.9$} \\\\\n Weight decay & \\multicolumn{2}{c}{$0.0001$}\\\\\n Learning rate (LR) & ${0.1}$ (WSL: $10^{-4}$) & ${0.1}$ (WSL: $10^{-2}$)\\\\\n LR decay & ${0.9}$ & ${0.95}$ (WSL: ${0.9}$) \\\\\n LR frequency decay & $100$ epochs & $10$ epochs \\\\\n Mini-batch size & ${20}$ & ${8}$ \\\\\n Learning epochs & ${1000}$ & ${30}$ (WSL: ${90}$) \\\\\n \\midrule\n Horizontal random flip & \\multicolumn{2}{c}{True} \\\\\n Vertical random flip & True & False \\\\\n \n Crop size & \\multicolumn{2}{c}{${416 \\times 416}$}\\\\\n \\midrule\n ${k}$ & \\multicolumn{2}{c}{${40}$}\\\\\n ${\\lambda}$ & ${0.1}$ & ${0.001}$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{knn-ablation-glas-Label_prop-best-k-40}\n \\caption{Ablation study over GlaS dataset (test set) over the hyper-parameter $k$ (x-axis). y-axis: AUC of Dice index (\\%) of \\textbf{25 queries for one trial}.\n AUC average $\\pm$ standard deviation: ${81.49 \\pm 0.59}$.\n Best performance in red dot: $k=40, AUC=82.41\\%$.\n }\n \\label{fig:abl-glas-knn}\n\\end{figure}\n\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{lambda-ablation-glas-Label_prop-best-lambda-0-dot-1}\n \\caption{Ablation study over GlaS dataset (test set) over the hyper-parameter ${\\lambda}$ (x-axis). y-axis: AUC of Dice index (\\%) of \\textbf{15 queries for one trial}.\n Best performance in red dot: $\\lambda=0.1, AUC=79.15\\%$.\n }\n \\label{fig:abl-glas-lambda}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\centering\n \\centering\n \\includegraphics[scale=0.5]{lambda-ablation-Caltech-UCSD-Birds-200-2011-Label_prop-best-lambda-0-dot-001}\n \\caption{Ablation study over CUB dataset (test set) over the hyper-parameter ${\\lambda}$ (x-axis). 
y-axis: AUC of Dice index (\\%) of \\textbf{5 queries for one trial}.\n Best performance in red dot: $\\lambda=0.001, AUC=66.94\\%$.\n }\n \\label{fig:abl-cub-lambda}\n\\end{figure}\n\n\n\\subsection{Ablation study}\n\\label{sub:ablation}\n\nWe study the impact of $k$ and ${\\lambda}$ on our method. Results are presented in Fig.\\ref{fig:abl-glas-knn}, \\ref{fig:abl-glas-lambda} for GlaS over ${k, \\lambda}$; and in Fig.\\ref{fig:abl-cub-lambda} for CUB over ${\\lambda}$. Due to the expensive computation time required to perform AL experiments, we limited the experiments (${k, \\lambda}$, number of trials, and \\texttt{maxr}).\nThe obtained results of this study show that our method is less sensitive to ${k}$ (standard deviation of ${0.59}$ in Fig.\\ref{fig:abl-glas-knn}).\nIn other hand, the method shows sensitivity to ${\\lambda}$ as expected from penalty-based methods \\citep{bertsekas1999nonlinear}.\nHowever, the method seems more sensitive to ${\\lambda}$ in the case of CUB than GlaS. CUb dataset is more challenging leading to more potential erroneous pseudo-annotation. Using Large ${\\lambda}$ will systematically push the model to learn on the wrong annotation (Fig.\\ref{fig:abl-cub-lambda}) which leads to poor results. In the other hand, GlaS seems to allow obtaining good segmentation where using large values of ${\\lambda}$ did not hinder the performance quickly (\\ref{fig:abl-glas-lambda}).\nThe obtained results recommend using small values that lead to better and stable performance. Using high values, combined with the pseudo-annotation errors, push the network to learn erroneous annotation leading to overall poor performance.\n\n\n\n\n\n\\subsection{Similarity measure}\n\\label{subsec:similarity}\nIn this section, we present some samples with their nearest neighbors. Although, it is difficult to quantitatively evaluate the quality of such measure.\nFig.\\ref{fig:glas-sim} shows the case of GlaS. Overall, the similarity shows good behavior of capturing the general stain of the image which is what was intended for since the structure of such histology images is subject to high variation. Since the stain variation is one of the challenging aspects in histology images \\citep{rony2019weak-loc-histo-survey}, labeling a sample with a common stain can help the model in segmenting other samples with similar stain.\nThe case of CUB, presented in Fig.\\ref{fig:cub-sim}, is more difficult to judge the quality since the images contain always the same species within their natural habitat. Often, the similarity succeeds to capture the overall color, background which can help segmenting the object in the neighbors and also the background. In some cases, the similarity captures samples with large zoom-in where the bird color dominate the image.\n\n\n\\begin{figure*}[hbt!]\n\\centering\n \\centering\n \\includegraphics[width=1.\\linewidth]{glas}\n \\caption{Examples of ${k}$-nn over GlaS dataset. The images represents the 10 nearest images to the first image in the extreme left ordered from the nearest.}\n \\label{fig:glas-sim}\n\\end{figure*}\n\n\n\\begin{figure*}[hbt!]\n\\centering\n \\centering\n \\includegraphics[width=0.9\\linewidth]{Caltech-UCSD-Birds-200-2011}\n \\caption{Examples of ${k}$-nn over CUB dataset. 
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
The magnitude of the effect of metal loss from galaxies due to outflows on setting their metallicity is not known precisely. This is determined by both the mass loading factor, $\\lambda$, which quantifies the total mass lost for a given rate of star formation, and the metal loading factor, which describes the metal content of the outflow. Recent work by \\cite{Chisholm2018} found that the metallicity of outflowing material is independent of the stellar mass of a galaxy, and therefore of the ISM metallicity. However, it is often assumed that the expelled gas has the same metallicity as the ISM \\citep{Erb2008,Finlator2008}.\n\nThe tradeoff between these different galactic processes has been captured by a number of different attempts to model the chemical evolution of galaxies \\citep[e.g.][]{Finlator2008,Peeples2011,Lilly2013}. The \\cite{Lilly2013} model makes the simplifying assumption that the evolution of a galaxy's star formation and metallicity is determined almost entirely by the total gas content. Making simple assumptions about the flow of gas into and out of galaxies, these models are able to reproduce the global mass-metallicity relation, along with the star formation rate--stellar mass relation.\n\nGiven the success of analytic models in describing the global properties of galaxies, there has been recent effort to extend the global scaling relations to kpc scales within galaxies \\citep[e.g.][]{RosalesOrtega2012,BarreraBallesteros2016,CanoDiaz2016,Medling2018}. These studies have been enabled by the introduction of large-scale spatially-resolved spectroscopic surveys such as the Calar Alto Legacy Integral Field Area Survey \\citep[CALIFA][]{Sanchez2012}, the Sydney-AAO Multi-object Integral Field Spectrograph \\citep[SAMI][]{Croom2012,Bryant2015} and Mapping Nearby Galaxies at Apache Point Observatory \\citep[MaNGA][]{Bundy2015}. \\cite{CanoDiaz2016} found that the gas-phase metallicity in kpc-sized regions of galaxies is tightly correlated with the stellar mass surface density, $\\Sigma_{*}$. However, \\cite{BarreraBallesteros2016} showed that this correlation is also dependent on the total stellar mass of the galaxy. That is, at fixed $\\Sigma_{*}$ the gas-phase metallicity is correlated with the integrated stellar mass. In direct analogy to the argument connecting the global MZR to the increasing depth of the gravitational potential well in more massive galaxies, \\cite{BarreraBallesteros2018} showed that the local metallicity also correlates with the local escape velocity. This may explain the connection of the local $\\Sigma_{*}$-metallicity relation to the integrated stellar mass.\n\n\nThe existence of local scaling relations is an indicator that some kind of regulatory process is at play, and that processes occurring on local scales may be able to explain the results seen in single-fibre spectroscopic surveys \\citep{BarreraBallesteros2016}. Indeed, work by \\cite{Carton2015} and \\cite{BarreraBallesteros2018} found that the \\cite{Lilly2013} gas regulator model is able to fit the metallicity given a reasonable estimate of the local gas fraction and mass loading factors.\n\nThe ability of gas-regulated models to achieve an equilibrium is in part determined by the rate at which gas is accreted and the metallicity of that gas. Historically, it was often assumed that the gas fueling star formation is accreted in a chemically pristine state \\citep[e.g.][]{Larson1972,Quirk1973,Finlator2008,Mannucci2010}. 
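To make the regulator picture sketched above concrete, a minimal equilibrium estimate can be written down under strong simplifying assumptions (instantaneous recycling, a gas reservoir in steady state, and outflows expelled at the ISM metallicity); the notation below is ours and is only a sketch, not the full \\cite{Lilly2013} formalism. Writing $y$ for the stellar yield, $R$ for the returned mass fraction, $\\lambda$ for the mass loading factor and $Z_{\\mathrm{in}}$ for the metallicity of the accreted gas, balancing the metal content of the reservoir gives\n\\begin{equation}\nZ_{\\mathrm{eq}} \\simeq Z_{\\mathrm{in}} + \\frac{y\\,(1-R)}{(1-R) + \\lambda},\n\\end{equation}\nso that the equilibrium metallicity rises with the yield, is suppressed as the outflow loading increases, and is offset by the metallicity of the inflowing gas. The commonly adopted assumption of pristine accretion corresponds to $Z_{\\mathrm{in}} = 0$. 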
However, there is mounting evidence that this is not the case \\citep{Oppenheimer2010,Rubin2012,Brook2014,Kacprzak2016,Gupta2018}. While work such as that done by \\cite{Peng2014} relies on the inference of the properties of gas in the intergalactic medium from modeling, there is a growing body of observations that directly probe the metallicity of gas outside of galaxies by measuring the absorption of background quasar light by extragalactic clouds \\citep{Lehner2013,Prochaska2017}. These clouds are observed to be cool and metal-rich, and are expected to be accreted onto their host galaxies in the future. Indeed, hydrodynamic simulations \\citep{Oppenheimer2010} have shown that below $z \\sim 1$ the dominant source of gas accreted onto galaxies is material that was previously ejected.\n\nIn addition to internal processes such as star formation and the expulsion of gas by feedback, there are a number of ways in which the gas content of a galaxy could be diminished by external environmental processes. One is starvation \\citep{Larson1980}, which can occur when a galaxy enters a dense environment and the acquisition of new gas is prevented. In this instance, the existing gas reservoir is consumed by star formation over several Gyr and the metallicity of the ISM increases. Starvation can also be initiated through the heating of gas in the intergalactic medium by galactic feedback \\citep[e.g.][]{Fielding2017}, or by the shock heating of accreted material \\citep[e.g.][]{Birnboim2003}. Alternatively, a kinetic interaction between the interstellar medium of a galaxy and the intergalactic medium can result in the ram pressure stripping of gas from a galaxy \\citep{Gunn1972}. Occurring on relatively short ($<100 \\, \\mathrm{Myr}$) timescales, ram pressure stripping is not expected to alter the chemical abundances in a galaxy before it is fully quenched. \n\nStudies of the environmental effect on galaxy metallicities generally find a small but significant dependence. For example, \\cite{Cooper2008} find that approximately $15 \\%$ of the scatter in the mass-metallicity relation is attributable to an increase in the metallicity of galaxies in high-density environments. Observationally, there is consensus that the environment has the largest effect on low-mass galaxies \\citep{Pasquali2012,Peng2014,Wu2017}. However, the interpretation of this fact is contentious. \\cite{Wu2017} attribute the elevated metallicity at fixed stellar mass in regions of greater local galaxy overdensity to a reduction in the gas accretion rate. However, \\cite{Peng2014} showed that even when the star formation rates of satellite galaxies in different environments are kept constant, implying no change in the total accretion rate, the metallicity offset is still evident. Their observations and modeling led them to the conclusion that satellite galaxies in dense environments must acquire more enriched gas from their surroundings.\n\nIn this paper we make use of the broad range of galaxy environments and stellar masses covered by the MaNGA survey to explore how the local metallicity scaling relations are affected by the environment for satellite and central galaxies. 
With MaNGA's wide wavelength coverage and spatial resolution we are able to estimate local gas-phase metallicities, gas mass fractions and escape velocities to compare the observations to a model for the gas regulation of the metallicity, and from this modeling infer environmental trends for the metallicity of gas inflows.\n\nIn Section \\ref{Methods}, we present our data and analysis techniques. In Section \\ref{Results} we investigate how galaxy environments change the local metallicity scaling relations, and in Section \\ref{Discussion}, we discuss our observations in the context of the \\cite{Lilly2013} gas regulator model. Throughout this paper we assume a flat $\\Lambda$CDM cosmology, with $H_{0}=70 \\mathrm{km \\, s^{-1} \\, Mpc^{-1}}$, $\\Omega_{\\Lambda} = 0.7$ and $\\Omega_{m} = 0.3$. Unless otherwise stated we assume a \\cite{Chabrier2003} stellar initial mass function. We will make use of two oxygen abundance indicators, which each have different absolute abundance scales. The O3N2 \\citep{Pettini2004} assumes $12+\\log(\\mathrm{O\/H})_{\\odot} = 8.69$ and the N2S2H$\\alpha$ indicator \\citep{Dopita2016} assumes $12+\\log(\\mathrm{O\/H})_{\\odot} = 8.77$.\n\n\n\\section{Methods}\\label{Methods}\nTo investigate the trends of the spatially-resolved metal distribution in galaxies with environment, we make use of the $8$th MaNGA product launch (MPL-8) internal data release, which is similar to the SDSS DR15 \\citep{Aguado2018}, but includes data from of $6507\\,$ unique galaxies. In this Section we describe the data, sample selection and methods used to perform our analysis on these data.\n\n\\subsection{MaNGA data}\nThe MaNGA survey is the largest optical integral field spectroscopic survey of galaxies to date. Run on the $2.5 \\, \\mathrm{m}$ SDSS telescope \\citep{Gunn2006} at Apache point observatory, the MaNGA survey aims to observe approximately $10,000$ galaxies. This sample comprises a primary and a secondary subsample that were selected to have approximately flat distributions in the $i$-band absolute magnitude, $M_{i}$. These provide coverage of galaxies out to $1.5$ and $2.5 \\, R_{e}$ respectively \\citep{Wake2017} and median physical resolutions of $1.37$ and $2.5 \\, \\mathrm{kpc}$.\n\nObservations of each galaxy are made with one of $17$ hexagonal optical fibre hexabundles, each comprising between $19$ and $127$ $2\\arcsec$ optical fibres, subtending between $12\\arcsec$ and $32\\arcsec$ on the sky. The fibre faces fill the bundle with $56\\%$ efficiency, so each target is observed with a three-point dither pattern with $15$-minute exposures per pointing. This pattern of observations is repeated until a median S\/N of $20 \\, fibre^{-1} \\, pixel^{-1}$ is achieved in the $g$-band, which is typically $2-3$ hours in total \\citep{Law2015,Yan2016}. Light from each hexabundle is taken from the fibres to the BOSS spectrograph \\citep{Smee2013} where it is split by a dichroic at $\\sim 6000 \\, \\mathrm{\\AA}$ into red and blue channels, then dispersed at $R \\approx 2000$. The resulting spectra are then mapped onto a regular grid with $0.5\\arcsec$ square spaxels, with continuous wavelength coverage between $3600 \\, \\mathrm{\\AA}$ and $10300 \\, \\mathrm{\\AA}$. 
For an in-depth discussion of the MaNGA data reduction pipeline, see \\cite{Law2016}.\n\nThe reduced data are analysed by the MaNGA Data Analysis Pipeline \\citep[\\DAP;][]{Belfiore2019,Westfall2019}, which extracts stellar kinematics, measures the strengths of continuum features, and extracts emission line fluxes, equivalent widths and kinematics for each galaxy. For this work, we make use of the emission line fluxes from the \\DAP's hybrid binning scheme. In this scheme, the data cubes are Voronoi binned \\citep{Cappellari2003} to a S\/N of at least $10$ in the continuum. Within each of these bins, {\\tt pPXF} ~\\citep{Cappellari2004} is used to fit an optimal continuum template which is made up of a linear combination of hierarchically-clustered templates from the MILES stellar library \\citep{SanchezBlazquez2006,FalconBarroso2011} as well as an $8$th degree multiplicative Legendre polynomial. This optimal continuum template constrains the stellar populations within the Voronoi bin, and is fitted by {\\tt pPXF} ~in conjunction with a set of Gaussian emission line templates to each individual spaxel in the bin. For a full description of the \\DAP ~fitting process, see \\cite{Westfall2019}, and for a discussion of the robustness of the emission line measurements, see \\cite{Belfiore2019}.\n\n\\subsection{Sample Selection}\\label{sample_selection}\nGalaxies in the MaNGA survey are selected such that the full sample has a roughly flat distribution of stellar masses. However, this parent sample contains galaxies with a wide range of morphologies and star formation rates. Our goal with this work is to make spatially-resolved measurements of the properties of the gas in galaxies as a function of the local environment. The metallicity indicators mentioned in Section \\ref{Calibrators} are only calibrated for HII regions, and therefore cannot be applied to a fraction of the spaxels in MaNGA. To make this determination, we compare the $\\mathrm{[NII]\\lambda 6584 \/ H\\alpha}$ and $\\mathrm{[OIII]\\lambda 5007 \/ H\\beta}$ emission line ratios on a \\cite{Baldwin1981} (BPT) diagram. Only spaxels with emission line ratios that satisfy both the \\cite{Kauffmann2003} and \\cite{Kewley2001} criteria for excitation of the gas by a young stellar population are included in our analysis. We further exclude spaxels for which the S\/N ratio in the emission lines used for the metallicity and determination of star formation are less than $3$. \\cite{Belfiore2019} showed that the fluxes of lines above this threshold in MaNGA are relatively robust to systematic effects. From these constraints we calculate the fraction of spaxels for a data cube for which we are able to reliably determine a metallicity. In computing this fraction, we include only spaxels where the $g$-band flux from the data cube is detected at a S\/N of $2$ or greater. This condition is imposed so that galaxies that do not fully fill the IFU field of view are not unduly excluded. Galaxies for which the fraction of spaxels with a measurable metallicity is larger than $60\\%$ are retained for our analysis.\n\nWe make a further cut on the galaxies in our sample based on our ability to robustly measure their metallicity gradients. Galaxies with an elliptical minor to major axis ratio ($b\/a$) less than $0.6$, as determined by the NASA-Sloan Atlas \\citep[NSA;][]{Blanton2011} Elliptical Petrosian photometry, were excluded to give a sample of face-on galaxies. We made a further restriction on the measured $r$-band effective radius, $R_{e}$. 
Galaxies with $R_{e}<4\\arcsec$ are also rejected. These criteria are motivated by the analysis performed by \\cite{Belfiore2017}, who showed that beam-smearing effects are non-negligible for highly inclined systems, or for galaxies that are small relative to the MaNGA point spread function. Similarly, \\cite{Pellegrini2019} used a set of realistic simulations to show that light-weighted quantities, such as dust attenuation, are systematically overestimated when observed in highly inclined systems.\n\n\nOur final sample consists of $1008 \\,$ galaxies, with stellar masses in the range $7.8< \\log(M_{*}\/M_{\\odot})<11.4$. For the majority of our analysis we restrict the stellar masses considered to $9< \\log(M_{*}\/M_{\\odot})<11$, and in this range our sample comprises $967$ galaxies. The distribution of galaxies in our sample the colour-mass plane is shown in Figure \\ref{M_star_cmd}.\n\n\\begin{figure}\n\\includegraphics{M_star_cmd.pdf}\n\\caption{In panel $a)$ we show the distribution of stellar mass, $\\mathrm{M_{*}}$ for the input sample (\\textit{grey}) and for the final sample (\\textit{red}). The attrition of sources occurs preferentially at high stellar mass, which is consistent with the rising fraction of passive galaxies. In $b)$ the positions of galaxies in the input sample (\\textit{grey}) and final sample (\\textit{red}) on the $u-r$ colour-mass diagram is shown. Galaxies that satisfy our selection criteria are predominantly in the blue cloud and forming stars.}\\label{M_star_cmd}\n\\end{figure}\n\n\\subsection{Gas-phase metallicities}\\label{Calibrators}\nTo probe the chemical evolution of the galaxies in our sample, we use the \\DAP ~emission line measurements to estimate the gas-phase oxygen abundances in these systems. For convenience we will use the terms `gas-phase oxygen abundance' and `metallicity' interchangeably throughout this work.\nThe estimation of gas-phase oxygen abundances with optical spectroscopy is often achieved by measuring a set of emission line ratios that vary with the conditions of the gas. The metallicity can be calculated by comparing the measured line ratios to theoretical models for HII regions \\citep[e.g.][]{Blanc2015}. Alternatively it can be estimated by using relationships that are empirically calibrated by comparing these line ratios to the spectra of HII regions for which the metallicity has been measured directly using temperature-sensitive emission line ratios, such as $\\mathrm{[OIII] \\lambda 4363 \/ [OIII] \\lambda 5007}$. While it is generally accepted that the direct method of the oxygen abundance determination is more robust than theoretical modeling or using empirical calibrations, it is generally not possible with datasets such as MaNGA as the $\\mathrm{[OIII] \\lambda 4363}$ line is typically $\\sim 100$ times fainter than the $\\mathrm{[OIII] \\lambda 5007}$ line.\n\nMany of the empirical calibrations suffer from systematics, being biased either high or low due to variations in the ionisation parameter in the gas, or contamination from diffuse ionised gas or light from an AGN. To account for this fact, we will make use of two different metallicity calibrations. \n\\subsubsection{O3N2}\nFor consistency with \\cite{BarreraBallesteros2018}, we use the \\cite{Pettini2004} $O3N2$ oxygen abundance diagnostic. 
This method makes use of the $\\mathrm{[OIII] \\lambda5007\/H\\beta}$ and $\\mathrm{[NII]\\lambda6584 \/ H\\alpha}$ emission line ratios, and was calibrated against a set of $137$ extragalactic HII regions for which a metallicity had been determined either by the direct $T_{e}$ method or by photoionisation modeling of their spectra. Taking $O3N2 = \\mathrm{\\log([OIII]\\lambda 5007 \/ H\\beta) - \\log([NII]\\lambda6584 \/ H\\alpha)}$, the metallicity of an HII region can be calculated as\n\\begin{equation}\n12+\\log(\\mathrm{O\/H}) = 8.73 - 0.32 \\times O3N2,\n\\end{equation} \nover the range $8.1 < 12+\\log(\\mathrm{O\/H}) < 9.05$. It should be noted that this calibration suffers from some degeneracy with variation in the ionisation parameter within the gas. For this reason it cannot be used in spectra that include significant contamination from diffuse ionised gas, or active galactic nuclei, and may also be biased by variations in $q$ between star-forming regions within a galaxy \\citep[see e.g.][]{Poetrodjojo2018}.\n\n\\subsubsection{N2S2H$\\alpha$}\nAn alternative method for calculating the metallicity of HII regions based on the relative intensities of the $\\mathrm{[NII]\\lambda 6584}$, $\\mathrm{[SII]\\lambda 6717,6731}$ and H$\\alpha$ lines was presented by \\cite{Dopita2016}. Assuming a simple relationship between N\/O and O\/H, and modeling theoretical HII regions with a variety of gas pressures and ionisation parameters using the {\\tt MAPPINGS 5.0} software, they found\n\\begin{equation}\n\\begin{aligned}\n12+\\mathrm{\\log(O\/H)} = & 8.77 + \\log(\\mathrm{[NII]\\lambda6584 \/[SII]\\lambda6717,6731}) \\\\\n\t& + 0.264 \\log(\\mathrm{[NII]\\lambda6584 \/H\\alpha}),\n\\end{aligned}\n\\end{equation}\nwhich they showed has very little dependence on the ionisation parameter and is valid over the range $8.0 \\lesssim12 + \\log(\\mathrm{O\/H}) < 9.05$. There is some evidence that the relationship between N\/O and O\/H varies with the total stellar mass of a galaxy \\citep{Belfiore2017}, but this should be negligible if analysis is carried out within narrow bins of stellar mass.\n\n\nEach of the two metallicity calibrations outlined above have different absolute abundance scalings. While it is not possible to directly compare the metallicities of galaxies derived with different calibrations, relative differences between two measurements made with the same indicator are likely to reflect real differences in the chemical composition of the galaxies in question.\n\n\n\\subsection{Gas density}\nFollowing the methodology of \\cite{BarreraBallesteros2018} we estimate the local neutral gas surface density from the dust attenuation derived from the observed Balmer line ratios. Under the assumption of a fixed gas-to-dust ratio, \\cite{BarreraBallesteros2018} utilised the observation that the gas surface density is related to the V-band attenuation via $\\Sigma_{gas} = 30 \\times \\mathrm{A_{V}} \\, \\mathrm{pc^{-2}}$. We apply a small correction to this to account for the variation in the dust-to-gas ratio with the gas phase metallicity using the relation given by \\cite{Wuyts2011},\n\\begin{equation}\n\\Sigma_{gas} = 30 \\times A_{V} \\times \\left( \\frac{Z}{Z_{\\odot}} \\right)^{-1} \\, \\mathrm{pc^{-2}},\n\\end{equation}\nfor $Z2$.\n\nWe have rescaled the estimated stellar masses from a \\cite{Kroupa2001} to a \\cite{Chabrier2003} initial mass function by dividing by $1.06$ \\citep{Zahid2012}. 
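\n\nFor concreteness, the two strong-line calibrations and the attenuation-based gas surface density described above can be evaluated with a short numerical sketch. The snippet below is illustrative only and makes several assumptions: the input emission-line fluxes are hypothetical, already extinction corrected and in a common (arbitrary) unit; the $V$-band attenuation $A_{V}$ is taken as given rather than re-derived from the Balmer decrement; and the gas surface density is returned in $\\mathrm{M_{\\odot} \\, pc^{-2}}$ for $A_{V}$ in magnitudes. The solar abundance passed to the dust-to-gas correction should match the indicator in use ($8.69$ for O3N2, $8.77$ for N2S2H$\\alpha$).\n\\begin{verbatim}\nimport numpy as np\n\ndef logOH_O3N2(f_oiii5007, f_hbeta, f_nii6584, f_halpha):\n    # Pettini & Pagel (2004), valid for 8.1 < 12+log(O\/H) < 9.05\n    o3n2 = np.log10(f_oiii5007 \/ f_hbeta) - np.log10(f_nii6584 \/ f_halpha)\n    return 8.73 - 0.32 * o3n2\n\ndef logOH_N2S2Ha(f_nii6584, f_sii6717_6731, f_halpha):\n    # Dopita et al. (2016) N2S2Halpha diagnostic\n    return (8.77 + np.log10(f_nii6584 \/ f_sii6717_6731)\n            + 0.264 * np.log10(f_nii6584 \/ f_halpha))\n\ndef sigma_gas(A_V, logOH, logOH_sun=8.69):\n    # Sigma_gas = 30 * A_V * (Z\/Z_sun)^-1, the relation adopted above\n    Z_over_Zsun = 10.0 ** (logOH - logOH_sun)\n    return 30.0 * A_V \/ Z_over_Zsun\n\\end{verbatim}\n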
To calculate the stellar mass surface density ($\\Sigma_{*}$) in a spaxel, we take these stellar mass estimates and divide them by the projected area of a $0\\farcs 5$ square spaxel at the systemic redshift of the host galaxy. A small correction for inclination of the galaxy is applied by multiplying these surface densities by the elliptical Petrosian minor-to-major axis ratio.\n\n\n\n\\subsection{Estimating the local escape velocity}\nWe estimate the local escape velocity from the halo assuming a spherically-symmetric dark matter halo that is described by a Navarro--Frenk--White \\citep[NFW;][]{Navarro1997} profile, using the same procedure as \\cite{BarreraBallesteros2018}. This method assumes that the star-forming gas is confined to a thin disk coplanar with the optical disk, and is in a circular orbit around the centre of the galaxy. We extract a rotation curve by taking the maximum and minimum measured line-of-sight velocity within a $30^{\\circ}$ wedge along the photometric major axis of the galaxy, similar to \\cite{BarreraBallesteros2014}. This rotation curve is corrected for the galaxy's inclination to our line of sight using the $r$-band elliptical Petrosian major to minor axis ratio. We fit the resulting rotation curve using the \\cite{Bohm2004} parametrisation,\n\n\\begin{equation}\nV(r_{depro}) = V_{max} \\frac{r_{depro}}{\\left(R_{turn}^{\\alpha} + r_{depro}^{\\alpha} \\right)^{1\/\\alpha}},\n\\end{equation}\nwhere $V_{max}$ is the maximum velocity of rotation, $r_{depro}$ is the deprojected radius, $R_{turn}$ is the radius at which the rotation curve flattens, and $\\alpha$ is a parameter that determines the shape of the rotation curve. This formulation is a special case of the phenomenological model presented by \\cite{Courteau1997}. The fitted parameters $V_{max}$ and $R_{turn}$ are then used to derive the local escape velocity using the following formula:\n\\begin{equation}\n V_{esc}^{2} = \n\t\\begin{cases}\n\t\tV_{esc,in}^{2} + V_{esc,out}^{2} & r_{depro} < R_{turn} \\\\\n\t\tV_{esc,out}^{2} & r_{depro} > R_{turn}\n\t\\end{cases} \n\\end{equation}\nwhere\n\\begin{equation*}\nV_{esc,in}^{2} = \\left(V_{max}\/R_{turn}\\right)^{2} \\left(R_{turn} - r_{depro} \\right)^{2}\n\\end{equation*}\nand\n\\begin{equation*}\nV_{esc,out}^{2} = 2V_{max}^{2} \\ln \\left( R_{vir}\/r_{depro} \\right) + 2V_{max}^2.\n\\end{equation*}\nIn this relation, $R_{vir}$ is the virial radius of the galaxy's halo, which we obtain by estimating the galaxy's total halo mass from its stellar mass using the relation derived by \\cite{Behroozi2010}. This computation of the local escape velocity assumes the galaxy potential to be spherically symmetric. \\cite{BarreraBallesteros2018} tested a more complicated two-component halo model that includes a contribution to the gravitational field from the baryons in the galaxy disk, and found that this causes a deviation in the estimated escape velocity of only $\\sim 5\\%$ from the simpler spherical case.\n\n\n\n\\subsection{Environment}\nThe aim of our current work is to explore how the local environments that galaxies are in today impact their chemical evolution. While there are many different ways of characterising environment, each capable of tracing a variety of different physical processes that can occur during a galaxy's lifetime, we will use the satellite\/central classification of our sample as the primary metric for environment. This has been shown to be a good predictor of the star-forming properties of galaxies at fixed stellar mass \\citep[e.g.][]{Peng2012}. 
We make use of the \\cite{Tempel2017} catalogue, which uses a friends-of-friends algorithm to provide estimates of group membership, group richness, and dark matter halo mass for galaxies in SDSS DR12 \\citep{Eisenstein2011, Alam2015}. Of the $6507\\,$ galaxies in MPL-8, $5333 \\,$ were associated with groups in the \\cite{Tempel2017} catalogue, of which $3447$ are identified as the centrals of their halo and $1886$ are satellites. This sample comprises a wide range of group masses, $M_{200}$, which is the mass contained within $R_{200}$ of the group centre, the radius at which the density of an NFW profile drops to $200$ times the average density of the Universe. Groups within this catalogue contain as few as two galaxies and up to $254$ members for the most massive halo. We show the distribution of halo masses for our final sample in Figure \\ref{M_200_dist}.\n\\begin{figure}\n\\includegraphics{M_200_dist.pdf}\n\\caption{The distribution of group halo mass, $M_{200}$ for the input sample (\\textit{grey}) and for the final sample (\\textit{red}). The loss of sources from the input sample at higher halo mass is more severe than in low mass halos due to the higher fraction of passive galaxies in the most dense environments. }\\label{M_200_dist}\n\\end{figure}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics{RP_vs_mass.pdf}\n\\end{center}\n\\caption{The median metallicity radial profile for galaxies in the stellar mass ranges indicated by the legend at the top of panel $a)$. The solid curves represent the median profiles, while the shaded regions of the same colour represent the $1\\sigma$ error range on the median. In panel $a)$ we show the metallicity median profiles made using the \\cite{Pettini2004} O3N2 indicator and in panel $b)$ we show the results from the \\cite{Dopita2016} $\\mathrm{N2S2H\\alpha}$ indicator. Note that each metallicity indicator has a different abundance scaling on the y-axis, and in each panel we mark the assumed solar abundance with a grey dashed line.}\\label{RP_vs_mass}\n\\end{figure*}\n\n\\subsection{Calculating metallicity radial profiles}\nTo characterise the radial dependence of metallicity in the galaxies in our sample, we construct the radial profile in the following way. In each galaxy, the deprojected distance from the centre of the galaxy has been calculated by the \\DAP based on the $r$-band surface brightness distribution in the SDSS imaging, and assuming each galaxy to be a tilted thin disk. We measure the metallicity as a function of radius and take the average value for spaxels in $0.5\\arcsec$-wide bins. The metallicity measurements are included only if they are classified as star-forming on the BPT diagram, if they have $\\mathrm{S\/N} >3$ in all emission lines utilised, and if the $\\mathrm{H\\alpha}$ equivalent width is greater than $6 \\, \\mathrm{\\AA}$ in emission. This $\\mathrm{H}\\alpha$ equivalent width criterion is consistent with that chosen by \\cite{BarreraBallesteros2018} to minimize contamination of the emission line fluxes from diffuse ionised gas.\n\nTo understand the behaviour of an ensemble of galaxies we measure what we will call the `median profile' for the metallicity. For this, we take the median of the individual radial profiles within radial bins that are $0.2 \\, R_{e}$ wide. Once the median has been calculated we perform a bootstrap resampling of the galaxies, recalculating the median profile for $1000$ realisations of the sample. 
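\n\nIn schematic form, and assuming that each galaxy's individual profile has already been evaluated on a common grid of $0.2 \\, R_{e}$ bins, this resampling might be implemented as follows (an illustrative sketch only; the array and function names are ours):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef median_profile(profiles):\n    # profiles: (N_gal, N_bins) array of individual metallicity profiles,\n    # with NaN where a galaxy has no valid spaxels in a radial bin\n    return np.nanmedian(profiles, axis=0)\n\ndef bootstrap_profile_scatter(profiles, n_boot=1000):\n    # resample galaxies (rows) with replacement, recomputing the median\n    n_gal = profiles.shape[0]\n    boot = np.empty((n_boot, profiles.shape[1]))\n    for i in range(n_boot):\n        sample = rng.integers(0, n_gal, n_gal)\n        boot[i] = np.nanmedian(profiles[sample], axis=0)\n    return np.nanstd(boot, axis=0)\n\\end{verbatim}\n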
At each radius, the uncertainty on the sample median is estimated to be the standard deviation of the bootstrapped median profiles. \n\n\\section{Results}\\label{Results}\n\n\n\n\n\\subsection{Metallicity profiles}\nFollowing \\cite{Belfiore2017} we calculate the median metallicity profiles in narrow bins of stellar mass using two different oxygen abundance indicators. The median profiles for all galaxies in our sample are shown in Figure \\ref{RP_vs_mass}. In panel $a)$ of this figure, we show the \\cite{Pettini2004} O3N2 oxygen abundance median profiles in $0.5 \\, \\mathrm{dex}$-wide bins of stellar mass. The profiles shown here are consistent with those shown in Figure 3 of \\cite{Belfiore2017}, in particular with the steepening of the metallicity gradient at higher stellar mass. Panel $b)$ shows the profiles for the same galaxies derived using the \\cite{Dopita2016} $\\mathrm{N2S2H\\alpha}$ indicator. This indicator shows qualitatively different results to O3N2 in the centres of galaxies above $\\log(M\/M_{\\odot}) \\sim 10.25$. While the $\\mathrm{N2S2H\\alpha}$ metallicities both continue to rise in the centres of massive galaxies, the O3N2 indicator shows a flattening. The origin of the mismatch in the behaviour of the metallicity profiles in the centres of massive galaxies between different abundance calibrations may be due to the fact that the \\cite{Dopita2016} metallicity calibration is strongly tied to the N\/O ratio. \\cite{Belfiore2017} showed N\/O to increase towards the centres of massive galaxies, while O\/H does not.\n\nAgain, we caution that the interpretation of metallicity radial profiles from datasets with kpc-scale physical resolution such as MaNGA is subject to the flattening of gradients by the observational point spread function \\citep{Yuan2013,Mast2014}. \\cite{Carton2017} presented a method to account for this effect, however they reported that it was not always robust in the presence of clumpy star formation distributions. Our galaxy size and inclination selection criteria that were outlined in Section \\ref{sample_selection} should mitigate the most severe resolution effects \\citep{Belfiore2017}, but we note that the most accurate determinations of metallicity gradients require observations with finer spatial resolution. The core conclusions of this work, particularly those based on local scaling relations, will be only minimally affected by this issue. \n\n\n\\subsubsection{Satellites vs. centrals}\nTo investigate the environmental dependence of the radial distribution of the oxygen abundance, we split our sample into satellites and centrals, then re-calculate the median profiles with each metallicity indicator. We show these radial profiles in Figure \\ref{sat_cen_rp_all}. At fixed stellar mass there are minor qualitative and quantitative differences between the radial distributions of the oxygen abundance. At all radii, the absolute differences in the median metallicity at fixed stellar mass are less than $\\sim 0.05 \\, \\mathrm{dex}$ for O3N2, and less than $\\sim 0.1 \\, \\mathrm{dex}$ for $\\mathrm{N2S2H\\alpha}$. We note that in the highest stellar mass bins, there are very small numbers of star-forming satellites, and their distribution is biased towards the lower stellar masses. For this reason the differences for the most massive galaxies are not robust in this sample. 
At lower stellar masses, however, the stellar mass distributions of satellites and centrals are similar, the sample sizes are larger and a fairer comparison can be made.\n\nSince each metallicity indicator has different systematics and biases, we only deem a difference to be real if it is reflected in both the O3N2 and $\\mathrm{N2S2H\\alpha}$ data. For galaxies in the mass range $9.4 < \\log(M_{*}\/M_{\\odot})<10.2$, there is a systematic offset in the metallicity, with satellite galaxies being more metal rich at all radii sampled, but with no significant difference in the gradient. In the lowest mass range, the O3N2 indicator shows a change in the metallicity gradient, however this is not evident in the $\\mathrm{N2S2H\\alpha}$ metallicity. \n\n\n\\begin{figure*}\n\\includegraphics{sat_cen_rp_all.pdf}\n\\caption{The median metallicity radial profiles in bins of stellar mass split into satellites and centrals. In the upper row, we show the profiles for the O3N2 indicator, while in the lower row the results for $\\mathrm{N2S2H\\alpha}$ are shown. The profiles for central galaxies are shown in the left column, and for the satellite galaxies in the middle column. On the right we show the difference between the satellites and centrals. Satellite galaxies in the range $9.4 < \\log(M_{*}\/M_{\\odot})<10.2$ are systematically more metal rich than centrals of the same mass in both metallicity indicators.}\\label{sat_cen_rp_all}\n\\end{figure*}\n\n\\subsection{Local scaling relations}\\label{local_scaling_relations}\nIn addition to the global scaling relations relating a global metallicity measurement to the integrated stellar mass of a galaxy, there are also local correlations between the stellar mass surface density \\citep{Moran2012,RosalesOrtega2012} and the local gas-phase metallicity. These local scaling relations capture the response of the chemical abundance of gas to processes occuring on local $\\sim \\mathrm{kpc}$ scales. \n\n\\begin{figure*}\n\\includegraphics{sig_mass_met_PP04.pdf}\n\\includegraphics{sig_mass_met_D16.pdf}\n\\caption{The relationship between local stellar mass surface density and metallicity in bins of total stellar mass for satellite and central galaxies. The greyscale background represents the density of data points for central galaxies, while the red contours represent the distribution of data points from satellite galaxies. For clarity, the distribution for satellite galaxies was smoothed to make the contours less subject to noise. Blue points represent the median values of metallicity in the central galaxies at a fixed $\\Sigma_{*}$, and the red points are the medians for satellite galaxies. These are only calculated where there are sufficient data. We include bootstrapped standard errors of the median, but these uncertainties are often smaller than the data points. In the top row we show the results for the \\cite{Pettini2004} O3N2 indicator, while in the bottom row we show the result for the \\cite{Dopita2016} N2S2H$\\alpha$ indicator. The metallicity of satellite galaxies at fixed stellar mass surface density is slightly higher $(\\sim 0.01 \\, \\mathrm{dex})$ than for central galaxies.}\\label{sig_mass_met}\n\\end{figure*}\n\n\\subsubsection{Metallicity and Stellar Density}\nAs more stars form and the stellar surface density ($\\Sigma_{*}$) increases, the amount of enrichment of the ISM also increases. We explore this relation in Figure \\ref{sig_mass_met}, where we show the relationship between $\\Sigma_{*}$ and metallicity for satellite and central galaxies. 
\\cite{Hwang2019} showed that in addition to the relationship between the local $\\Sigma_{*}$ and $12+\\log(\\mathrm{O\/H})$, there is a secondary dependence on the total stellar mass. For this reason we have split our analysis into $0.5 \\, \\mathrm{dex}$-wide bins of integrated $M_{*}$. Using both the O3N2 and N2S2H$\\alpha$, there is a small ($\\sim 0.01 \\, \\mathrm{dex}$) difference between the metallicity at fixed $\\Sigma_{*}$ between satellites and centrals, particularly in the $9.5 < \\log(\\mathrm{M_{*}\/M_{\\odot}})<10$ interval. While the formal uncertainties on the medians indicate that these differences are statistically significant, they are a factor of ten smaller than the standard deviations of the metallicity distributions. We note that for systems of low stellar mass, a satellite galaxy may occupy a group with a wide range of possible halo masses, corresponding to very different environments.\n\nGiven that the largest differences in the metallicity between satellites and centrals occurs at the lowest stellar masses, we show the impact of varying the stellar mass of the central in Figure \\ref{sig_mass_met_cen_mass}. Choosing satellite galaxies between $9<\\log(\\mathrm{M_{*}\/M_{\\odot}})<10$, we find a large systematic offset in metallicity for satellites of more massive central galaxies, corresponding to more massive group halos. Satellite galaxies associated with centrals more massive than $\\log(M_{*}\/\\mathrm{M_{\\odot}}) = 10.5$ have metallicities that are on average $0.08 \\pm 0.009 \\, \\mathrm{dex}$ higher than those galaxies which are satellites of centrals with $\\log(M_{*}\/\\mathrm{M_{\\odot}}) < 10$. To eliminate the possibility that a different distribution of total stellar masses for the targeted galaxies within the central stellar mass bin is responsible for the discrepancy, we perform a two-sample Kolmogorov-Smirnov test \\citep{Smirnov1939}. This test returns a statistic of $D=0.17$ with $p=0.71$, indicating no statistically significant difference in the total stellar masses.\n\n\n\\begin{figure}\n\\includegraphics{sig_mass_met_cen_mass_PP04.pdf}\n\\includegraphics{sig_mass_met_cen_mass_D16.pdf}\n\\caption{The local $\\Sigma_{*}-\\mathrm{O\/H}$ relation for satellite galaxies with $9<\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$ split by the mass of the galaxy which is central to their halo. In the upper panel, we show the results for the PP04 indicator, and in the lower panel we show the results for the D16 indicator. Blue points show the median metallicity at a given $\\Sigma_{*}$ for satellites of low-mass centrals ($\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$). These points trace the median of the grey-shaded distribution. Red points are the median metallicity as a function of $\\Sigma_{*}$ for satellites of high-mass centrals, shown by the red contours. The oxygen abundance is systematically higher for satellites of more massive centrals. }\\label{sig_mass_met_cen_mass}\n\\end{figure}\n\n\\subsubsection{Metallicity and gas fraction}\nModels predict \\citep[e.g.][]{Lilly2013}, and observations show \\citep{Mannucci2010, Moran2012}, that if low-metallicity gas is accreted onto the galaxy and the local gas fraction rises, then the metal content is diluted and the total metallicity of the gas will decrease. This relationship is investigated in Figure \\ref{mu_met}, where we show the gas-phase metallicity as a function of the local gas fraction, $\\mu$ in intervals of total stellar mass. 
In narrow bins of stellar mass, we find a tight correlation between the local gas fraction and the metallicity of the ISM. Once again, there is a small difference in the metallicities between satellites and centrals, with the difference being largest in galaxies between $10^{9.5}$ and $10^{10} \\, \\mathrm{M_{\\odot}}$. \n\n\n\n\\begin{figure*}\n\\includegraphics{mu_met_PP04.pdf}\\\\\n\\includegraphics{mu_met_D16.pdf}\n\\caption{The dependence of $12+\\log(\\mathrm{O\/H})$ on the local gas fraction, $\\mu$ in different bins of stellar mass. The contours, greyscale and coloured points are the same as in Figure \\ref{sig_mass_met}. The difference in metallicity at fixed $\\mu$ between satellites and centrals is largest in the range $9.5 < \\log(M_{*}\/\\mathrm{M_{\\odot}}) < 10$, where it reaches $\\sim 0.015 \\, \\mathrm{dex}$. }\\label{mu_met}\n\\end{figure*}\n\n\nFocussing again on the lower-mass satellite galaxies in our sample, we see in Figure \\ref{mu_met_cen_mass} that the satellites of massive galaxies are more enriched at fixed $\\mu$ than the satellites of less massive centrals. Comparing the contours of spaxels in the $\\mu$-metallicity plane, we see that on average, the satellites of more massive centrals have a lower inferred gas fraction. Nevertheless, at fixed $\\mu$ the offset in O\/H remains.\n\n\n\\begin{figure}\n\\includegraphics{mu_met_cen_mass_PP04.pdf}\\\\\n\\includegraphics{mu_met_cen_mass_D16.pdf}\n\\caption{The relationship between $\\mu$ and O\/H for satellites of low-mass centrals (grey background with blue points indicating the median) and high-mass (red contours with red points indicating the medians) for PP04 O3N2 (top) and D16 N2S2H$\\alpha$ (bottom). At fixed gas fraction, the median metallicity is $\\sim 0.1 \\, \\mathrm{dex}$ higher for the satellites of massive centrals.}\\label{mu_met_cen_mass}\n\\end{figure}\n\n\n\n\\section{Discussion}\\label{Discussion}\n\\subsection{The impact of environment on local scaling relations}\nWe have shown that the metallicity versus stellar mass, local escape velocity and gas fraction local scaling relations vary with the environment that galaxies inhabit. In this study we utilised the mass of the largest galaxy in the group as our estimate of environment. This quantity is correlated with the total mass of the halo \\citep{Behroozi2010}, though does not suffer from the large uncertainties involved in estimating halo dynamical masses from spectroscopy \\citep[see][for an excellent discussion of this point]{Robotham2011}. For satellite galaxies, the magnitude of the difference in metallicity appears to be a function of the stellar mass of the galaxy that is central to the group. In Figures \\ref{sig_mass_met_cen_mass} and \\ref{mu_met_cen_mass} we showed that the satellites of central galaxies more massive than $\\log{\\mathrm{M_{*}}\/M_{\\odot}}>10.5$ have local metallicities that are enhanced by $\\sim 0.1 \\, \\mathrm{dex}$ over similar galaxies which are satellites of less massive ($\\log{\\mathrm{M_{*}}\/M_{\\odot}}<10$) centrals. This enhancement appears to be independent of the local gas fraction and escape velocity. 
\n\n\n\\subsection{Accounting for outflows with the gas-regulator model}\nWhile differences in the metallicity of satellite galaxies at fixed $\\Sigma_{*}$ and $\\mu$ may be suggestive of some intrinsic difference between the chemical evolution of satellites in different mass halos, these simple scaling relations taken individually are unable to account for all factors that may influence the oxygen abundance. Neither of these scaling relations explicitly accounts for the loss of metals and corresponding reduction in oxygen abundance through outflows. To control for all of these factors at once, we fit the gas regulator model of \\cite{Lilly2013} to the data. \n\nThe gas regulator model for galaxy evolution makes the simple assumption that a galaxy's current star formation rate and metallicity are largely determined by the present day gas fraction. While it was originally devised to apply to galaxies as a whole, some authors have recently shown that it can be applied to galaxies locally on $\\sim \\mathrm{kpc}$ scales \\citep{Carton2015, BarreraBallesteros2018}. In their derivation of this model, \\cite{Lilly2013} showed that the metal content of galaxies will reach an equilibrium on timescales shorter than the time it takes for their total gas content to be depleted. At equilibrium, the metallicity is\n\n\\begin{equation}\\label{gas_regulator}\nZ_{eq} = Z_{0} + \\frac{y}{1 + r_{gas} + (1-R)^{-1} \\left(\\lambda + \\epsilon^{-1}\\frac{d \\ln(r_{gas})}{dt} \\right)},\n\\end{equation}\nwhere $r_{gas}$ is the ratio of gas to stellar mass, $R$ is the fraction of gas returned from stars to the ISM by stellar evolution, and $\\epsilon$ is the star formation efficiency. While this equation contains several unknown quantities, we can fix these to sensible values based on previous estimates from the literature. We adopt a value of $R = 0.4$, which is consistent with the predictions of stellar population synthesis models \\citep{Bruzual2003}, and is in line with the assumptions underlying previous work on this topic \\citep{Lilly2013,Carton2015,BarreraBallesteros2018}. Further, based on fitting the mass-metallicity relation for SDSS galaxies, \\cite{Lilly2013} were able to constrain the product $ \\epsilon^{-1}\\frac{d\\ln( r_{gas})}{dt} = -0.25$. The nucleosynthetic yield, $y$, is also not well known. The yield per stellar generation (and gas return fraction, $R$) is dependent on the stellar initial mass function, which some suggest may not be universal \\citep[e.g.][]{Gunawardhana2011,Parikh2018}. \\cite{Finlator2008} estimate the yield to be in the range $0.008 \\leq y \\leq 0.023$, and we assume a value near the middle of this range, $y = 0.014$. This is the value calculated by \\cite{BarreraBallesteros2018} based on both theoretical modeling using {\\tt STARBURST99} \\citep{Leitherer2014} and closed-box modeling of galaxy cluster data \\citep{Renzini2014}. We assume that this value is constant and valid throughout our entire sample. For a rigorous discussion of the impact of variations of the assumed yield on the calibration and interpretation of metallicities, see \\cite{Vincenzo2016}.\n\n\n\n\nIn the gas regulator model, the outflows are described by the mass-loading factor, $\\lambda$, which is the ratio of the rate of mass loss due to stellar feedback and winds to the star formation rate. 
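\n\nAs a simple numerical illustration of Equation \\ref{gas_regulator} with the parameter values adopted above (a sketch only; the dependence of $\\lambda$ on the local escape velocity is specified in the next subsection, so here $\\lambda$ is passed in directly):\n\\begin{verbatim}\ndef z_eq(r_gas, lam, Z0=0.0, y=0.014, R=0.4, eps_term=-0.25):\n    # Equilibrium metallicity of the gas regulator model;\n    # eps_term is the product (1\/epsilon) * d ln(r_gas) \/ dt\n    return Z0 + y \/ (1.0 + r_gas + (lam + eps_term) \/ (1.0 - R))\n\n# e.g. a region with gas-to-stellar mass ratio 0.3, mass loading lambda = 1\n# and pristine inflow (Z0 = 0) settles at Z_eq ~ 5.5e-3\nprint(z_eq(r_gas=0.3, lam=1.0))\n\\end{verbatim}\nIncreasing either $\\lambda$ or the gas fraction lowers $Z_{eq}$, while a non-zero $Z_{0}$ raises the entire relation; this is the behaviour exploited in the fits described below.\n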
We parametrize the mass-loading factor in a similar way to the formulation of \\cite{Peeples2011}, assuming that the metallicity of the outflows is the same as the metallicity in the ISM of the galaxy,\n\n\\begin{equation}\\label{lambda}\n\\lambda = \\left( \\frac{v_{0}}{v_{esc}(r)}\\right)^{\\alpha}.\n\\end{equation}\n\n\\cite{Peeples2011} suggest either $\\alpha=1$ or $2$, however we note that they parametrise $\\lambda$ in terms of the virial velocity of galaxy halos. The relationship between the virial velocity and the local escape velocity is complicated, and so to account for this we allow $\\alpha$ to vary freely. \\cite{BarreraBallesteros2018} also include an additive constant in their parametrisation of $\\lambda$, which imposes a minimum level of outflows from even the deepest potential well. For a reasonable choice of yield, we find that this has the effect of limiting the maximum metallicity that a gas-regulated system can achieve.\n\n\n\\subsubsection{Fitting the \\cite{Pettini2004} metallicity}\nIn order to constrain the values of $v_{0}$ and $\\alpha$ for our model, we fit Equation \\ref{gas_regulator} to the metallicity, gas fraction and local escape velocities inferred from the MaNGA spaxel data for all galaxies in our sample. We find $v_{0}=368 \\, \\mathrm{km \\, s^{-1}}$, and $\\alpha = 0.52$. In their work on deriving the $v_{esc}$-dependence of $\\lambda$, \\cite{BarreraBallesteros2018} noted that the gas regulator model does not necessarily provide a good fit to the data, though it was preferred to the leaky-box model of \\cite{Zhu2017} on the grounds that it provided more realistic estimates of $\\lambda$. In our fits of the gas-regulator model to the MaNGA data on kpc scales, we find smaller residuals at low gas fraction. This is a direct result of our choice not to include an additive constant in our parametrisation of $\\lambda$. In the analysis that follows we fix the dependence of $\\lambda$ on the escape velocity, reducing this problem to fitting only one variable, the metallicity of accreted gas, $Z_{0}$.\n\nWe perform a least-squares fit of the gas regulator model to the O3N2-based gas-phase metallicities, fitting for $Z_{0}$ in the sub-populations where the largest difference in metallicity is seen. This is for satellite galaxies with stellar masses in the range $9<\\log (M_{*}\/\\mathrm{M_{\\odot}})<10$, split based on the stellar mass of the corresponding central galaxy. For satellites of low-mass centrals ($\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$), the metallicity of the gas precipitating onto their discs inferred from the modeling is $Z_{0} = (4.68 \\pm 0.11) \\times 10^{-4}$, corresponding to $12+\\log(\\mathrm{O\/H})=7.46 \\pm 0.01$. For the gas being accreted onto the $9<\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$ satellites of high-mass central galaxies, we derive a metallicity of $Z_{0}= (1.17 \\pm 0.001) \\times 10^{-3}$, or $12+\\log(\\mathrm{O\/H})=7.87 \\pm 0.003$. \n\n\n\\subsubsection{Fitting the \\cite{Dopita2016} metallicity}\nThe metallicities derived from the \\cite{Dopita2016} N2S2H$\\alpha$ calibration have a different absolute abundance scaling and cover a larger range for the same set of spectra. This can be seen by comparing the $y$-axes in Figure \\ref{sig_mass_met}. Using the same values of the yield and gas return fraction as were used to fit the O3N2 metallicities, the data favours a negative value of $Z_{0}$, which is unphysical. 
With this indicator, the values of $\\lambda$ allowed by this parametrisation that also gives an appropriate shape to the distribution of modeled data in the $\\mu -Z $ plane, are too large. Using the $\\lambda$ parametrisation of \\cite{BarreraBallesteros2018}, $\\lambda=\\left(v_{0}\/v_{esc} \\right)^{\\alpha} + \\lambda_{0}$, we find $Z_{0} = (3.0 \\pm 0.12 )\\times 10^{-4}$ or $12+\\log(\\mathrm{O\/H})=7.27 \\pm 0.01$ for the satellites of low-mass centrals. For the satellites of high-mass centrals we find $Z_{0} = (8.6 \\pm 0.1 )\\times 10^{-4}$ or $12+\\log(\\mathrm{O\/H})=7.86 \\pm 0.003$. \n\n\n\nWe note that the absolute value for these inferred quantities is correlated with a number of unconstrained parameters including the nucleosynthetic yield of oxygen, $y$, the gas return fraction from stars, $R$, and the precise form of the mass loading factor, $\\lambda$. Nevertheless, we argue that the assumption that these parameters do not vary between star-forming galaxies in a relatively narrow mass range is reasonable, and that the choice to fix them for this comparison is justified. With this limitation it is not possible to derive an absolute abundance for the accreted gas, but the existence of a difference is robust. As was shown in Figures \\ref{sig_mass_met_cen_mass} and \\ref{mu_met_cen_mass}, the difference in local metallicity scaling relations is significant between these two galaxy subpopulations. The gas regulator model provides an interpretive framework to describe these differences in terms of the variation of the metallicity of the intergalactic medium in different environments, while controlling for small differences in the estimated local gas fraction and escape velocity.\n\n\n\n\\subsection{Can starvation explain our results?}\\label{Starvation}\nOur results are analogous to those seen by previous studies of the environmental dependence of the global mass-metallicity relation \\citep[e.g.][]{Cooper2008,Pasquali2012,Peng2014,Wu2017}. While different studies have found qualitatively similar results, there is considerable disagreement in the interpretation. \\cite{Peng2014}, argue that the primary driver of this trend must be the elevation of the metallicity of gas being accreted onto galaxies in dense environments. This argument hinges on their observation that the distribution of star-formation rates in their sample is independent of environment.\nThis conclusion contrasts starkly with the interpretation of \\cite{Wu2017}, who suggested that the environmental variation of the mass-metallicity relation can be explained by the reduction in the gas-fractions of galaxies with the local galaxy overdensity. \n\n\\begin{figure}\n\\includegraphics{sig_M_sig_SFR_cen_mass.pdf}\n\\caption{The star formation rate surface density as a function of stellar mass surface density for galaxies with $9<\\log(M_{*}\/\\mathrm{M_{\\odot}})<10$. The greyscale shows the distribution of measurements from satellites of low-mass centrals ($\\log(M_{*,cen}\/\\mathrm{M_{\\odot}})<10$), with the median of this distribution shown by blue points, and the $16$th and $84$th percentiles shown by blue lines. The red contours indicate the $\\Sigma_{*} - \\Sigma_{SFR}$ distribution for satellites of massive galaxies ($\\log(M_{*,cen}\/\\mathrm{M_{\\odot}})>10.5$), with the red points showing the median and the red lines marking the $16$th and $84$th percentile of the distribution. 
There is very little difference in the two distributions for the vast majority of spaxels in the two samples.}\\label{sig_M_sig_SFR_cen_mass}\n\\end{figure}\n\n\nStarvation, whereby the accretion of gas onto galaxies is curtailed and the gas reservoir is not replenished following star formation \\citep{Larson1980}, will have the effect of increasing the gas-phase metallicity of a galaxy or a region of the galaxy. This is a natural consequence of maintaining a constant metal yield from stellar evolution, while reducing the replenishment of the reservoir with relatively low-metallicity gas. Within the framework of gas-regulated galaxy evolution, this implies an anti-correlation between the gas surface density or star formation rate surface density and the metallicity in the gas. Starvation has been suggested as a key component for determining the star-forming properties of galaxies today \\citep{Peng2015,Trussler2018}, with environment appearing to play a role in instigating this process \\citep{vonderlinden2010, Davies2016}. \n\nWhile we do infer gas fractions that are, on average, lower for galaxies that are in more extreme environments (for example low-mass satellites of high-mass galaxies), we find that at fixed gas fraction the metallicity is higher for satellites relative to centrals, even in spaxels with high $\\mu$. The differing distributions of $\\log(\\mu)$ evident in Figure \\ref{mu_met_cen_mass} are largely driven by the differences in the distributions of $\\Sigma_{*}$. Although the distributions of total $M_{*}$ between the two subsamples used are not significantly different, the distributions of local stellar mass surface densities are. In Figure \\ref{sig_M_sig_SFR_cen_mass} we show the joint distributions for $\\Sigma_{*}$ and $\\Sigma_{SFR}$ for low-mass satellites, split by the stellar mass of the central galaxy. At fixed $\\Sigma_{*}$, the difference between the means of the $\\Sigma_{SFR}$ is smaller than $0.03$ dex, except above $\\log(\\Sigma_{*})=8.1$, but this range accounts for only $\\sim 10 \\% $ of the data and therefore has a minimal impact on our model fitting. \n\nIt is possible that the distribution of star formation has also changed. \\cite{Schaefer2019} showed that in dense environments, the outer parts of a galaxy can be quenched, leaving star formation in the inner regions unaffected. This transformation was nevertheless accompanied by a reduction in the total specific star formation rate (sSFR). To test for this, we perform a KS-test on the integrated specific star formation rates of the two subsamples. This yields $D = 0.17$ with $p=0.58$. Furthermore, the median $\\log(\\mathrm{sSFR\/yr^{-1}})$ for the satellites of high-mass centrals is $-10.12 \\pm 0.07$ and the median for the satellites of less massive centrals is $-10.21 \\pm 0.04$, where the error on the median has been estimated using a bootstrap resampling. The difference of the medians is within the error margin. \n\nThe similar distributions of $\\Sigma_{SFR}$ and sSFR disfavour the interpretation that the changing gas fraction due to starvation is responsible for the environmental differences in metallicity on kpc scales within our sample. This is not to say that starvation does not occur in dense environments; our sample selection simply favours the most star-forming galaxies, which are unlikely to have had their star formation rates reduced by environmental effects yet. 
Environmental differences in metallicity may occur in satellite galaxies before the onset of environment quenching.\n\n\n\\subsection{Comparison to simulations}\nNumerical simulations of galaxy evolution are beginning to show that the gas being accreted onto galaxies cannot be assumed to be pristine in all environments \\citep{Oppenheimer2010,Gupta2018}. In the simulations, the origin of accreted gas is observed to be highly dependent on redshift, with cosmological accretion of low-metallicity gas dominating at high redshift. However, as time progresses, feedback from star formation and AGN activity expel gas from the interstellar media of galaxies, which enriches their local environment with material that subsequently falls onto their neighbours. In the FIRE simulations, \\cite{AnglesAlcazar2017} found that the exchange of gas between galaxies that is facilitated by galactic winds dominates the accretion budget by $z=0$. \n\n\\cite{Gupta2018} explored this effect using data from the IllustrisTNG simulations. They showed that the enrichment of the intergalactic medium and the associated accretion onto galaxies is dependent on both the halo mass and whether a galaxy is infalling into its host halo or whether it has been a satellite for some time. At $z<0.5$, they find that the metallicity of gas being accreted onto galaxies with $9<\\log(M_{*}\/M_{\\odot})<10$ that are infalling into clusters is approximately $0.35 \\, Z_{\\odot}$, which is $1.5$ - $2$ times more metal rich than for similar galaxies in the field. This is consistent with the metallicity difference that we have inferred between the satellites of low-mass and high-mass centrals, though the absolute abundances differ. We again note that the value of $Z_{0}$ returned when the model represented by Equation \\ref{gas_regulator} is fitted to the data is sensitive to the precise values of the yield, $y$, and the gas return fraction, $R$, which are not well constrained by observations. The choice of these values will change the estimate of $Z_{0}$, but the relative difference between subsamples will not be greatly affected.\n\n\n\\subsection{Other studies of the environmental dependence of metallicity}\nThe impact of environment on galaxy evolution, in particular star formation and metallicity, is subtle. For this reason there have been very few observational works that measure the impact of environment on the spatial patterns of chemical abundances in galaxies. It has only been recently that large enough samples of integral field spectroscopic data have become available to adequately measure these effects. \n\nIn a recent study, Lian et al. (\\emph{in prep.}) used MaNGA data to study the metallicity gradients of galaxies as a function of the local environmental overdensity. They find that the metallicity gradients in low-mass satellite galaxies are shallower in dense environments, with a higher metallicity in their outer parts than similar galaxies in the field. They also find that the star formation rate gradients in galaxies in dense environments are steeper and conclude that the most likely explanation for these observations is a variation in the gas accretion timescale in different environments. Superficially this would seem to contradict our results, but we argue that this apparent disagreement can be resolved by noting the differences between the samples of galaxies considered. Lian et al. 
place less stringent constraints on the number of star-forming spaxels than we do, meaning that their galaxies have lower specific star formation rates on average. They therefore study galaxies that are likely to have inhabited their host halos for a longer period of time, and are more affected by environment quenching processes. This point is made in Section \\ref{Starvation}, where we rule out starvation as the primary driver of the environmental effects discussed in this work. Additionally, we note that Lian et al. placed no constraints on the inclination of galaxies in their sample to the line of sight. This may explain the differences in the metallicity gradients to those reported in our work.\n\n\n\n\n\\section{Conclusions}\nWe have estimated local metallicities, gas fractions, escape velocities and star formation rate surface densities for a sample of nearly face-on star-forming galaxies observed by MaNGA. In this sample we have explored the impact of the environment on local scaling relations between these estimated quantities, with a particular focus on satellite galaxies. \nWe find\n\\begin{itemize}\n\\item{At fixed stellar mass, we find a small but global offset of $0.025 \\, \\mathrm{dex}$ (for O3N2) or $0.05 \\, \\mathrm{dex}$ (for N2S2H$\\alpha$) in the metallicities of galaxies between satellites and centrals. For our sample we find little evidence for changes in the metallicity gradient between satellites and centrals.}\n\\item{The disparity between the metallicity of satellites and centrals is also evident in the O\/H -- $\\Sigma_{*}$ and O\/H -- $\\mu$ local scaling relations. We find the greatest offset when we split our satellite sample by the stellar mass of the galaxy that is central to the respective halo. For satellite galaxies in the range $9<\\log(M_{*}\/M_{\\odot})<10$, the local scaling relations are $\\sim 0.1 \\, \\mathrm{dex}$ more oxygen rich for satellites of hosts more massive than $10^{10.5} \\, M_{\\odot}$ than for hosts less massive than $10^{10} \\, M_{\\odot}$.}\n\\item{The offset in metallicity for satellite galaxies is found to exist between different environments at constant stellar mass surface density, gas mass fraction and star formation rate surface density. From these we conclude that the observed differences cannot be explained by gas starvation occurring in satellites around more massive centrals. Interestingly, the impact of environment on the chemical enrichment of galaxies appears to precede the onset of the quenching of star formation in their disks. }\n\\item{Measured on kpc-scales, local metallicities and gas fractions are found to be quantitatively consistent with the gas regulator model of \\cite{Lilly2013}. We assume that the mass loading factor describing outflows in galaxies is a function of the local escape velocity. Within the framework of the gas regulator model the only explanation for the elevated metallicity is an increase in $Z_{0}$ for satellites of high-mass galaxies. We estimate that the oxygen abundance in the inflowing gas changes from $12+\\log(\\mathrm{O\/H}) = 7.54 \\pm 0.01$ to $7.86 \\pm 0.003$ using the \\cite{Pettini2004} O3N2 indicator, or $12+\\log(\\mathrm{O\/H}) = 7.27 \\pm 0.01 $ to $7.86 \\pm 0.003 $ for N2S2H$\\alpha$.}\n\n\n\\end{itemize}\n\nGiven these conclusions, we interpret the enhanced metallicity of the satellites of more massive centrals to be evidence for the exchange of enriched gas between galaxies. 
In this picture, which has been motivated by both observations \\citep{Peng2014} and simulations \\citep{Oppenheimer2010,AnglesAlcazar2017,Gupta2018}, feedback driven winds expel metal-rich gas from a massive star-forming central which is subsequently accreted onto nearby satellites. While our estimates for the metallicity of gas accreted onto satellite galaxies are a factor of $\\sim 3$ lower than the predictions of \\cite{Gupta2018}, we note that the values returned by our modeling are subject to inherent uncertainties in the nucleosynthetic yield, the gas return fraction from stellar evolution, and the absolute abundance scaling of the strong-line metallicity diagnostics. Notwithstanding these systematic effects, the inferred differential in the metallicity of gas accreted onto satellites in different environments is qualitatively in good agreement with the simulations.\n\nThe impact of environment on the gas phase metallicity distribution of galaxies is likely to be complicated and multifaceted. In addition to the accretion of enriched gas in dense environments that we have studied here, tidal interactions, mergers and ram pressure can influence the distribution of metals in a galaxy. A complete understanding of the effect of environment on gas phase metallicities must take these other processes into account. A more comprehensive analysis of the detailed environmental dependence of the chemical properties of galaxies will be made possible when the full MaNGA sample becomes available, or with future integral field spectroscopic surveys such as HECTOR \\citep{Bryant2016}.\n\n\n\n\\acknowledgements \nWe would like to thank the anonymous referee, whose constructive comments were extremely helpful in clarifying some aspects of this paper. \nALS, ZJP and CT acknowledge NSF CAREER Award AST-1554877. AJ acknowledges NSF Award 1616547. RM acknowledges ERC Advanced Grant 695671 \"QUENCH\" and support from the the Science and Technology Facilities Council (STFC).\n\nThis research made use of \\texttt{Astropy}, a community-developed core \\texttt{python} package for astronomy \\citep{Astropy2013,Astropy2018}; \\texttt{matplotlib} \\citep{Matplotlib}, an open-source \\texttt{python} plotting library; and \\texttt{LMFIT} \\citep{LMFIT}, an interface for non-linear optimization in \\texttt{python}.\nFunding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. 
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\\'{i}sica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) \/ University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\\\"{u}r Astrophysik Potsdam (AIP), Max-Planck-Institut f\\\"{u}r Astronomie (MPIA Heidelberg), Max-Planck-Institut f\\\"{u}r Astrophysik (MPA Garching), Max-Planck-Institut f\\\"{u}r Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\\'{o}rio Nacional \/ MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\\'{o}noma de M\\'{e}xico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.\n\n\n\n \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCharalambos presented the $q-$ deformed Vandermonde and Cauchy formulae. Moreover, the $q-$ deformed univariate discrete probability distributions were investigated.Their properties and limiting distributions were derived \\cite{CA1}.\n\nFurthermore, the $q-$ deformed multinomial coefficients was defined and their recurrence relations were deduced. Also, the $q-$ deformed multinomial and negative $q-$ deformed multinomial probability distributions of the first and second kind were presented \\cite{CA2}. \n\nThe same author extended the multivariate $q-$ deformed vandermonde and Cauchy formulae. Also, the multivariate $q-$ Pol\\'ya and inverse $q-$ Pol\\'ya were constructed \\cite{CA3}.\n\nLet $p$ and $q$ be two positive real numbers such that $ 00, \\forall n\\in\\mathbb{N},$ and $\\mathcal{R}(1,1)=0$ by definition. We denote by $\\mathbb{D}_{R}$ the bidisk \\begin{eqnarray*}\n\t\\mathbb{D}_{R}\n\n\n\t&=&\\left\\lbrace w=(w_1,w_2)\\in\\mathbb{C}^2: |w_j| 1$, it is possible to build a\nmicro tree decomposition $MS$ of $P$ in linear time such that\n$|MS| = O(\\ceil{n_P\/s})$ and $|V(M)| \\leq s$ for any $M \\in MS$\n\\end{lemma}\n\n\\subsection{Implementing the Algorithm} In this section we show\nhow to implement the $\\ensuremath{\\textsc{Down}}$ procedure using the micro tree\ndecomposition. First decompose $P$ according to\nLemma~\\ref{clustering} for a parameter $s$ to be chosen later.\nHence, each micro tree has at most $s$ nodes and $|MS| =\nO(\\ceil{n_P\/s})$. We represent the state $X$ compactly using a bit vector\nfor each micro tree. Specifically, for any micro tree $M$ we store\na bit vector $X_M = [b_{1}, \\ldots, b_{s}]$, such that $X_M[i] = 1$\niff the $i$th node in a preorder traversal of $M$ is in $X$. If\n$|V(M)| < s$ we leave the remaining values undefined. Later we\nchoose $s= \\Theta(\\log n_{T})$ such that each bit vector can be\nrepresented in a single word.\n\nNext we define a $\\ensuremath{\\textsc{Down}}_{M}$ procedure on each micro tree $M\\in\nMS$. 
Due to the overlap between micro trees the $\\ensuremath{\\textsc{Down}}_{M}$\nprocedure takes a bit $b$ which will be used to propagate\ninformation between micro trees. For each micro tree $M \\in MS$,\nbit vector $X_M$, bit $b$, and $y\\in V(T)$ define:\n\\begin{relate}\n\\item[$\\ensuremath{\\textsc{Down}}_{M}(X_M, b, y)$:] Compute the state $X'_M := \\ensuremath{\\textsc{Child}}(\\{x \\in X_M \\mid \\ensuremath{\\mathrm{label}}(x) = \\ensuremath{\\mathrm{label}}(y)\\}) \\cup \\{x \\in X_M \\mid \\ensuremath{\\mathrm{label}}(x) \\neq \\ensuremath{\\mathrm{label}}(y)\\}$. If $b=0$, return $X_M'$, else return $X_M' \\cup \\{\\ensuremath{\\mathrm{root}}(M)\\}$.\n\\end{relate}\nLater we will show how to implemenent $\\ensuremath{\\textsc{Down}}_{M}$ in constant time\nfor $s = \\Theta(\\log n_{T})$. First we show how to use $\\ensuremath{\\textsc{Down}}_M$ to\nsimulate $\\ensuremath{\\textsc{Down}}$ on $P$. We define a recursive procedure $\\ensuremath{\\textsc{Down}}$\nwhich traverse the hiearchy of micro trees. For micro tree\n$M$, state $X$, bit $b$, and $y \\in V(T)$ define:\n\\begin{relate}\n\\item[$\\ensuremath{\\textsc{Down}}(X,M,b,y)$:] Let $M_1, \\ldots, M_k$ be the children of $M$.\n\\begin{enumerate}\n\\item Compute $X_M := \\ensuremath{\\textsc{Down}}_{M}(X_M, b, y)$.\n\\item For $i:=1$ to $k$ do:\n\\begin{enumerate}\n\\item Compute $\\ensuremath{\\textsc{Down}}(X, M_i, b_{i}, y)$, where $b_{i} = 1$\niff\n\n$\\ensuremath{\\mathrm{root}}(M_i) \\in X_M$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{relate}\nIntuitively, the $\\ensuremath{\\textsc{Down}}$ procedure works in a top-down fashion\nusing the $b$ bit to propagate the new state of the root of micro\ntree. To solve the problem within our framework we initially\nconstruct the state representing $\\{\\ensuremath{\\mathrm{root}}(P)\\}$. Then, at each\nstep we call $\\ensuremath{\\textsc{Down}}(R_{j}, 0, y)$ on each root micro tree $R_{j}$.\nWe formally show that this is correct:\n\\begin{lemma}\nThe above algorithm correctly simulates the $\\ensuremath{\\textsc{Down}}$ procedure on\n$P$.\n\\end{lemma}\n\\begin{proof}\nLet $X$ be the state and let $X' :=\\ensuremath{\\textsc{Down}}(X, y)$. For\nsimplicity, assume that there is only one root micro tree $R$.\nSince the root micro trees can only overlap at $\\ensuremath{\\mathrm{root}}(P)$ it is\nstraightforward to generalize the result to any number of roots.\nWe show that if $X$ is represented by bit vectors at each micro\ntree then calling $\\ensuremath{\\textsc{Down}}(R, 0, y)$ correctly produces the new\nstate $X'$.\n\nIf $R$ is the only micro tree then only line 1 is executed. Since\n$b = 0$ this produces the correct state by definition of\n$\\ensuremath{\\textsc{Down}}_{M}$. Otherwise, consider a micro tree $M$ with children\n$M_{1}, \\ldots, M_{k}$ and assume that $b = 1$ iff $\\ensuremath{\\mathrm{root}}(M) \\in\nX'$. Line 1 computes and stores the new state returned by\n$\\ensuremath{\\textsc{Down}}_{M}$. If $b=0$ the correctness follows immediately. If\n$b=1$ observe that $\\ensuremath{\\textsc{Down}}_{M}$ first computes the new state and\nthen adds $\\ensuremath{\\mathrm{root}}(M)$. Hence, in both cases the state of $M$ is\ncorrectly computed. Line 2 recursively computes the new state of\nthe children of $M$.\n\\end{proof}\n\nIf each micro tree has size at most $s$ and $\\ensuremath{\\textsc{Down}}_{M}$ can be\ncomputed in constant time it follows that the above algorithm\nsolves TPS in $O(\\ceil{n_{P}\/s})$ time. 
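To make the recursion concrete, the following sketch (in Python, with illustrative names; the micro-tree container and its fields are our assumption and not part of the construction above) traverses the hierarchy exactly as in steps 1 and 2 of the $\\ensuremath{\\textsc{Down}}$ pseudocode: each micro tree stores its state word and the preorder position of its root inside the parent's word, so that the bit $b_{i}$ can be read off directly. A word-level $\\ensuremath{\\textsc{Down}}_{M}$ is sketched further below, after the preprocessing of $\\ensuremath{\\textsc{Eq}}_{M}$ and $\\ensuremath{\\textsc{Child}}_{M}$ is described.
\\begin{verbatim}
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MicroTree:
    bits: int            # state word X_M; bit i = i-th preorder node of M
    pos_in_parent: int   # preorder position of root(M) in the parent's word
    children: List["MicroTree"] = field(default_factory=list)

def down(M: MicroTree, b: int, label_y, down_m: Callable) -> None:
    # Step 1: X_M := Down_M(X_M, b, y), implemented by the callable down_m.
    M.bits = down_m(M, b, label_y)
    # Step 2: recurse, with b_i = 1 iff root(M_i) lies in the new X_M.
    for child in M.children:
        b_i = (M.bits >> child.pos_in_parent) & 1
        down(child, b_i, label_y, down_m)
\\end{verbatim}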
In the following section we show\nhow to do this for $s = \\Theta(\\log n_{T})$, while maintaining\nlinear space.\n\n\\subsection{Representing Micro Trees} In this section we show\nhow to preprocess all micro trees $M \\in MS$ such that\n$\\ensuremath{\\textsc{Down}}_{M}$ can be computed in constant time. This preprocessing\nmay be viewed as a ``Four Russian Technique''~\\cite{ADKF1970}. To\nachieve this in linear space we need the following auxiliary\nprocedures on micro trees. For each micro tree $M$, bit vector\n$X_M$, and $\\alpha \\in \\Sigma$ define:\n\\begin{relate}\n\\item[$\\ensuremath{\\textsc{Child}}_{M}(X_M)$:] Return the bit vector of nodes in $M$ that are children of nodes in $X_M$.\n\\item[$\\ensuremath{\\textsc{Eq}}_{M}(\\alpha)$:] Return the bit vector of nodes in $M$ labeled $\\alpha$.\n\\end{relate}\nBy definition it follows that:\n\\begin{equation*}\n\\begin{aligned}\n\\ensuremath{\\textsc{Down}}_{M}(X_M,b, y) &=\n\\begin{cases}\n\\ensuremath{\\textsc{Child}}_{M}(X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y)))\\; \\cup \\\\\n\\quad (X_M \\backslash (X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y))) & \\text{if $b = 0$}, \\\\\n\\ensuremath{\\textsc{Child}}_{M}(X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y))) \\; \\cup \\\\\n\\quad (X_M \\backslash (X_M \\cap \\ensuremath{\\textsc{Eq}}_M(\\ensuremath{\\mathrm{label}}(y))) \\cup \\{\\ensuremath{\\mathrm{root}}(M)\\}\n& \\text{if $b=1$}.\n\\end{cases} \\\\\n\\end{aligned}\n\\end{equation*}\nRecall that the bit vectors are represented in a single word. Hence,\ngiven $\\ensuremath{\\textsc{Child}}_{M}$ and $\\ensuremath{\\textsc{Eq}}_{M}$ we can compute $\\ensuremath{\\textsc{Down}}_M$ using\nstandard bit-operations in constant time.\n\nNext we show how to efficiently implement the operations. For each\nmicro tree $M \\in MS$ we store the value $\\ensuremath{\\textsc{Eq}}_{M}(\\alpha)$ in a\nhash table indexed by $\\alpha$. Since the total number of\ndifferent characters in any $M\\in MS$ is at most $s$, the hash\ntable $\\ensuremath{\\textsc{Eq}}_{M}$ contains at most $s$ entries. Hence, the total\nnumber of entries in all hash tables is $O(n_{P})$. Using perfect\nhashing we can thus represent $\\ensuremath{\\textsc{Eq}}_{M}$ for all micro trees, $M\\in\nMS$, in $O(n_{P})$ space and $O(1)$ worst-case lookup time. The\npreprocessing time is expected $O(n_{P})$ w.h.p.. To get a worst-case bound we \nuse the deterministic dictionary of Hagerup et. al.\n\\cite{HMP2001} with $O((n_{P})\\log (n_{P}))$ worst-case preprocessing\ntime. \n\nNext consider implementing $\\ensuremath{\\textsc{Child}}_{M}$. Since this\nprocedure is independent of the labeling of $M$ it suffices to\nprecompute it for all \\emph{topologically} different rooted trees\nof size at most $s$. The total number of such trees is less than\n$2^{2s}$ and the number of different states in each tree is at\nmost $2^{s}$. Therefore $\\ensuremath{\\textsc{Child}}_{M}$ has to be computed for a\ntotal of $2^{2s}\\cdot 2^{s} = 2^{3s}$ different inputs. For any\ngiven tree and any given state, the value of $\\ensuremath{\\textsc{Child}}_{M}$ can be\ncomputed and encoded in $O(s)$ time. In total we can precompute\nall values of $\\ensuremath{\\textsc{Child}}_{M}$ in $O(s2^{3s})$ time. Choosing the\nlargest $s$ such that $3s + \\log s \\leq n_{T}$ (hence $s =\n\\Theta(\\log n_{T})$) we can precompute\nall values of $\\ensuremath{\\textsc{Child}}_{M}$ in $O(s2^{3s}) = O(n_{T})$ time and space. 
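Given these tables, the promised constant-time $\\ensuremath{\\textsc{Down}}_{M}$ is a handful of word operations. The sketch below continues the Python sketch above and follows the displayed case analysis; the fields M.eq (label to the word $\\ensuremath{\\textsc{Eq}}_{M}(\\alpha)$) and M.child (state word to the precomputed $\\ensuremath{\\textsc{Child}}_{M}$ word for M's topology) on the micro-tree record are our assumption, as is placing $\\ensuremath{\\mathrm{root}}(M)$ at preorder position 0, i.e. the lowest bit.
\\begin{verbatim}
def down_m(M, b, label_y):
    # Word-level Down_M using the two precomputed tables.
    eq = M.eq.get(label_y, 0)    # Eq_M(label(y))
    matched = M.bits & eq        # X_M intersected with Eq_M(label(y))
    new_X = M.child[matched] | (M.bits & ~eq)
    if b:
        new_X |= 1               # add root(M) when b = 1
    return new_X
\\end{verbatim}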
Each of\nthe inputs to $\\ensuremath{\\textsc{Child}}_{M}$ are encoded in a single word such that\nwe can look them up in constant time.\n\nFinally, note that we also need to report the leaves of a state efficiently since this is needed in line 1 in the $\\ensuremath{\\textsc{Visit}}$-procedure. To do this compute the state $L$ corresponding to all leaves in $P$. Clearly, the leaves of a state $X$ can be computed by performing a bitwise AND of each pair of bit vectors in $L$ and $X$. Computing $L$ uses $O(n_{P})$ time and the bitwise AND operation uses $O(\\ceil{n_{P}\/s})$ time.\n\nCombining the results, we decompose $P$, for $s$ as described\nabove, and compute all values of $\\ensuremath{\\textsc{Eq}}_{M}$ and $\\ensuremath{\\textsc{Child}}_{M}$. Then, we solve TPS using the heavy-path\ntraversal. Since $s = \\Theta(\\log n_{T})$, from Lemmas~\\ref{traversal} and \\ref{clustering} we have the following\ntheorem:\n\\begin{theorem}\\label{faster} For trees $P$ and $T$ the tree path subsequence problem can be solved in $O(\\frac{n_Pn_T}{\\log n_T} +n_T+ n_P\\log n_P)$ time and $O(n_P + n_T)$ space.\n\\end{theorem}\nCombining the results of Theorems~\\ref{simple} and \\ref{faster} proves Theorem~\\ref{main}.\n\n\\section{Acknowledgments}\nThe authors would like to thank Anna {\\\"O}stlin Pagh for many helpful comments.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMost of the problems that arise in different disciplines of science can be\nformulated by the equations in the for\n\\begin{equation}\nFx=0\\text{,} \\label{eqn1}\n\\end{equation\nwhere $F$ is some function. The equations given by (1) can easily be\nreformulated as the fixed point equations of type \n\\begin{equation}\nTx=x\\text{.} \\label{eqn2}\n\\end{equation\nwhere $T$ is a self-map of an ambient space $X$ and $x\\in X$. These\nequations are often classified as linear or nonlinear, depending on whether\nthe mappings used in the equation is linear with respect to the variables.\nOver the years, a considerable attention has been paid to solving such\nequations by using different techniques such as direct and iterative\nmethods. In case of linear equations, both direct and iterative methods are\nused to obtain solutions of the equations. But in case of nonlinear\nequations, due to various reasons, direct methods can be impractical or fail\nin solving equations, and thus iterative methods become a viable\nalternative. Nonlinear problems are of importance interest to\nmathematicians, physicists and engineers and many other scientists, simply\nbecause most systems are intrinsically nonlinear in nature. That is why,\nresearchers in various disciplines of sciences are often faced with the\nsolving such problems. It would be hard to fudge that the role of iterative\napproximation of fixed points have played in the recent progress of\nnonlinear science. Indeed, the phrase iterative approximation has been\nintroduced to describe repetition-based researches into nonlinear problems\nthat are inaccessible to analytic methods. 
For this reason, the iterative\napproximation of fixed points has become one of the major and basic tools in\nthe theory of equations, and as a result, numerous iterative methods have\nbeen introduced or improved and studied for many years in detail from\nvarious points of aspects by a wide audience of researchers, see, [1-14,\n16-42, 44, 45].\n\nIn this paper, we show that a Picard-S iteration method \\cite{Gursoy} can be\nused to approximate fixed point of weak-contraction mappings. Also, we show\nthat this iteration method is equivalent and converges faster than CR\niteration method \\cite{CR} for the aforementioned class of mappings.\nFurthermore, by providing an example, it is shown that the Picard-S\niteration method converges faster than CR iteration method and hence also\nfaster than all Picard \\cite{Picard}, Mann \\cite{Mann}, Ishikawa \\cit\n{Ishikawa}, Noor \\cite{Noor}, SP \\cite{SP}, S \\cite{S} and some other\niteration methods in the existing literature when applied to\nweak-contraction mappings. Finally, a data dependence result is proven for\nfixed point of weak-contraction mappings with the help of the Picard-S\niteration method.\n\nThroughout this paper the set of all positive integers and zero is shown by \n\\mathbb{N}\n$. Let $B$ be a Banach space, $D$ be a nonempty closed convex subset of $B$\nand $T$ a self-map of $D$. An element $x_{\\ast }$ of $D$ is called a fixed\npoint of $T$ if and only if $Tx_{\\ast }=x_{\\ast }$. The set of all fixed\npoint of $T$ denoted by $F_{T}$. Let $\\left\\{ a_{n}^{i}\\right\\}\n_{n=0}^{\\infty }$, $i\\in \\left\\{ 0,1,2\\right\\} $ be real sequences in $\\left[\n0,1\\right] $ satisfying certain control condition(s).\n\nRenowned Picard iteration method \\cite{Picard} is formulated as follo\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\np_{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \np_{n+1}=Tp_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn3}\n\\end{equation\nand generally used to approximate fixed points of contraction mappings\nsatisfying: for all $x$, $y\\in B$ there exists a $\\delta \\in \\left(\n0,1\\right) $ such tha\n\\begin{equation}\n\\left\\Vert Tx-Ty\\right\\Vert \\leq \\delta \\left\\Vert x-y\\right\\Vert \\text{.}\n\\label{eqn4}\n\\end{equation\nThe following iteration methods are known as Noor \\cite{Noor} and SP \\cit\n{SP} iterations, respectively\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\n\\omega _{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ } \\\\ \n\\omega _{n+1}=\\left( 1-a_{n}^{0}\\right) \\omega _{n}+a_{n}^{0}T\\varpi _{n\n\\text{, \\ \\ } \\\\ \n\\varpi _{n}=\\left( 1-a_{n}^{1}\\right) \\omega _{n}+a_{n}^{1}T\\rho _{n}\\text{, \n} \\\\ \n\\rho _{n}=\\left( 1-a_{n}^{2}\\right) \\omega _{n}+a_{n}^{2}T\\omega _{n}\\text{, \n}n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn5}\n\\end{equation\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nq_{0}\\in D\\text{,\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ } \\\\ \nq_{n+1}=\\left( 1-a_{n}^{0}\\right) r_{n}+a_{n}^{0}Tr_{n}\\text{, \\ \\ } \\\\ \nr_{n}=\\left( 1-a_{n}^{1}\\right) s_{n}+a_{n}^{1}Ts_{n}\\text{,} \\\\ \ns_{n}=\\left( 1-a_{n}^{2}\\right) q_{n}+a_{n}^{2}Tq_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. 
\\label{eqn6}\n\\end{equation}\n\n\\begin{remark}\n(i) If $a_{n}^{2}=0$ for each $n\\in \n\\mathbb{N}\n$ , then the Noor iteration method reduces to iterative method of Ishikawa \n\\cite{Ishikawa}.\n\n(ii) If $a_{n}^{2}=0$ for each $n\\in \n\\mathbb{N}\n$ , then the SP iteration method reduces to iterative method of Thianwan \n\\cite{iam}. \n\n(iii) When $a_{n}^{1}=$ $a_{n}^{2}=0$ for each $n\\in \n\\mathbb{N}\n$, then both Noor and SP iteration methods reduce to an iteration method due\nto Mann \\cite{Mann}. \n\\end{remark}\n\nRecently, G\\\"{u}rsoy and Karakaya \\cite{Gursoy} introduced a Picard-S\niterative scheme as follows: \n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\nx_{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \nx_{n+1}=Ty_{n}\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ } \\\\ \ny_{n}=\\left( 1-a_{n}^{1}\\right) Tx_{n}+a_{n}^{1}Tz_{n}\\text{, \\ \\ \\ \\ \\ \\ }\n\\\\ \nz_{n}=\\left( 1-a_{n}^{2}\\right) x_{n}+a_{n}^{2}Tx_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn7}\n\\end{equation\nThe following definitions and lemmas will be needed in obtaining the main\nresults of this article.\n\n\\begin{definition}\n\\cite{Berinde} Let $\\left\\{ a_{n}\\right\\} _{n=0}^{\\infty }$ and $\\left\\{\nb_{n}\\right\\} _{n=0}^{\\infty }$ be two sequences of real numbers with limits \n$a$ and $b$, respectively. Assume that there exist\n\\begin{equation}\n\\underset{n\\rightarrow \\infty }{\\lim }\\frac{\\left\\vert a_{n}-a\\right\\vert }\n\\left\\vert b_{n}-b\\right\\vert }=l\\text{.} \\label{eqn8}\n\\end{equation}\n\n(i) If $l=0$, the we say that $\\left\\{ a_{n}\\right\\} _{n=0}^{\\infty }$\nconverges faster to $a$ than $\\left\\{ b_{n}\\right\\} _{n=0}^{\\infty }$ to $b$.\n\n(ii) If $00$ we have \n\\begin{equation}\n\\left\\Vert Tx-\\widetilde{T}x\\right\\Vert \\leq \\varepsilon . \\label{eqn12}\n\\end{equation}\n\\end{definition}\n\n\\begin{lemma}\n\\cite{Weng}Let $\\left\\{ \\beta _{n}\\right\\} _{n=0}^{\\infty }$ and $\\left\\{\n\\rho _{n}\\right\\} _{n=0}^{\\infty }$ be nonnegative real sequences satisfying\nthe following inequality\n\\begin{equation}\n\\beta _{n+1}\\leq \\left( 1-\\lambda _{n}\\right) \\beta _{n}+\\rho _{n}\\text{,}\n\\label{eqn13}\n\\end{equation\nwhere $\\lambda _{n}\\in \\left( 0,1\\right) $, for all $n\\geq n_{0}$, \n\\dsum\\nolimits_{n=1}^{\\infty }\\lambda _{n}=\\infty $, and $\\frac{\\rho _{n}}\n\\lambda _{n}}\\rightarrow 0$ as $n\\rightarrow \\infty $. Then \n\\lim_{n\\rightarrow \\infty }\\beta _{n}=0$.\n\\end{lemma}\n\n\\begin{lemma}\n\\cite{Data Is 2} Let $\\left\\{ \\beta _{n}\\right\\} _{n=0}^{\\infty }$ be a\nnonnegative sequence for which one assumes there exists $n_{0}\\in \n\\mathbb{N}\n$, such that for all $n\\geq n_{0}$ one has satisfied the inequality \n\\begin{equation}\n\\beta _{n+1}\\leq \\left( 1-\\mu _{n}\\right) \\beta _{n}+\\mu _{n}\\gamma _{n\n\\text{,} \\label{eqn15}\n\\end{equation\nwhere $\\mu _{n}\\in \\left( 0,1\\right) ,$ for all $n\\in \n\\mathbb{N}\n$, $\\sum\\limits_{n=0}^{\\infty }\\mu _{n}=\\infty $ and $\\gamma _{n}\\geq 0$, \n\\forall n\\in \n\\mathbb{N}\n$. Then the following inequality holds \n\\begin{equation}\n0\\leq \\lim \\sup_{n\\rightarrow \\infty }\\beta _{n}\\leq \\lim \\sup_{n\\rightarrow\n\\infty }\\gamma _{n}. 
\\label{eqn16}\n\\end{equation}\n\\end{lemma}\n\n\\section{Main Results}\n\n\\begin{theorem}\nLet $T:D\\rightarrow D$ be a weak-contraction map satisfying condition (11)\nwith $F_{T}\\neq \\emptyset $ and $\\left\\{ x_{n}\\right\\} _{n=0}^{\\infty }$ an\niterative sequence defined by (7) with real sequences $\\left\\{\na_{n}^{i}\\right\\} _{n=0}^{\\infty }$, $i\\in \\left\\{ 1,2\\right\\} $ in $\\left[\n0,1\\right] $ satisfying $\\sum_{k=0}^{\\infty }a_{k}^{1}a_{k}^{2}=\\infty $.\nThen $\\ \\left\\{ x_{n}\\right\\} _{n=0}^{\\infty }$ converges to a unique fixed\npoint $u^{\\ast }$of $T$.\n\\end{theorem}\n\n\\begin{proof}\nUniqueness of $u^{\\ast }$ comes from condition (11). Using Picard-S\niterative scheme (7) and condition (11), we obtai\n\\begin{eqnarray}\n\\left\\Vert z_{n}-u^{\\ast }\\right\\Vert &\\leq &\\left( 1-a_{n}^{2}\\right)\n\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert +a_{n}^{2}\\left\\Vert Tx_{n}-Tu^{\\ast\n}\\right\\Vert \\notag \\\\\n&\\leq &\\left( 1-a_{n}^{2}\\right) \\left\\Vert x_{n}-u^{\\ast }\\right\\Vert\n+a_{n}^{2}\\delta \\left\\Vert x_{n}-u^{\\ast }\\right\\Vert +a_{n}^{2}L\\left\\Vert\nu^{\\ast }-Tu^{\\ast }\\right\\Vert \\notag \\\\\n&=&\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-u^{\\ast }\\right\\Vert \\text{,} \\label{eqn17}\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert y_{n}-u^{\\ast }\\right\\Vert &\\leq &\\left( 1-a_{n}^{1}\\right)\n\\left\\Vert Tx_{n}-Tu^{\\ast }\\right\\Vert +a_{n}^{1}\\left\\Vert Tz_{n}-Tu^{\\ast\n}\\right\\Vert \\notag \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\delta \\left\\Vert x_{n}-u^{\\ast\n}\\right\\Vert +a_{n}^{1}\\delta \\left\\Vert z_{n}-u^{\\ast }\\right\\Vert \\text{,}\n\\label{eqn18}\n\\end{eqnarray\n\\begin{equation}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert \\leq \\delta \\left\\Vert y_{n}-u^{\\ast\n}\\right\\Vert \\text{.} \\label{eqn19}\n\\end{equation\nCombining (16), (17) and (18\n\\begin{equation}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert \\leq \\delta ^{2}\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}-u^{\\ast\n}\\right\\Vert \\text{.} \\label{eqn20}\n\\end{equation\nBy inductio\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert &\\leq &\\delta ^{2\\left( n+1\\right)\n}\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right)\n\\right] \\left\\Vert x_{0}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\leq &\\delta ^{2\\left( n+1\\right) }\\left\\Vert x_{0}-u^{\\ast }\\right\\Vert\n^{n+1}e^{-\\left( 1-\\delta \\right) \\dsum\\nolimits_{k=0}^{n}a_{k}^{1}a_{k}^{2}\n\\text{.} \\label{eqn21}\n\\end{eqnarray\nSince $\\sum_{k=0}^{\\infty }a_{k}^{1}a_{k}^{2}=\\infty $\n\\begin{equation}\ne^{-\\left( 1-\\delta \\right)\n\\dsum\\nolimits_{k=0}^{n}a_{k}^{1}a_{k}^{2}}\\rightarrow 0\\text{ as \nn\\rightarrow \\infty \\text{,} \\label{eqn22}\n\\end{equation\nwhich implies $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast\n}\\right\\Vert $.\n\\end{proof}\n\n\\begin{theorem}\nLet $T:D\\rightarrow D$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $\nbe as in Theorem 1 and $\\{q_{n}\\}_{n=0}^{\\infty }$, $\\{x_{n}\\}_{n=0}^{\\infty\n}$ two iterative sequences defined by SP (6) and Picard-S (7) iteration\nmethods with real sequences $\\left\\{ a_{n}^{i}\\right\\} _{n=0}^{\\infty }$, \ni\\in \\left\\{ 0,1,2\\right\\} $ in $\\left[ 0,1\\right] $ satisfying \n\\sum_{k=0}^{n}a_{k}^{1}a_{k}^{2}=\\infty $. 
Then the following are equivalent:\n\n(i) $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert =0$;\n\n(ii) $\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert =0$.\n\n\\begin{proof}\n(i)$\\Rightarrow $(ii): It follows from (6), (7), and condition (11) tha\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-q_{n+1}\\right\\Vert &=&\\left\\Vert \\left(\n1-a_{n}^{0}\\right) \\left( Ty_{n}-r_{n}\\right) +a_{n}^{0}\\left(\nTy_{n}-Tr_{n}\\right) \\right\\Vert \\label{eqn23} \\\\\n&\\leq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert Ty_{n}-r_{n}\\right\\Vert\n+a_{n}^{0}\\left\\Vert Ty_{n}-Tr_{n}\\right\\Vert \\notag \\\\\n&\\leq &\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\Vert\ny_{n}-r_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert y_{n}-r_{n}\\right\\Vert &=&\\left\\Vert \\left( 1-a_{n}^{1}\\right)\n\\left( Tx_{n}-s_{n}\\right) +a_{n}^{1}\\left( Tz_{n}-Ts_{n}\\right) \\right\\Vert \n\\label{eqn24} \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\left\\Vert Tx_{n}-s_{n}\\right\\Vert\n+a_{n}^{1}\\left\\Vert Tz_{n}-Ts_{n}\\right\\Vert \\notag \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\left\\Vert Tx_{n}-s_{n}\\right\\Vert\n+a_{n}^{1}\\delta \\left\\Vert z_{n}-s_{n}\\right\\Vert +a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert Tx_{n}-s_{n}\\right\\Vert &=&\\left\\Vert \\left( 1-a_{n}^{2}\\right)\n\\left( Tx_{n}-q_{n}\\right) +a_{n}^{2}\\left( Tx_{n}-Tq_{n}\\right) \\right\\Vert \n\\label{eqn25} \\\\\n&\\leq &\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-q_{n}\\right\\Vert +\\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n\\left\\Vert x_{n}-Tx_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert z_{n}-s_{n}\\right\\Vert &\\leq &\\left( 1-a_{n}^{2}\\right)\n\\left\\Vert x_{n}-q_{n}\\right\\Vert +a_{n}^{2}\\left\\Vert\nTx_{n}-Tq_{n}\\right\\Vert \\label{eqn26} \\\\\n&\\leq &\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-q_{n}\\right\\Vert +a_{n}^{2}L\\left\\Vert x_{n}-Tx_{n}\\right\\Vert \\text{.}\n\\notag\n\\end{eqnarray\nCombining (22), (23), (24), and (25\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-q_{n+1}\\right\\Vert &\\leq &\\left[ 1-a_{n}^{0}\\left(\n1-\\delta \\right) \\right] \\left[ 1-a_{n}^{1}\\left( 1-\\delta \\right) \\right]\n\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nx_{n}-q_{n}\\right\\Vert \\label{eqn27} \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\{ \\left(\n1-a_{n}^{1}\\right) \\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n+a_{n}^{1}a_{n}^{2}\\delta L\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \n\\notag \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{.} \\notag\n\\end{eqnarray\nIt follows from the facts $\\delta \\in \\left( 0,1\\right) $ and $a_{n}^{i}\\in\n\\left[ 0,1\\right] $, $\\forall n\\in \n\\mathbb{N}\n$, $i\\in \\left\\{ 0,1,2\\right\\} $ tha\n\\begin{equation}\n\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left[ 1-a_{n}^{1}\\left(\n1-\\delta \\right) \\right] \\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right]\n<1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\text{.} \\label{eqn28}\n\\end{equation\nHence, inequality (26) 
become\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-q_{n+1}\\right\\Vert &\\leq &\\left[ 1-a_{n}^{1}a_{n}^{2\n\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}-q_{n}\\right\\Vert \n\\label{eqn29} \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\{ \\left(\n1-a_{n}^{1}\\right) \\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n+a_{n}^{1}a_{n}^{2}\\delta L\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \n\\notag \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{.} \\notag\n\\end{eqnarray\nDenote tha\n\\begin{eqnarray}\n\\beta _{n} &:&=\\left\\Vert x_{n}-q_{n}\\right\\Vert \\text{, \\ \\ } \\label{eqn30}\n\\\\\n\\lambda _{n} &:&=a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\in \\left(\n0,1\\right) \\text{, \\ } \\notag \\\\\n\\rho _{n} &:&=\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] \\left\\{\n\\left( 1-a_{n}^{1}\\right) \\left[ 1-a_{n}^{2}\\left( 1-L\\right) \\right]\n+a_{n}^{1}a_{n}^{2}\\delta L\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \n\\notag \\\\\n&&+\\left[ 1-a_{n}^{0}\\left( 1-\\delta \\right) \\right] a_{n}^{1}L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +\\left[ 1-a_{n}^{0}\\left( 1-L\\right) \\right]\n\\left\\Vert y_{n}-Ty_{n}\\right\\Vert \\text{.} \\notag\n\\end{eqnarray\nSince $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert =0$\nand $Tu^{\\ast }=u^{\\ast }\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert z_{n}-Tz_{n}\\right\\Vert =0\\text{,}\n\\label{eqn31}\n\\end{equation\nwhich implies $\\frac{\\rho _{n}}{\\lambda _{n}}\\rightarrow 0$ as $n\\rightarrow\n\\infty $. Therefore, inequality (28) perform all assumptions in Lemma 1 and\nthus we obtain $\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-q_{n}\\right\\Vert\n=0$. 
Sinc\n\\begin{equation}\n\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert \\leq \\left\\Vert x_{n}-q_{n}\\right\\Vert\n+\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert \\rightarrow 0\\text{ as }n\\rightarrow\n\\infty \\text{,} \\label{eqn32}\n\\end{equation\n$\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert =0$.\n\n(ii)$\\Rightarrow $(i): It follows from (6), (7), and condition (11) tha\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-x_{n+1}\\right\\Vert &=&\\left\\Vert\nr_{n}-Ty_{n}+a_{n}^{0}\\left( Tr_{n}-r_{n}\\right) \\right\\Vert \\label{eqn33}\n\\\\\n&\\leq &\\delta \\left\\Vert r_{n}-y_{n}\\right\\Vert +\\left( 1+a_{n}^{0}+L\\right)\n\\left\\Vert r_{n}-Tr_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert r_{n}-y_{n}\\right\\Vert &\\leq &\\left( 1-a_{n}^{1}\\right)\n\\left\\Vert s_{n}-Tx_{n}\\right\\Vert +a_{n}^{1}\\left\\Vert\nTs_{n}-Tz_{n}\\right\\Vert \\label{eqn34} \\\\\n&\\leq &\\left( 1-a_{n}^{1}\\right) \\left\\Vert s_{n}-Tx_{n}\\right\\Vert\n+a_{n}^{1}\\delta \\left\\Vert s_{n}-z_{n}\\right\\Vert +a_{n}^{1}L\\left\\Vert\ns_{n}-Ts_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert s_{n}-Tx_{n}\\right\\Vert &\\leq &\\left\\Vert\nTs_{n}-Tx_{n}\\right\\Vert +\\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\label{eqn35}\n\\\\\n&\\leq &\\delta \\left\\Vert s_{n}-x_{n}\\right\\Vert +\\left( 1+L\\right)\n\\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\notag \\\\\n&\\leq &\\delta \\left\\Vert q_{n}-x_{n}\\right\\Vert +\\delta a_{n}^{2}\\left\\Vert\nTq_{n}-q_{n}\\right\\Vert +\\left( 1+L\\right) \\left\\Vert\ns_{n}-Ts_{n}\\right\\Vert \\text{,} \\notag\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert s_{n}-z_{n}\\right\\Vert &\\leq &\\left( 1-a_{n}^{2}\\right)\n\\left\\Vert q_{n}-x_{n}\\right\\Vert +a_{n}^{2}\\left\\Vert\nTq_{n}-Tx_{n}\\right\\Vert \\label{eqn36} \\\\\n&\\leq &\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nq_{n}-x_{n}\\right\\Vert +a_{n}^{2}L\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\text{.}\n\\notag\n\\end{eqnarray\nCombining (32), (33), (34), and (35\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-x_{n+1}\\right\\Vert &\\leq &\\delta ^{2}\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert\nq_{n}-x_{n}\\right\\Vert \\label{eqn37} \\\\\n&&+\\delta ^{2}a_{n}^{2}\\left[ 1-a_{n}^{1}\\left( 1-L\\right) \\right]\n\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1+a_{n}^{0}+L\\right) \\left\\Vert r_{n}-Tr_{n}\\right\\Vert +\\delta\n\\left( 1-a_{n}^{1}+L\\right) \\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\text{.} \n\\notag\n\\end{eqnarray\nSince $\\delta \\in \\left( 0,1\\right) \n\\begin{equation}\n\\delta ^{2}\\left[ 1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right]\n<1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\text{.} \\label{eqn38}\n\\end{equation\nHence, inequality (36) become\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-x_{n+1}\\right\\Vert &\\leq &\\left[ 1-a_{n}^{1}a_{n}^{2\n\\left( 1-\\delta \\right) \\right] \\left\\Vert q_{n}-x_{n}\\right\\Vert \n\\label{eqn39} \\\\\n&&+\\delta ^{2}a_{n}^{2}\\left[ 1-a_{n}^{1}\\left( 1-L\\right) \\right]\n\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1+a_{n}^{0}+L\\right) \\left\\Vert r_{n}-Tr_{n}\\right\\Vert +\\delta\n\\left( 1-a_{n}^{1}+L\\right) \\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\text{.} \n\\notag\n\\end{eqnarray\nDenote tha\n\\begin{eqnarray}\n\\beta _{n} &:&=\\left\\Vert q_{n}-x_{n}\\right\\Vert \\text{, \\ \\ } \\label{eqn40}\n\\\\\n\\lambda _{n} &:&=a_{n}^{1}a_{n}^{2}\\left( 1-\\delta 
\\right) \\in \\left(\n0,1\\right) \\text{, \\ } \\notag \\\\\n\\rho _{n} &:&=\\delta ^{2}a_{n}^{2}\\left[ 1-a_{n}^{1}\\left( 1-L\\right) \\right]\n\\left\\Vert q_{n}-Tq_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1+a_{n}^{0}+L\\right) \\left\\Vert r_{n}-Tr_{n}\\right\\Vert +\\delta\n\\left( 1-a_{n}^{1}+L\\right) \\left\\Vert s_{n}-Ts_{n}\\right\\Vert \\text{.} \n\\notag\n\\end{eqnarray\nSince $\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert =0$\nand $Tu^{\\ast }=u^{\\ast }\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-Tq_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert r_{n}-Tr_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert s_{n}-Ts_{n}\\right\\Vert =0\\text{,}\n\\label{eqn41}\n\\end{equation\nwhich implies $\\frac{\\rho _{n}}{\\lambda _{n}}\\rightarrow 0$ as $n\\rightarrow\n\\infty $. Therefore, inequality (38) perform all assumptions in Lemma 1 and\nthus we obtain $\\lim_{n\\rightarrow \\infty }\\left\\Vert q_{n}-x_{n}\\right\\Vert\n=0$. Sinc\n\\begin{equation}\n\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert \\leq \\left\\Vert q_{n}-x_{n}\\right\\Vert\n+\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert \\rightarrow 0\\text{ as }n\\rightarrow\n\\infty \\text{,} \\label{eqn42}\n\\end{equation\n$\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-u^{\\ast }\\right\\Vert =0$.\n\\end{proof}\n\\end{theorem}\n\nTaking R. Chugh et al.'s result (\\cite{CR}, Corollary 3.2) into account,\nTheorem 2 leads to the following corollary under weaker assumption:\n\n\\begin{corollary}\nLet $T:D\\rightarrow D$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $\nbe as in Theorem 1. Then the followings are equivalent:\n\n1)The Picard iteration method (3) converges to $u^{\\ast }$,\n\n2) The Mann iteration method \\cite{Mann} converges to $u^{\\ast }$,\n\n3) The Ishikawa iteration method \\cite{Ishikawa} converges to $u^{\\ast }$,\n\n4) The Noor iteration method (5) converges to $u^{\\ast }$,\n\n5) S-iteration method \\cite{S} converges to $u^{\\ast }$,\n\n6) The SP-iteration method (6) converges to $u^{\\ast }$,\n\n7) CR-iteration method \\cite{CR} converges to $u^{\\ast }$,\n\n8) The Picard-S iteration method (7) converges to $u^{\\ast }$.\n\\end{corollary}\n\n\\begin{theorem}\nLet $T:D\\rightarrow D$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $\nbe as in Theorem 1. 
Suppose that $\\left\\{ \\omega _{n}\\right\\} _{n=0}^{\\infty\n}$, $\\left\\{ q_{n}\\right\\} _{n=0}^{\\infty }$ and $\\left\\{ x_{n}\\right\\}\n_{n=0}^{\\infty }$ are iterative sequences, respectively, defined by Noor\n(5), SP (6) and Picard-S (7) iterative schemes with real sequences $\\left\\{\na_{n}^{i}\\right\\} _{n=0}^{\\infty }\\subset \\left[ 0,1\\right] $, $i\\in \\left\\{\n0\\text{,}1\\text{,}2\\right\\} $ satisfying\n\n(i) $0\\leq a_{n}^{i}<\\frac{1}{1+\\delta }$,\n\n(ii) $\\lim_{n\\rightarrow \\infty }a_{n}^{i}=0$.\n\nThen the iterative sequence defined by (7) converges faster than the\niterative sequences defined by (5) and (6) to a unique fixed point of $T$,\nprovided that the initial point is the same for all iterations.\n\\end{theorem}\n\n\\begin{proof}\nFrom inequality (20), we hav\n\\begin{equation}\n\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert \\leq \\delta ^{2\\left( n+1\\right)\n}\\left\\Vert x_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\dprod\\nolimits_{k=0}^{n}\\left[\n1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] \\label{eqn43}\n\\end{equation\nUsing (6) we obtai\n\\begin{eqnarray}\n\\left\\Vert q_{n+1}-u^{\\ast }\\right\\Vert &=&\\left\\Vert \\left(\n1-a_{n}^{0}\\right) r_{n}+a_{n}^{0}Tr_{n}-u^{\\ast }\\right\\Vert \\label{eqn44}\n\\\\\n&\\geq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert r_{n}-u^{\\ast }\\right\\Vert\n-a_{n}^{0}\\left\\Vert Tr_{n}-Tu^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left\\Vert\nr_{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left\\{ \\left(\n1-a_{n}^{1}\\right) \\left\\Vert s_{n}-u^{\\ast }\\right\\Vert -a_{n}^{1}\\delta\n\\left\\Vert s_{n}-u^{\\ast }\\right\\Vert \\right\\} \\notag \\\\\n&=&\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left[\n1-a_{n}^{1}\\left( 1+\\delta \\right) \\right] \\left\\Vert s_{n}-u^{\\ast\n}\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left[\n1-a_{n}^{1}\\left( 1+\\delta \\right) \\right] \\left\\{ \\left( 1-a_{n}^{2}\\right)\n\\left\\Vert q_{n}-u^{\\ast }\\right\\Vert -a_{n}^{2}\\delta \\left\\Vert\nq_{n}-u^{\\ast }\\right\\Vert \\right\\} \\notag \\\\\n&=&\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left[\n1-a_{n}^{1}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{n}^{2}\\left( 1+\\delta\n\\right) \\right] \\left\\Vert q_{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\cdots \\notag \\\\\n&\\geq &\\left\\Vert q_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\prod\\limits_{k=0}^{n}\n\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] \n\\text{.} \\notag\n\\end{eqnarray\nUsing now (42) and (43\n\\begin{equation}\n\\frac{\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert }{\\left\\Vert q_{n+1}-u^{\\ast\n}\\right\\Vert }\\leq \\frac{\\delta ^{2\\left( n+1\\right) }\\left\\Vert\nx_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\dprod\\nolimits_{k=0}^{n}\\left[\n1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] }{\\left\\Vert\nq_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\prod\\limits_{k=0}^{n}\\left[\n1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left( 1+\\delta\n\\right) \\right] \\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] }\\text{.}\n\\label{eqn45}\n\\end{equation\nDefine \n\\begin{equation}\n\\theta _{n}=\\frac{\\delta ^{2\\left( n+1\\right) }\\dprod\\nolimits_{k=0}^{n\n\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 
1-\\delta \\right) \\right] }\n\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right]\n\\left[ 1-a_{k}^{1}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{k}^{2}\\left(\n1+\\delta \\right) \\right] }\\text{.} \\label{eqn46}\n\\end{equation\nBy the assumtio\n\\begin{eqnarray}\n&&\\lim_{n\\rightarrow \\infty }\\frac{\\theta _{n+1}}{\\theta _{n}} \\label{eqn47}\n\\\\\n&=&\\lim_{n\\rightarrow \\infty }\\frac{\\frac{\\delta ^{2\\left( n+2\\right)\n}\\dprod\\nolimits_{k=0}^{n+1}\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta\n\\right) \\right] }{\\dprod\\nolimits_{k=0}^{n+1}\\left[ 1-a_{k}^{0}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left( 1+\\delta \\right) \\right]\n\\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] }}{\\frac{\\delta ^{2\\left(\nn+1\\right) }\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{1}a_{k}^{2}\\left(\n1-\\delta \\right) \\right] }{\\dprod\\nolimits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{k}^{1}\\left( 1+\\delta \\right) \\right]\n\\left[ 1-a_{k}^{2}\\left( 1+\\delta \\right) \\right] }} \\notag \\\\\n&=&\\lim_{n\\rightarrow \\infty }\\frac{\\delta ^{2}\\left[\n1-a_{n+1}^{1}a_{n+1}^{2}\\left( 1-\\delta \\right) \\right] }{\\left[\n1-a_{n+1}^{0}\\left( 1+\\delta \\right) \\right] \\left[ 1-a_{n+1}^{1}\\left(\n1+\\delta \\right) \\right] \\left[ 1-a_{n+1}^{2}\\left( 1+\\delta \\right) \\right] \n} \\notag \\\\\n&=&\\delta ^{2}<1\\text{.} \\notag\n\\end{eqnarray\nIt thus follows from ratio test that $\\sum\\limits_{n=0}^{\\infty }\\theta\n_{n}<\\infty $. Hence, we have $\\lim_{n\\rightarrow \\infty }\\theta _{n}=0$\nwhich implies that the iterative sequence defined by (7) converges faster\nthan the iterative sequence defined by SP iteration method (6).\n\nUsing Noor iteration method (5), we ge\n\\begin{eqnarray}\n\\left\\Vert \\omega _{n+1}-u^{\\ast }\\right\\Vert &=&\\left\\Vert \\left(\n1-a_{n}^{0}\\right) \\omega _{n}+a_{n}^{0}T\\varpi _{n}-u^{\\ast }\\right\\Vert \n\\label{eqn48} \\\\\n&\\geq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert\n-a_{n}^{0}\\left\\Vert T\\varpi _{n}-Tu^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left( 1-a_{n}^{0}\\right) \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert\n-a_{n}^{0}\\delta \\left\\Vert \\varpi _{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left[ 1-a_{n}^{0}-a_{n}^{0}\\delta \\left( 1-a_{n}^{1}\\right) \\right]\n\\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert -a_{n}^{0}a_{n}^{1}\\delta\n^{2}\\left\\Vert \\rho _{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left\\{ 1-a_{n}^{0}-a_{n}^{0}\\delta \\left( 1-a_{n}^{1}\\right)\n-a_{n}^{0}a_{n}^{1}\\delta ^{2}\\left[ 1-a_{n}^{2}\\left( 1-\\delta \\right)\n\\right] \\right\\} \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\left\\{ 1-a_{n}^{0}-a_{n}^{0}\\delta \\left[ 1-a_{n}^{1}\\left( 1-\\delta\n\\right) \\right] \\right\\} \\left\\Vert \\omega _{n}-u^{\\ast }\\right\\Vert \\notag\n\\\\\n&\\geq &\\left[ 1-a_{n}^{0}\\left( 1+\\delta \\right) \\right] \\left\\Vert \\omega\n_{n}-u^{\\ast }\\right\\Vert \\notag \\\\\n&\\geq &\\cdots \\notag \\\\\n&\\geq &\\left\\Vert \\omega _{0}-u^{\\ast }\\right\\Vert\n^{n+1}\\prod\\limits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right]\n\\text{.} \\notag\n\\end{eqnarray\nIt follows by (42) and (47) tha\n\\begin{equation}\n\\frac{\\left\\Vert x_{n+1}-u^{\\ast }\\right\\Vert }{\\left\\Vert \\omega\n_{n+1}-x_{\\ast }\\right\\Vert }\\leq \\frac{\\delta ^{2\\left( n+1\\right)\n}\\left\\Vert 
x_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\dprod\\nolimits_{k=0}^{n}\\left[\n1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] }{\\left\\Vert \\omega\n_{0}-u^{\\ast }\\right\\Vert ^{n+1}\\prod\\limits_{k=0}^{n}\\left[\n1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] }\\text{.} \\label{eqn49}\n\\end{equation\nDefine \n\\begin{equation}\n\\theta _{n}=\\frac{\\delta ^{2\\left( n+1\\right) }\\dprod\\nolimits_{k=0}^{n\n\\left[ 1-a_{k}^{1}a_{k}^{2}\\left( 1-\\delta \\right) \\right] }\n\\prod\\limits_{k=0}^{n}\\left[ 1-a_{k}^{0}\\left( 1+\\delta \\right) \\right] \n\\text{.} \\label{eqn50}\n\\end{equation\nBy the assumptio\n\\begin{eqnarray}\n\\lim_{n\\rightarrow \\infty }\\frac{\\theta _{n+1}}{\\theta _{n}}\n&=&\\lim_{n\\rightarrow \\infty }\\frac{\\delta ^{2}\\left[\n1-a_{n+1}^{1}a_{n+1}^{2}\\left( 1-\\delta \\right) \\right] }{\\left[\n1-a_{n+1}^{0}\\left( 1+\\delta \\right) \\right] } \\label{eqn51} \\\\\n&=&\\delta ^{2}<1\\text{.} \\notag\n\\end{eqnarray\nIt thus follows from ratio test that $\\sum\\limits_{n=0}^{\\infty }\\theta\n_{n}<\\infty $. Hence, we have $\\lim_{n\\rightarrow \\infty }\\theta _{n}=0$\nwhich implies that the iterative sequence defined by (7) converges faster\nthan the iterative sequence defined by Noor iteration method (5).\n\\end{proof}\n\nBy use of the following example due to \\cite{QR}, it was shown in (\\cite{CR\n, Example 4.1) that CR iterative method \\cite{CR} is faster than all of \\\nPicard (3), S \\cite{S}, Noor (5) and SP (6) iterative methods for a\nparticular class of \\ operators which is included in the class of\nweak-contraction mappings satisfying (11). In the following, for the sake of\nconsistent comparison, we will use the same example as that of \\ (\\cite{CR},\nExample 4.1) in order to compare the rates of convergence between the\nPicard-S iterative scheme (7) and the CR iteration method \\cite{CR} for\nweak-contraction mappings. In the following example, for convenience, we use\nthe notations $\\left( PS_{n}\\right) $ and $\\left( CR_{n}\\right) $ for the\niterative sequences associated to Picard-S (7) and CR \\cite{CR} iteration\nmethods, respectively.\n\n\\begin{example}\n\\cite{QR} Define a mapping $T:\\left[ 0,1\\right] \\rightarrow \\left[ 0,1\\right]\n$ as $Tx=\\frac{x}{2}$. Let $a_{n}^{0}=a_{n}^{1}=a_{n}^{2}=0$, for \nn=1,2,...,24$ and $a_{n}^{0}=a_{n}^{1}=a_{n}^{2}=\\frac{4}{\\sqrt{n}}$, for\nall $n\\geq 25$.\n\nIt can be seen easily that the mapping $T$ satisfies condition (11) with the\nunique fixed point $0\\in F_{T}$. Furthermore, it is easy to see that Example\n1 satisfies all the conditions of Theorem 1. Indeed, let $x_{0}\\neq 0$ be an\ninitial point for the iterative sequences $\\left( PS_{n}\\right) $ and \n\\left( CR_{n}\\right) $. 
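Before turning to the closed-form comparison with the CR iterates, a short numerical sketch may be helpful. The code below is an illustration we add: only the schemes (5), (6) and (7) written out above are implemented, all with the common sequence $a_{n}$ of Example 1, and the initial point $0.9$, the 100 steps and the variable names are arbitrary choices. It iterates the three methods for $Tx=x\/2$ and prints the distances to the fixed point $0$; in line with Theorem 3, the Picard-S iterate should be the closest.
\\begin{verbatim}
import math

def T(x):                      # the map of Example 1, with fixed point 0
    return 0.5 * x

def a(n):                      # a_n^0 = a_n^1 = a_n^2 of Example 1
    return 0.0 if n < 25 else 4.0 / math.sqrt(n)

def picard_s(x, n):            # one step of scheme (7)
    z = (1 - a(n)) * x + a(n) * T(x)
    y = (1 - a(n)) * T(x) + a(n) * T(z)
    return T(y)

def sp(q, n):                  # one step of scheme (6)
    s = (1 - a(n)) * q + a(n) * T(q)
    r = (1 - a(n)) * s + a(n) * T(s)
    return (1 - a(n)) * r + a(n) * T(r)

def noor(w, n):                # one step of scheme (5)
    rho = (1 - a(n)) * w + a(n) * T(w)
    varpi = (1 - a(n)) * w + a(n) * T(rho)
    return (1 - a(n)) * w + a(n) * T(varpi)

x = q = w = 0.9                # common initial point
for n in range(1, 101):
    x, q, w = picard_s(x, n), sp(q, n), noor(w, n)
print(abs(x), abs(q), abs(w))  # distances to the fixed point 0
\\end{verbatim}
The analytic comparison with the CR iterates proceeds as follows.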
Utilizing Picard-S (7) and CR \\cite{CR} iteration\nmethods we obtain \n\\begin{eqnarray}\nPS_{n} &=&\\frac{1}{2}\\left( \\frac{1}{2}-\\frac{4}{n}\\right) x_{n} \\notag \\\\\n&=&\\cdots \\notag \\\\\n&=&\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{4}-\\frac{2}{k}\\right) x_{0}\\text{\n} \\label{eqn52}\n\\end{eqnarray\n\\begin{eqnarray}\nCR_{n} &=&\\left( \\frac{1}{2}-\\frac{1}{\\sqrt{n}}-\\frac{4}{n}+\\frac{8}{n\\sqrt{\n}}\\right) x_{n} \\notag \\\\\n&=&\\cdots \\notag \\\\\n&=&\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{2}-\\frac{1}{\\sqrt{k}}-\\frac{4}{k}\n\\frac{8}{k\\sqrt{k}}\\right) x_{0}\\text{.} \\label{eqn53}\n\\end{eqnarray\nIt follows from (51) and (52) tha\n\\begin{equation*}\n\\frac{\\left\\vert PS_{n}-0\\right\\vert }{\\left\\vert CR_{n}-0\\right\\vert }\n\\frac{\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{4}-\\frac{2}{k}\\right) x_{0}}\n\\dprod\\limits_{k=25}^{n}\\left( \\frac{1}{2}-\\frac{1}{\\sqrt{k}}-\\frac{4}{k}\n\\frac{8}{k\\sqrt{k}}\\right) x_{0}}\n\\end{equation*\n\\begin{equation*}\n\\text{ \\ \\ \\ \\ \\ \\ \\ }=\\dprod\\limits_{k=25}^{n}\\frac{\\frac{1}{4}-\\frac{2}{k\n}{\\frac{1}{2}-\\frac{1}{\\sqrt{k}}-\\frac{4}{k}+\\frac{8}{k\\sqrt{k}}}\n\\end{equation*\n\\begin{equation*}\n\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }=\\dprod\\limits_{k=25}^{n}\\frac\n\\left( k-8\\right) \\sqrt{k}}{2\\left( k\\sqrt{k}-2k-8\\sqrt{k}+16\\right) }\n\\end{equation*\n\\begin{eqnarray}\n\\text{ \\ \\ \\ \\ \\ \\ } &=&\\dprod\\limits_{k=25}^{n}\\frac{\\left( k-8\\right) \n\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) \\left( k-8\\right) } \\notag \\\\\n&=&\\dprod\\limits_{k=25}^{n}\\frac{\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) }\\text{\n} \\label{eqn54}\n\\end{eqnarray\nFor all $k\\geq 25$, we hav\n\\begin{eqnarray}\n\\frac{\\left( k-2\\right) \\left( \\sqrt{k}-4\\right) }{4} &>&1 \\notag \\\\\n&\\Rightarrow &\\left( k-2\\right) \\left( \\sqrt{k}-4\\right) >4 \\notag \\\\\n&\\Rightarrow &k\\left( \\sqrt{k}-4\\right) >2\\left( \\sqrt{k}-2\\right) \\notag\n\\\\\n&\\Rightarrow &\\frac{\\sqrt{k}-4}{2\\left( \\sqrt{k}-2\\right) }>\\frac{1}{k} \n\\notag \\\\\n&\\Rightarrow &\\frac{\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) }<1-\\frac{1}{k}\\text\n,} \\label{eqn55}\n\\end{eqnarray\nwhich yield\n\\begin{equation}\n\\frac{\\left\\vert PS_{n}-0\\right\\vert }{\\left\\vert CR_{n}-0\\right\\vert \n=\\dprod\\limits_{k=25}^{n}\\frac{\\sqrt{k}}{2\\left( \\sqrt{k}-2\\right) \n<\\dprod\\limits_{k=25}^{n}\\left( 1-\\frac{1}{k}\\right) =\\frac{24}{n}\\text{.}\n\\label{eqn56}\n\\end{equation\nTherefore, we hav\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\frac{\\left\\vert PS_{n}-0\\right\\vert }{\\left\\vert\nCR_{n}-0\\right\\vert }=0\\text{,} \\label{eqn57}\n\\end{equation\nwhich implies that the Picard-S iterative scheme (7) is faster than the CR\niteration method \\cite{CR}$.$\n\\end{example}\n\nHaving regard to R. Chugh et al.'s result (\\cite{CR}, Example 4.1), L.B.\nCiric et al.'s results \\cite{Ciric} and Example 1 above, we conclude that\nPicard-S iteration method is faster than all Picard (3), Mann \\cite{Mann},\nIshikawa \\cite{Ishikawa}, S \\cite{S}, Noor (5) and SP (6) iterative methods.\n\nWe are now able to establish the following data dependence result.\n\n\\begin{theorem}\nLet $T$ with fixed point $u^{\\ast }\\in F_{T}\\neq \\emptyset $ be as in\nTheorem 1 and $\\widetilde{T}$ an approximate operator of $T$. 
Let $\\left\\{\nx_{n}\\right\\} _{n=0}^{\\infty }$ be an iterative sequence generated by (7)\nfor $T$ and define an iterative sequence $\\left\\{ \\widetilde{x}_{n}\\right\\}\n_{n=0}^{\\infty }$ as follow\n\\begin{equation}\n\\left\\{ \n\\begin{array}{c}\n\\widetilde{x}_{0}\\in D\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \n\\widetilde{x}_{n+1}=\\widetilde{T}\\widetilde{y}_{n}\\text{, \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\\ \n\\widetilde{y}_{n}=\\left( 1-a_{n}^{1}\\right) \\widetilde{T}\\widetilde{x\n_{n}+a_{n}^{1}\\widetilde{T}\\widetilde{z}_{n}\\text{, \\ \\ \\ \\ \\ \\ } \\\\ \n\\widetilde{z}_{n}=\\left( 1-a_{n}^{2}\\right) \\widetilde{x}_{n}+a_{n}^{2\n\\widetilde{T}\\widetilde{x}_{n}\\text{, }n\\in \n\\mathbb{N}\n\\text{,\n\\end{array\n\\right. \\label{eqn58}\n\\end{equation\nwhere $\\left\\{ a_{n}^{i}\\right\\} _{n=0}^{\\infty }$, $i\\in \\left\\{\n1,2\\right\\} $ be real sequences in $\\left[ 0,1\\right] $ satisfying (i) \n\\frac{1}{2}\\leq a_{n}^{1}a_{n}^{2}$ for all $n\\in \n\\mathbb{N}\n$, and (ii) $\\sum\\limits_{n=0}^{\\infty }a_{n}^{1}a_{n}^{2}=\\infty $. If \n\\widetilde{T}\\widetilde{u}^{\\ast }=\\widetilde{u}^{\\ast }$ such that \n\\widetilde{x}_{n}\\rightarrow \\widetilde{u}^{\\ast }$ as $n\\rightarrow \\infty \n, then we hav\n\\begin{equation}\n\\left\\Vert u^{\\ast }-\\widetilde{u}^{\\ast }\\right\\Vert \\leq \\frac\n5\\varepsilon }{1-\\delta }\\text{,} \\label{eqn59}\n\\end{equation\nwhere $\\varepsilon >0$ is a fixed number.\n\\end{theorem}\n\n\\begin{proof}\nIt follows from (7), (11), (12), and (57) tha\n\\begin{eqnarray}\n\\left\\Vert z_{n}-\\widetilde{z}_{n}\\right\\Vert &\\leq &\\left(\n1-a_{n}^{2}\\right) \\left\\Vert x_{n}-\\widetilde{x}_{n}\\right\\Vert\n+a_{n}^{2}\\left\\Vert Tx_{n}-\\widetilde{T}\\widetilde{x}_{n}\\right\\Vert \\notag\n\\\\\n&\\leq &\\left[ 1-a_{n}^{2}+a_{n}^{2}\\delta \\right] \\left\\Vert x_{n}\n\\widetilde{x}_{n}\\right\\Vert +a_{n}^{2}L\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n+a_{n}^{2}\\varepsilon \\text{,} \\label{eqn60}\n\\end{eqnarray\n\\begin{eqnarray}\n\\left\\Vert y_{n}-\\widetilde{y}_{n}\\right\\Vert &\\leq &\\left(\n1-a_{n}^{1}\\right) \\delta \\left\\Vert x_{n}-\\widetilde{x}_{n}\\right\\Vert\n+a_{n}^{1}\\delta \\left\\Vert z_{n}-\\widetilde{z}_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1-a_{n}^{1}\\right) L\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n+a_{n}^{1}L\\left\\Vert z_{n}-Tz_{n}\\right\\Vert \\notag \\\\\n&&+\\left( 1-a_{n}^{1}\\right) \\varepsilon +a_{n}^{1}\\varepsilon \\text{,}\n\\label{eqn61}\n\\end{eqnarray\n\\begin{equation}\n\\left\\Vert x_{n+1}-\\widetilde{x}_{n+1}\\right\\Vert \\leq \\delta \\left\\Vert\ny_{n}-\\widetilde{y}_{n}\\right\\Vert +L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n+\\varepsilon \\text{.} \\label{eqn62}\n\\end{equation\nFrom the relations (59), (60), and (61\n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-\\widetilde{x}_{n+1}\\right\\Vert &\\leq &\\delta ^{2}\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}\n\\widetilde{x}_{n}\\right\\Vert \\notag \\\\\n&&+\\left\\{ a_{n}^{1}a_{n}^{2}\\delta ^{2}L+\\left( 1-a_{n}^{1}\\right) \\delta\nL\\right\\} \\left\\Vert x_{n}-Tx_{n}\\right\\Vert \\notag \\\\\n&&+L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert +a_{n}^{1}\\delta L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert \\notag \\\\\n&&+a_{n}^{1}a_{n}^{2}\\delta ^{2}\\varepsilon +\\left( 1-a_{n}^{1}\\right)\n\\delta \\varepsilon 
+a_{n}^{1}\\delta \\varepsilon +\\varepsilon \\text{.}\n\\label{eqn63}\n\\end{eqnarray\nSince $a_{n}^{1}$, $a_{n}^{2}\\in \\left[ 0,1\\right] $ $\\ $and $\\frac{1}{2\n\\leq a_{n}^{1}a_{n}^{2}$ for all $n\\in \n\\mathbb{N}\n\n\\begin{equation}\n1-a_{n}^{1}a_{n}^{2}\\leq a_{n}^{1}a_{n}^{2}\\text{,} \\label{eqn64}\n\\end{equation\n\\begin{equation}\n1-a_{n}^{1}\\leq 1-a_{n}^{1}a_{n}^{2}\\leq a_{n}^{1}a_{n}^{2}\\text{,}\n\\label{eqn65}\n\\end{equation\n\\begin{equation}\n1\\leq 2a_{n}^{1}a_{n}^{2}\\text{.} \\label{eqn66}\n\\end{equation\nUse of the facts $\\delta $, $\\delta ^{2}\\in \\left( 0,1\\right) $, (63), (64),\nand (65) in (62) yields \n\\begin{eqnarray}\n\\left\\Vert x_{n+1}-\\widetilde{x}_{n+1}\\right\\Vert &\\leq &\\left[\n1-a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\right] \\left\\Vert x_{n}\n\\widetilde{x}_{n}\\right\\Vert \\notag \\\\\n&&+a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\left\\{ \\frac{L\\delta \\left(\n1+\\delta \\right) \\left\\Vert x_{n}-Tx_{n}\\right\\Vert }{1-\\delta }\\right. \n\\notag \\\\\n&&\\left. +\\frac{2L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert +2\\delta L\\left\\Vert\nz_{n}-Tz_{n}\\right\\Vert +5\\varepsilon }{1-\\delta }\\right\\} \\text{.}\n\\label{eqn67}\n\\end{eqnarray\nDefin\n\\begin{eqnarray}\n\\beta _{n} &:&=\\left\\Vert x_{n}-\\widetilde{x}_{n}\\right\\Vert \\text{,}\n\\label{eqn68} \\\\\n\\mu _{n} &:&=a_{n}^{1}a_{n}^{2}\\left( 1-\\delta \\right) \\in \\left( 0,1\\right) \n\\text{,} \\notag \\\\\n\\gamma _{n} &:&=\\frac{L\\delta \\left( 1+\\delta \\right) \\left\\Vert\nx_{n}-Tx_{n}\\right\\Vert +2L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert +2\\delta\nL\\left\\Vert z_{n}-Tz_{n}\\right\\Vert +5\\varepsilon }{1-\\delta }\\geq 0\\text{.}\n\\notag\n\\end{eqnarray\nHence, the inequality (66) perform all assumptions in Lemma 2 and thus an\napplication of Lemma 2 to (66) yields \n\\begin{eqnarray}\n0 &\\leq &\\lim \\sup_{n\\rightarrow \\infty }\\left\\Vert x_{n}-\\widetilde{x\n_{n}\\right\\Vert \\label{eqn69} \\\\\n&\\leq &\\lim \\sup_{n\\rightarrow \\infty }\\frac{L\\delta \\left( 1+\\delta \\right)\n\\left\\Vert x_{n}-Tx_{n}\\right\\Vert +2L\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n+2\\delta L\\left\\Vert z_{n}-Tz_{n}\\right\\Vert +5\\varepsilon }{1-\\delta }. \n\\notag\n\\end{eqnarray\nWe know from Theorem 1 that $\\lim_{n\\rightarrow \\infty }x_{n}=u^{\\ast }$ and\nsince $Tu^{\\ast }=u^{\\ast }\n\\begin{equation}\n\\lim_{n\\rightarrow \\infty }\\left\\Vert x_{n}-Tx_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert y_{n}-Ty_{n}\\right\\Vert\n=\\lim_{n\\rightarrow \\infty }\\left\\Vert z_{n}-Tz_{n}\\right\\Vert =0\\text{.}\n\\label{eqn70}\n\\end{equation\nTherefore the inequality (68) becomes \n\\begin{equation}\n\\left\\Vert u^{\\ast }-\\widetilde{u}^{\\ast }\\right\\Vert \\leq \\frac\n5\\varepsilon }{1-\\delta }. \\label{eqn71}\n\\end{equation}\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzatdy b/data_all_eng_slimpj/shuffled/split2/finalzzatdy new file mode 100644 index 0000000000000000000000000000000000000000..3743ed60789070dc13b093e7b2ec49dded4c47a1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzatdy @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\n\nThe observation of a superfluid (SF) to Mott insulator (MI) transition in an \noptical lattice \\cite{greiner_02} have opened a new paradigm to explore the\nphysics of quantum many-body systems. 
Optical lattices are clean and highly\ncontrollable; in contrast, the condensed matter systems of interest are never\ndevoid of impurities. Thus, some of the fundamental questions in condensed \nmatter physics are related to quantum phase transitions in the presence of \ndisorder. The presence of disorder constrains the evolution of a quantum \nsystem in the Hilbert space and gives rise to quantum glassy phases \nlike Bose glass (BG) \\cite{fisher_89,lin_12} and phenomena like Anderson \nlocalization \\cite{anderson_58, schulte_05, billy_08, roati_08}. The early\ntheoretical investigations of disordered Bose Hubbard model (DBHM) \n\\cite{fisher_89, giamarchi_88} showed that there is no MI-SF transition in \npresence of diagonal disorder as the BG phase always occurs as the intermediate\nphase. The theorem of inclusion \\cite{pollet_09, gurarie_09} agrees well with \nthis prediction while identifying BG phase as a Griffiths phase containing \nthe rare regions. In these rare-regions, the energy gap of adding \nanother boson to the system vanishes and thus can be identified as SF islands.\n\n The DBHM have been studied with diverse techniques: \nmean field \\cite{krutitsky_06}, projected Gutzwiller method \\cite{lin_12}, \nsite independent and multisite mean-field method \n\\cite{buonsante_07, pisarski_11}, stochastic mean field \\cite{bissbort_10},\nquantum Monte Carlo \\cite{gimperlein_05, soyler_11, sengupta_07}, \ndensity matrix renormalisation group (DMRG) \\cite{rapsch_99, gerster_16} for \n1D system and numerous others \n\\cite{pai_96, nikolay_04, kruger_09, kruger_11,carrasquilla_10}.\nIn all the cases the introduction of disorder leads to the emergence of BG \nphase which is characterized by finite compressibility and zero \nsuperfluid stiffness. In the present work, we study 2D DBHM at finite \ntemperatures using single site Gutzwiller and cluster Gutzwiller mean field \ntheories. More importantly, we examine the effect of the artificial gauge\nfields in DBHM. Here, it must be emphasized that most of the theoretical \ninvestigations of DBHM are at zero temperatures, but the experimental \nrealizations are at finite temperatures. This gap is addressed in the present \nwork by examining the consequent effects of thermal fluctuations to the BG \nphase. One key finding is the presence of normal fluid (NF) phase at finite \ntemperatures and melting of Bose glass phase. The latter is consistent with \nthe findings reported in ref. \\cite{thomson_16}. \n\n In optical lattices it is possible to create an equivalent of Lorentz force \nwith artificial gauge fields\n\\cite{lin_09,lin_11,dalibard_11,hof_76, garcia_12, aidelsburger_11} and\nis associated with what is referred to as synthetic magnetic field.\nThe introduction of the artificial gauge field breaks time reversal symmetry\nand modify the band structure. Through the introduction of tunable artificial\ngauge field it has been possible to observe the single particle mobility \nedge \\cite{gadway_18} in zig-zag chains. Apart from the transport properties, \nthe localization effect of the artificial gauge field can enhance the glassy \nfeatures of DBHM. Indeed, our study reveals that localization in DBHM\ncan be controlled through the artificial gauge field. For this we use\nEdward Anderson order parameter (EAOP) as a measure of localization \nwhile tuning the strength of artificial gauge field. 
The EAOP is a measure of \nnumber fluctuation over disorder realizations and it is finite for the BG \nphase, but zero and close to zero for the MI and SF\nphases, respectively. From the values of the EAOP we find that\nthere is an enhancement of the BG phase in the presence of the artificial gauge\nfield. From the experimental point of view this is important as it could\nfacilitate detailed studies of the BG phase.\n\n Experimentally, the DBHM can be realized by the addition of speckle-type \ndisorder \\cite{clement_05, clement_08, white_09}, or by the generation of an \nincommensurate multichromatic lattice \\cite{damski_03,fallani_07}. Indirect\nmeasurements of the SF-BG transition have been reported in 1D \\cite{gadway_11} and \n3D \\cite{pasienski_10,meldgin_16} systems through transport and coherence \nmeasurements. In 2D, the observation of center of mass dynamics \\cite{yan_17} \nhas been theoretically proposed as a method to detect the BG phase, \nwhile ref. \\cite{delande_09} suggests measuring the radius of the atomic cloud.\nReplica symmetry breaking \\cite{thomson_14, morrison_08} has also been \nproposed as a possible detection scheme. In spite of these various \nproposals and the progress towards the realization of a Bose glass, clear \nand unambiguous experimental evidence of the BG phase is yet to be achieved. \nIn future studies, quantum gas microscopes \\cite{bakr_09} could probe the\nproperties of the BG phase as they can resolve the SF islands in the BG phase.\nAnd, recent work has proposed them as an experimental tool to detect \nBG phases \\cite{thomson_16}. \n \n This paper is organized as follows. In Section II we give an account of \nthe single site and cluster Gutzwiller mean field theories. This is then \nfollowed by a description of the artificial gauge field and observable \nmeasures to distinguish different phases in Sections III and IV. Then, in \nSection V we provide a detailed description of the results obtained from our\nstudies and discuss our observations. And, we then conclude in Section VI.\n\n\n\n\\section{Model and Gutzwiller mean field theory}\n\\label{model}\n\nThe DBHM for a square lattice with nearest neighbour hopping is defined by the \nHamiltonian\n\\begin{eqnarray}\n \\hat{H} &= &-\\sum_{p,q}\\left [ J_x\\left( \\hat{b}_{p+1, q}^{\\dagger}\n \\hat{b}_{p,q} + {\\rm H.c.} \\right ) \n + J_y\\left( \\hat{b}_{p, q+1}^{\\dagger}\n \\hat{b}_{p,q} + {\\rm H.c.}\\right ) \\right ]\n \\nonumber \\\\\n && + \\sum_{p,q}\\hat{n}_{p,q} \\left [\\frac{U}{2}(\\hat{n}_{p,q}\n -1) -\\tilde\\mu_{p,q} \\right],\n \\label{dbhm}\n\\end{eqnarray}\nwhere $p$ ($q$) is the lattice index along the $x$ ($y$) axis, \n$\\hat{b}_{p,q}^{\\dagger}$ ($\\hat{b}_{p,q}$) is the creation (annihilation) \noperator for a boson at the $(p,q)$ lattice site, and $\\hat{n}_{p,q}$ is the boson \ndensity operator; $J_x$ ($J_y$) is the hopping strength between two nearest \nneighbour sites along the $x$ ($y$) axis, $U>0$ is the on-site inter-atomic \ninteraction strength, and $\\tilde\\mu_{p,q} = \\mu - \\epsilon_{p,q}$ is the local\nchemical potential. The disorder is introduced through the random energy \noffsets $\\epsilon_{p,q}$, which are uniformly distributed independent random \nnumbers $r_{p,q} \\in [-D, D]$, where $D$ is the bound of the \nrandom numbers. Depending on the ratio of $J$ and $U$ the above Hamiltonian \ncan describe three possible phases of the system --- MI, BG and \nSF~\\cite{fisher_89}.
In the strong on-site interaction limit \n$(J\/U\\rightarrow 0)$ the system is either in the MI phase (gapped phase), or \nin the BG phase. Whereas the system is in SF phase when the tunneling overcomes \nrepulsive interaction. \n \n\n\n\n\\subsection{Zero temperature Gutzwiller mean-field theory}\n\\label{gmf}\n\n In the present work we employ the Gutzwiller mean-field theory to compute the \nproperties of the DBHM. In this section we describe two variants of the \nGutzwiller mean field theory: First is the single site Gutzwiller mean-field \n(SGMF) method, where the lattice sites are correlated through a scalar mean \nfield $\\phi$ and cannot describe entangled states such as the quantum Hall state. \nAnd, the second is the cluster Gutzwiller mean field (CGMF) method, which \nincorporates the correlation within a cluster of neighbouring lattice sites \nexactly and inter-cluster correlation through $\\phi$. A larger cluster captures the\ncorrelation effects better but at the cost of higher computation.\n\n\n\n\\subsubsection{SGMF method}\n\\label{sgmf}\n\nIn the SGMF method, $\\hat{b}_{p, q}$ ( $\\hat{b}^\\dagger_{p, q}$) at a \nparticular lattice site $(p,q)$ is decomposed into mean field \n$\\phi_{p, q}$ ($\\phi^{*}_{p, q}$) and fluctuation $\\delta \\hat{b}_{p, q}$\n($\\delta \\hat{b}^{\\dagger}_{p, q}$) parts as\n\\begin{subequations}\n\\begin{eqnarray}\n \\hat{b}_{p, q} &=& \\phi_{p,q} + \\delta \\hat{b}_{p, q}, \\\\\n \\hat{b}^{\\dagger}_{p, q} &=& \\phi^{*}_{p, q} + \\delta \\hat{b}^{\\dagger}_{p, q}\n \\label{decompose} \n\\end{eqnarray}\n\\end{subequations}\nwhere, $\\phi_{p,q} = \\langle\\hat{b}_{p,q}\\rangle$, and $\\phi^{*}_{p, q} = \n\\langle\\hat{b}^{\\dagger}_{p,q}\\rangle$ are the mean field and its complex\nconjugate, respectively. The expectations are defined with respect to the \nground state of the system. Employing this decomposition, the Hamiltonian\nin Eq. (\\ref{dbhm}) is reduced to the SGMF Hamiltonian\n\\begin{eqnarray}\n \\hat{H}^{\\rm MF} &=& \\sum_{p, q}\\Biggr\\{-J_x\n \\bigg [ \\Big(\\hat{b}_{p + 1, q}^{\\dagger}\\phi_{p, q} \n + \\phi_{p + 1, q}^{*}\\hat{b}_{p, q} \n - \\phi_{p+1,q}^{*}\\phi_{p, q}\\Big) \n \\nonumber\\\\ \n && + {\\rm H.c.}\\bigg ] \n - J_y\\bigg [ \\Big(\\hat{b}_{p, q+1}^{\\dagger} \\phi_{p, q} \n + \\phi_{p, q+1}^{*}\\hat{b}_{p, q} \n - \\phi_{p, q+1}^{*}\\phi_{p, q}\\Big)\n \\nonumber\\\\ \n && + {\\rm H.c.}\\bigg ] \n + \\biggr[\\frac{U}{2}\\hat{n}_{p, q}\n (\\hat{n}_{p, q}-1) - \\tilde{\\mu}_{p, q}\n \\hat{n}_{p, q}\\biggr] \\Bigg \\},\n\\label{mf_hamil}\n\\end{eqnarray}\nwhere terms up to linear in fluctuation operators are considered and those\nquadratic in fluctuation operators are neglected. The total Hamiltonian \nin the above expression can be rewritten as \n$\\hat{H}^{\\rm MF} = \\sum_{p,q}\\hat{H}^{\\rm MF}_{p,q}$, where \n$\\hat{H}^{\\rm MF}_{p,q}$ is the single site mean field Hamiltonian. The mean \nfield $\\phi_{p, q}$ can be identified as the SF order parameter which\ndefines the MI to BG phase-transition in DBHM. Thus, $\\phi_{p, q}$ is\nzero, when the system is in MI phase, and finite in BG as well as in the SF phase. \n\nTo compute the ground state of the system the Hamiltonian matrix of \n$\\hat{H}^{\\rm MF}_{p,q}$ can be diagonalized for each lattice site $(p, q)$ separately. \nAnd, then the ground state of the system is direct product of the single\nsite ground states $\\ket{\\psi}_{p,q}$. 
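Before writing the Gutzwiller state explicitly, the diagonalize-and-update cycle just described can be summarized in a short numerical sketch. The code below is only illustrative and is not the implementation used for the results reported here: it assumes a truncated Fock basis, periodic boundaries, isotropic hopping $J_x = J_y = J$ and the numpy library, and the variable names and the simple synchronous update are illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Lx, Ly = 12, 12                    # lattice size
n_max = 4                          # truncation of the on-site Fock basis
J, U, mu, D = 0.02, 1.0, 0.5, 0.6
eps = rng.uniform(-D, D, size=(Lx, Ly))      # random on-site energy offsets
mu_loc = mu - eps                            # local chemical potential

b = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # annihilation operator
n_op = np.diag(np.arange(n_max + 1.0))                # number operator

phi = np.full((Lx, Ly), 0.1 + 0j)  # initial guess for the SF order parameter

def neighbour_field(phi, p, q):
    """Sum of the mean fields on the four nearest neighbours."""
    return (phi[(p + 1) % Lx, q] + phi[(p - 1) % Lx, q]
            + phi[p, (q + 1) % Ly] + phi[p, (q - 1) % Ly])

for sweep in range(500):           # self-consistency iterations
    new_phi = np.empty_like(phi)
    for p in range(Lx):
        for q in range(Ly):
            eta = J * neighbour_field(phi, p, q)
            # single-site mean-field Hamiltonian at (p, q)
            h = (-(eta * b.conj().T + np.conj(eta) * b)
                 + 0.5 * U * n_op @ (n_op - np.eye(n_max + 1))
                 - mu_loc[p, q] * n_op)
            w, v = np.linalg.eigh(h)
            g = v[:, 0]                        # single-site ground state
            new_phi[p, q] = g.conj() @ b @ g   # phi_{p,q} = <b_{p,q}>
    if np.max(np.abs(new_phi - phi)) < 1e-8:
        phi = new_phi
        break
    phi = new_phi
\end{verbatim}
A converged $\phi_{p,q}$ that vanishes signals a locally insulating site, while a finite value signals SF character, in line with the discussion above.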
Using the Gutzwiller approximation, the \nground state of the system is\n\\begin{eqnarray}\n \\ket{\\Psi_{\\rm GW}} = \\prod_{p, q}\\ket{\\psi}_{p, q}\n = \\prod_{p, q} \\sum_{n = 0}^{N_{\\rm b}}c^{(p,q)}_n\n \\ket{n}_{p, q},\n \\label{gw_state}\n\\end{eqnarray}\nwhere $N_b$ is the maximum allowed occupation number basis (Fock space basis),\nand $c^{(p,q)}_n$ are the coefficients of the occupation number state $\\ket{n}$\nat the lattice site $(p,q)$. From $\\ket{\\Psi_{\\rm GW}}$ we can calculate\n$\\phi_{p, q}$, the SF order parameter, as\n\\begin{equation}\n\\phi_{p, q} = \\langle\\Psi_{\\rm GW}|\\hat{b}_{p, q}|\\Psi_{\\rm GW}\\rangle \n = \\sum_{n = 0}^{N_{\\rm b}}\\sqrt{n} \n {c^{(p,q)}_{n-1}}^{*}c^{(p,q)}_{n}.\n\\label{gw_phi} \n\\end{equation}\nFrom the above expression it is evident that $\\phi_{p, q}$ is zero in the MI \nphase as only one occupation number state $\\ket{n}$ contributes to \n$\\ket{\\psi}_{p,q}$ and hence only one $c^{(p,q)}_n$ has nonzero value. \nSimilarly, the occupancy and number fluctuation at a lattice site are\n\\begin{eqnarray}\n \\langle \\hat{n}_{p,q}\\rangle &=& \n \\sum_{n = 0}^{N_{\\rm b}} | c_n^{(p,q})|^2 n_{p,q},\\label{number} \\\\\n \\delta n _{p,q} &=& \\sqrt{\\langle \\hat{n}_{p,q}^2\\rangle \n - \\langle \\hat{n}_{p,q}\\rangle ^2 }\n \\label{deltan} \n\\end{eqnarray}\nIn the MI phase $\\delta n _{p,q}$ is zero, which makes MI phase incoherent. In \nthe BG and SF phase $\\delta n _{p,q}$ has nonzero value, but the value of \n$\\delta n _{p,q}$ in the BG phase is very small which arises due to\nthe presence of SF islands in the BG phase. The nonzero and \nrelatively large $\\delta n _{p,q}$ in the SF phase indicates strong phase \ncoherence. Thus $\\delta n _{p,q}$ can also be considered as the\norder parameter for MI-BG phase transition. \n\n\n\n\n\\subsubsection{CGMF method}\n\\label{cgmf}\n In the CGMF method, to incorporate the hopping term exactly and hence improve\nthe correlation effects, the total lattice considered is partitioned into \nclusters. That is, for an optical lattice of dimension $K\\times L$, we can \nseparate it into $W$ clusters ($C$) of size $M\\times N$, that is \n$W=(K\\times L)\/(M\\times N)$. Thus, the case of CGMF with $M = N = 1$ is \nequivalent to SGMF. In CGMF, the kinetic energy or the hopping term is decomposed\ninto two types. First is the intra-cluster or hopping within the lattice sites\nin a cluster, and second is the inter-cluster which is between neighbouring \nlattice sites which lie on the boundary of different clusters. The details of\nthe present implementation of the CGMF method is reported in ref. \\cite{bai_18} \nand the Hamiltonian of a single cluster is\n\\begin{eqnarray}\n \\hat{H}_C & =& -{\\sum_{p, q \\in C}}'\\biggr[J_x \n \\hat{b}_{p+1, q}^{\\dagger}\\hat{b}_{p, q} \n + J_y \\hat{b}_{p, q+1}^{\\dagger}\\hat{b}_{p, q}\n + {\\rm H.c.}\\biggr]\\nonumber\\\\\n &&-\\sum_{p, q\\in \\delta C}\n \\biggr[J_x (\\phi^c_{p+1,q})^{\\ast}\\hat{b}_{p, q} \n + J_y (\\phi^c_{p,q+1})^{\\ast}\\hat{b}_{p, q}\n + {\\rm H.c.}\\biggr]\\nonumber\\\\\n && +\\sum_{p, q \\in C}\n \\biggr[\\frac{U}{2}\\hat{n}_{p, q}(\\hat{n}_{p, q}-1) - \n \\tilde{\\mu}_{p, q}\\hat{n}_{p, q}\\biggr] \n\\label{cg_hamil} \n\\end{eqnarray}\nwhere $(\\phi^c_{p,q})^{\\ast} = \\sum_{p^{'},q^{'}\\not\\in C} \\langle \nb_{p^{'},q^{'}}\\rangle$ is the SF order parameter at the lattice site $(p, q)$ which lies\nat the boundary of neighbouring cluster. 
The prime in the summation of the \nfirst term is to indicate that the $(p+1,q)$ and $(p,q+1)$ lattice points are\nalso within the cluster. And, in the second term $\\delta C$ denotes the lattice\nsites at the boundary of the cluster. The matrix element of $\\hat{H}_C$ is\ndefined in terms of the cluster basis states\n\\begin{equation}\n \\ket{\\Phi_c}_\\ell = \\prod_{q=0}^{N-1}\\prod_{p=0}^{M-1} \\ket{n_p^q},\n\\end{equation}\nwhere $\\ket{n_p^q}$ is the occupation number basis at the $(p,q)$ lattice\nsite, and $\\ell \\equiv \\{n_0^0, n_1^0, \\ldots, n_{M-1}^0, n_0^1, n_1^1,\\ldots\nn_{M-1}^1, \\ldots, n_{M-1}^{N-1}\\}$ is \nthe index quantum number to identify the cluster state. After diagonalizing \nthe Hamiltonian, we can get the ground state of the cluster as\n\\begin{equation}\n |\\Psi_c\\rangle = \\sum_{\\ell} C_\\ell\\ket{\\Phi_c}_\\ell.\n\\end{equation}\nwhere $C_\\ell$ is the coefficient of the cluster state.\nThe ground state of the entire $K\\times L$ lattice, like in SGMF, is the direct\nproduct of the cluster ground states\n\\begin{equation}\n \\ket{\\Psi^c_{\\rm GW}} = \\prod_k\\ket{\\Psi_c}_k\n \\label{cgw_state}\n\\end{equation}\nwhere, $k$ is the cluster index and varies from 1 to \n$W$. The SF order parameter $\\phi$ is computed similar \nto Eq.~(\\ref{gw_phi}) as\n\\begin{equation}\n \\phi_{p,q} = \\bra{\\Psi^c_{\\rm GW}}\\hat{b}_{p,q}\\ket{\\Psi^c_{\\rm GW}}.\n\\label{cgw_phi} \n\\end{equation}\nWith respect to cluster basis, the average occupancy and number fluctuation \nof lattice sites in the $k$th cluster are\n\\begin{eqnarray}\n \\langle \\hat{n}\\rangle _k &=& \\frac{\\sum_{p,q\\in C} \\langle\\hat{n}_{p,q}\n \\rangle _k}{MN} \\label{cnumber} \\\\ \n (\\delta n)_k &=& \\sqrt{\\langle\\hat{n}^2\\rangle _k - \n \\langle\\hat{n}\\rangle^2_k}.\n \\label{cdeltan} \n\\end{eqnarray}\nFor the entire lattice, the average density can be defined as the mean of the\naverage occupancy of the clusters.\n\n\n\n\n\\subsection{Finite temperature Gutzwiller mean field theory}\n\\label{gutz_t}\n\n To incorporate finite temperature effects we require the entire set of \neigenvalues and eigenfunctions obtained from the diagonalization of the mean\nfield Hamiltonian. So, in the case of SGMF we use all the single site \neigenvectors $\\ket{\\psi}^l_{p,q}$ and corresponding eigenvalues $E^l_{p,q}$ to\ndefine the single site partition function\n\\begin{eqnarray}\n Z = \\sum_{l=1}^{N_b}e^{-\\beta E_l},\n \\label{pf}\n\\end{eqnarray}\nwhere $\\beta = 1\/k_BT$, $T$ being the temperature of the system. Since the \nenergy $E_l$ is scaled with respect to $U$, $T$ is in units of $U\/k_{\\rm B}$ or \nin other words in the rest of the paper temperature is defined in terms\nof the dimensionless unit $k_{\\rm B}T\/U$. In a similar way, for the CGMF we \ncan define the cluster partition function in terms of the eigenfunctions \n$\\ket{\\Psi_c}^l$ and the corresponding eigenvalues. \nUsing the above description, the thermal average of the SF order parameter at \nthe $(p,q)$ lattice site is\n\\begin{equation}\n \\langle \\phi_{p,q}\\rangle = \\frac{1}{Z}\\sum_{l}{_k^l\\bra{\\Psi_c}}\n \\hat{b}_{p,q} e^{-\\beta E_l} \\ket{\\Psi_c}_k^l,\n\\label{phi_t}\n\\end{equation}\nwhere $\\langle\\ldots\\rangle$ is used to represent thermal averaging and \n$\\ket{\\Psi_c}_k^l$ is the $l$th excited state of the $k$th cluster within \nwhich the $(p,q)$ lattice site lies. 
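In practice, this Boltzmann-weighted average can be evaluated directly from the eigenpairs of the mean-field Hamiltonian. The snippet below is only a schematic illustration (assuming numpy): \texttt{A} stands for the matrix of any single-site or cluster operator, such as $\hat{b}_{p,q}$, in the basis used for the diagonalization, and the same weighting applies to the other observables discussed below.
\begin{verbatim}
import numpy as np

def thermal_average(A, E, V, kT):
    """<A> = (1/Z) sum_l <l|A|l> exp(-E_l/kT), with V[:, l] the l-th eigenvector."""
    E = E - E.min()                      # shift cancels in the ratio, avoids overflow
    w = np.exp(-E / kT)
    Z = w.sum()                          # partition function
    diag = np.einsum('il,ij,jl->l', V.conj(), A, V)   # <l|A|l> for every eigenstate
    return (w * diag).sum() / Z
\end{verbatim}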
Similarly, the occupancy or the density \ncan be computed as \n\\begin{equation}\n \\langle\\langle \\hat{n}_{p,q} \\rangle\\rangle = \\frac{1}{Z}\\sum_{l}\n {_k^l\\bra{\\Psi_c}}\\hat{n}_{p,q}e^{-\\beta E_l}\n \\ket{\\Psi_c}^l_k,\n \\label{number_t}\n\\end{equation}\nwhere, following the notations in Eq. (\\ref{number}) and (\\ref{cnumber}), the \nadditional $\\langle\\ldots\\rangle$ represents thermal averaging. Once we \nobtain $\\langle\\langle \\hat{n}_{p,q} \\rangle\\rangle$, the average density or \noccupancy is \n$\\langle\\rho\\rangle=\\langle n \\rangle \n= \\sum_{p,q}\\langle\\langle \\hat{n}_{p,q} \\rangle\\rangle \/(K\\times L)$. Then, \nlike defined earlier, the number fluctuation is \n\\begin{equation}\n \\delta n_{p,q} = \\sqrt{\\langle\\langle \\hat{n}^2_{p,q}\\rangle\\rangle\n -\\langle\\langle\\hat{n}_{p,q}\\rangle\\rangle^2}.\n\\end{equation}\nA new feature of considering finite temperature effects is, it is possible to \nhave vanishing $\\langle \\phi_{p,q}\\rangle$ but with non-integer \n$\\langle\\langle \\hat{n}_{p,q} \\rangle\\rangle$. This heralds a new \nphase in the phase diagram and is referred to as the normal fluid (NF). \nThus, at finite temperatures SF order parameter can act as the order parameter \nfor the NF-BG transition. \nCompared to the NF phase, the MI on the other hand has vanishing \n$\\langle \\phi_{p,q}\\rangle$ and integer \n$\\langle\\langle \\hat{n}_{p,q} \\rangle\\rangle$. So, with vanishing \n$\\langle \\phi_{p,q}\\rangle$ the change from \ninteger value to non-integer $\\langle\\langle \\hat{n}_{p,q} \\rangle\\rangle$\ncan be identified as MI-NF transition.\n\n\n\n\n\\section{Artificial gauge field}\n\\label{art_gauge}\n Artificial gauge fields \\cite{lin_09,lin_11,dalibard_11} engineered through \noptical fields can create synthetic magnetic fields for charge neutral \nultracold atoms in optical lattices. This generates an equivalent of Lorentz \nforce for these atoms, and optical lattice is, then, endowed with properties\nanalogous to the quantum Hall system. Such a system is an excellent model \nsystem to study the physics of strongly correlated states like quantum Hall\nstates and their properties. The same logic also applies to the DBHM.\nIn the Hamiltonian description, the presence of an artificial gauge \nfield induces a complex hopping parameter $J \\rightarrow J\\exp(i\\Phi)$ and\naccordingly the SGMF Hamiltonian in Eq.~(\\ref{mf_hamil}) is modified to\n\\begin{eqnarray}\n \\hat{H}^{\\rm MF} &=& \\sum_{p, q}\\Biggr\\{-J_x e^{i\\Phi}\n \\bigg [ \\Big(\\hat{b}_{p + 1, q}^{\\dagger}\\phi_{p, q} \n + \\phi_{p + 1, q}^{*}\\hat{b}_{p, q} \n \\nonumber\\\\ \n && - \\phi_{p+1,q}^{*}\\phi_{p, q}\\Big) \n + {\\rm H.c.}\\bigg ] \n - J_y\\bigg [ \\Big(\\hat{b}_{p, q+1}^{\\dagger} \\phi_{p, q} \n \\nonumber\\\\ \n && + \\phi_{p, q+1}^{*}\\hat{b}_{p, q} \n - \\phi_{p, q+1}^{*}\\phi_{p, q}\\Big)\n + {\\rm H.c.}\\bigg ] \n \\nonumber\\\\ \n && + \\biggr[\\frac{U}{2}\\hat{n}_{p, q}\n (\\hat{n}_{p, q}-1) - \\tilde{\\mu}_{p, q}\n \\hat{n}_{p, q}\\biggr] \\Bigg \\},\n\\label{mf_hamil_gauge}\n\\end{eqnarray}\nwhere, $\\Phi$ is the phase an atom acquires when it traverses around a unit\ncell or plaquette. The artificial gauge field is considered in the Landau gauge \nand the phase for hopping along $x$ direction arises via the Peierl's\nsubstitution \\cite{hof_76, garcia_12}. The artificial gauge field, then,\ncreates a staggered synthetic magnetic flux \\cite{aidelsburger_11} along \n$y$ direction. 
The phase can also be defined in terms of the $\\alpha$, the \nflux quanta per plaquette, through the relation $\\Phi = 2\\pi\\alpha q$, and \nthe flux quanta is restricted in the domain $0\\le \\alpha\\le 1\/2$. In the present work, we \nexamine the properties of bosons in presence of artificial gauge field while \nexperiencing a random local chemical potential. Although, the effect of an \nartificial gauge field on BHM is quite well studied, the same is not true of \nDBHM. \n\n \n\n\n\\section{Characterization of states}\n\n Each of the low temperature phases supported by DBHM has special properties \nand this leads to unique combinations of order parameters as signatures of \neach phase. The values of these order parameters also determine\nthe phase boundaries. In Table.~\\ref{table:tab}, we list the order parameters\ncorresponding to each phase. \n\n\n\n\n\\subsection{Superfluid stiffness and compressibility}\n\nPhase coherence is a characteristic property of the SF phase, and it is \nabsent in the other phases (MI, NF and BG) supported by DBHM. Thus in the \nSF phase it requires finite amount of energy to alter the phase coherence, or \nin other words, it acquires stiffness towards phase change. This property is \nreferred to as the superfluid stiffness $\\rho_s$, and hence plays an important\nrole in determining the phase boundary between BG and SF phase. To \ncompute $\\rho_s$, a twisted boundary condition (TBC) is imposed on the state. \nIf the TBC is applied along the $x$ direction, the hopping term in the \nDBHM is transformed as \n\\begin{equation}\n\\!\\!\\!\\!J_x(\\hat{b}_{p+1,q}^{\\dagger}\\hat{b}_{p,q} + {\\rm H.c})\\rightarrow \n J_x(\\hat{b}_{p+1,q}^{\\dagger}\\hat{b}_{p,q}e^{i2\\pi\\varphi\/L} \n + {\\rm H.c}),\n\\label{twist}\n\\end{equation}\nwhere, $\\varphi$ is the phase shift or twist applied to the periodic boundary\ncondition, $L$ is the size of the lattice along $x$ direction, and \n$2\\pi\\varphi\/L$ is phase shift of an atom when it hops between nearest \nneighbours. Accordingly, $\\rho_s$ is computed employing the following \nexpression \\cite{gerster_16}\n\\begin{equation}\n \\rho_s=\\frac{L}{8\\pi^2}\\frac{\\partial^2E_0}{\\partial\\varphi^2}|_{\\varphi = 0}.\n\\label{stiff}\n\\end{equation}\nwhere $E_0$ is the ground state energy with TBC.\nThe SF phase is a compressible state as $\\delta n$ is finite. However, MI phase\nand strongly correlated phase like quantum Hall states are incompressible. \nThus, the compressibility $\\kappa$ is a property of the system which can be\nemployed a diagnostic to support the phases determined through the order\nparameters. By definition, $\\kappa$ is given by\n\\begin{equation}\n \\kappa=\\frac{\\partial\\langle\\hat{n}\\rangle}{\\partial\\mu}.\n \\label{sfactor}\n\\end{equation}\nThat is, $\\kappa$ is the sensitivity of $n$ to the change of $\\mu$.\n\n\n\n\\begin{figure}\n \\includegraphics[width=8cm]{eaop_dord.jpg}\n \\caption{$q_{\\text{\\tiny{EA}}}$ as a function of $\\mu\/U$ and $J\/U$ at zero \n temperature. (a)-(d) show $q_{\\text{\\tiny{EA}}}$ using SGMF method\n and (e)-(h) are obtained employing the CGMF method with 2$\\times$ 2\n cluster. (c)-(d) show the enhancement of the BG phase region in the \n presence of an artificial gauge field with $\\alpha = 1\/4$ compared to \n (a)-(b) corresponding to $\\alpha = 0$ with disorder strengths\n $D\/U= 0.2$ and $0.6$ respectively. 
This enhancement is also\n captured in (g)-(h) for $\\alpha = 1\/4$ compared to (e)-(f) using the\n CGMF method.The increase of BG regions with an increase of $D\/U$ is\n also notable for both in the presence and in the absence of an artificial\n gauge field. In all the above figures, $q_{\\text{\\tiny{EA}}}$ is\n obtained by averaging over 50 different disorder distributions. }\n \\label{eaop-t0}\n\\end{figure}\n\n\n\\subsection{Edwards-Anderson order parameter}\n\nFor a disordered system the natural and hence, more appropriate order parameter\nis the Edwards-Anderson order parameter (EAOP). It can distinguish the \nGriffiths phase by its non zero value and can describe the effect of disorder \nbetter than other properties like $\\rho_s$, $\\kappa$, structure factor, etc.\nIn the studies with mean field theory, EAOP was first introduced to detect the \nnon trivial breaking of ergodicity. Since then various type of EAOP have been \nproposed in literature \\cite{morrison_08,graham_09,thomson_14,khellil_16}. In \nour study we consider the EAOP of the following form \\cite{thomson_14}\n\\begin{eqnarray}\n q_{\\text{\\tiny{EA}}} = \\overline{\\langle{\\hat{n}}_{p,q}\\rangle^2}\n -\\overline{\\langle{\\hat{n}}_{p,q}\\rangle}^2,\n\\label{eaop}\n\\end{eqnarray} \nwhere, $n_{p,q}$ is the number of atoms at the $(p,q)$ lattice site. The above \nexpression involves two types of averages: $\\langle\\cdots\\rangle$ represents\nthermal; and $\\overline{\\cdots}$ indicates average over disorder distribution.\nFor the $\\langle\\cdots\\rangle$ we consider all the excited states.\nFrom the definition, as the MI phase is identified by integer \nvalues of $\\langle{\\hat{n}}_{p,q}\\rangle$ $q_{\\text{\\tiny{EA}}}$ is zero. In \nthe SF phase $\\langle{\\hat{n}}_{p,q}\\rangle$ is real and $\\delta n_{p,q}$ is \nfinite, however, for the clean system $q_{\\text{\\tiny{EA}}}$ is \nzero as $\\langle{\\hat{n}}_{p,q}\\rangle$ is homogeneous. With disorder, \n$\\langle{\\hat{n}}_{p,q}\\rangle$ is inhomogeneous in the SF phase and hence, \n$q_{\\text{\\tiny{EA}}}$ is finite but small $O(10^{-3})$ \\cite{thomson_16}. In \nthe BG phase $q_{\\text{\\tiny{EA}}}$ is relatively large due to correlation \nbetween number density and disorder. Thus using $q_{\\text{\\tiny{EA}}}$ the BG \nphase is distinguishable from MI and NF phases in the present in the system. 
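Numerically, $q_{\text{\tiny{EA}}}$ can be estimated by combining the two averages entering its definition: a thermal (or ground-state) average of the local density for each disorder realization, followed by a variance over realizations. The sketch below (assuming numpy) is only schematic; \texttt{solve\_dbhm} stands for any routine, such as the Gutzwiller solvers described above, that returns the site-resolved $\langle\hat{n}_{p,q}\rangle$ for one disorder realization, and the final average over lattice sites is an illustrative choice to obtain a single number.
\begin{verbatim}
import numpy as np

def edwards_anderson(solve_dbhm, n_realizations, **params):
    # <n_{p,q}> for every disorder realization; shape (R, Lx, Ly)
    samples = np.array([solve_dbhm(seed=r, **params)
                        for r in range(n_realizations)])
    mean_of_sq = (samples ** 2).mean(axis=0)   # overline{ <n>^2 }
    sq_of_mean = samples.mean(axis=0) ** 2     # overline{ <n> }^2
    q_ea = mean_of_sq - sq_of_mean             # site-resolved q_EA
    return q_ea.mean()                         # averaged over lattice sites
\end{verbatim}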
In \nzero temperature limit we define $q_{\\text{\\tiny{EA}}}$ as\n\\begin{eqnarray}\n q_{\\text{\\tiny{EA}}}|_{(T=0)} = \\overline{\\langle{\\hat{n}}_{p,q}\\rangle^2}\n -\\overline{\\langle{\\hat{n}}_{p,q}\\rangle}^2,\n \\label{eaop0}\n\\end{eqnarray} \nwhere we consider expectations only for the ground state.\n\\begin{table}[h!]\n \\begin{ruledtabular}\n \\begin{tabular}{lr} \n \\textbf{Quantum phase} & \\textbf{Order parameter} \\\\\n \\colrule\n Superfluid (SF) & $q_{\\text{\\tiny{EA}}} > 0$, $\\rho_s > 0$, $\\kappa > 0$, $\\phi\\ne 0$ \\\\\n Mott insulator (MI) & $q_{\\text{\\tiny{EA}}} = 0$, $\\rho_s = 0$, $\\kappa = 0$, $\\phi = 0$ \\\\\n Bose glass (BG) & $q_{\\text{\\tiny{EA}}} > 0$, $\\rho_s = 0$, $\\kappa > 0$, $\\phi\\ne 0$ \\\\\n Normal fluid (NF) & $q_{\\text{\\tiny{EA}}} > 0$, $\\rho_s = 0$, $\\kappa > 0$, $\\phi= 0$\\\\\n \\end{tabular}\n \\caption{ Classification of quantum phases and the order parameters \n supported by DBHM at zero and finite temperatures.}\n \\label{table:tab}\n \\end{ruledtabular}\n\\end{table}\n\n\n\n\\section{Results and Discussions}\n\\label{results}\n\n To compute the ground state of the system and determine the phase diagram,\nwe scale the parameters of the DBHM Hamiltonian with respect to the\ninteraction strength $U$. So, the relevant parameters of the model are\n$J\/U$, $\\mu\/U$ and $D\/U$. We, then, determine the phase diagram of the DBHM\nin the $J\/U-\\mu\/U$ plane for different values of $D\/U$, and one unique feature\nof the model is the emergence of the BG phase. The local glassy nature of\nthe BG phase leads to very different properties from the incompressible and\ngapped MI phase, and compressible and gapless SF phase. Thus as mentioned\nearlier, one of the key issues in the study of DBHM is to identify appropriate\norder parameters to distinguish different phases. And, in particular, to\ndetermine the BG phase without ambiguity based on its local properties. To\nconstruct the phase diagram, we consider a $12\\times 12$ square\nlattice superimposed with a homogeneous disorder distribution. \n\nIn DBHM, depending on the magnitude of $D\/U$, the phase diagrams can be\nclassified into three broad categories. First, at low disorder strength\n$D\/U \\leqslant 0.1$, BG phase emerge in the phase diagram. Second, at moderate\ndisorder strengths $0.2\\leqslant D\/U \\leqslant 1$, the domain of BG phase is \nenhanced. This is the most important regime to explore the physics of BG \nphase. The distinctive features in this range consist of shrinking of MI phase\nand enhancement of the BG phase. Finally, at very high disorder strengths \n$D\/U > 1$, the MI phase disappears and DBHM supports only two phases, BG and SF.\nFor reference the selected zero temperature results are shown in the \nAppendix.\n\n\n\n\n\\subsection{Zero temperature results}\n\\label{t0}\n\n The synthetic magnetic field arising from the introduction of the artificial \ngauge field localizes the bosons and suppresses their itinerant property. \nThis manifests as a larger MI lobe in the presence of artificial gauge field. \nHowever, locally the combined effect of disorder and artificial gauge field \nfavours the formation of SF islands. This synergy, then, creates a larger domain \nof BG phase in the phase diagram. In terms of identifying the phase boundaries, \nunlike in the $\\alpha=0$ where $\\rho_s$ has linear dependence\non $J\/U$ in the SF domain, $\\rho_s$ cannot be used here as it exhibits no dependence \non $J\/U$. 
The two possible causes of this are: the TBC required to compute \n$\\rho_s$ modifies the magnetic unit cell associated with the chosen value of \n$\\alpha$; and with $\\alpha\\neq 0$ the SF phase contains vortices which reduce \nthe SF phase coherence. So, we use $q_{\\text{\\tiny{EA}}}$ as the order \nparameter to distinguish BG phase from the MI and SF phases. For consistency \nwe compute $q_{\\text{\\tiny{EA}}}$ both for $\\alpha = 0$ and $\\alpha = 1\/4$ \nemploying SGMF and the results are shown in Fig.~\\ref{eaop-t0}(a)-(d), \nwhere $q_{\\text{\\tiny{EA}}}$ is shown as a function of $\\mu\/U$ and $J\/U$. \nThe general trend is that $q_{\\text{\\tiny{EA}}}$ is \nzero in MI and O$(10^{-3})$ in the SF phase, and O$(10^{-1})$ in BG phase. From \nthe figure, the presence of the BG phase between different MI lobes is \ndiscernible from the finite values of $q_{\\text{\\tiny{EA}}}$ and it is \nconsistent with the phase diagram determined from $\\rho_s$ shown in \nFig.~\\ref{ph-dia-al0}(g)-(j) in Appendix. We can define sharp MI-BG and SF-BG \nboundaries in the phase diagram by defining a threshold value \nof $q_{\\text{\\tiny{EA}}}$ between the Mott lobes, however, this is \nnon-trivial for the patina of BG phase present at the tip of Mott lobes. \nThis is the domain where the MI-SF quantum phase transition is driven by phase \nfluctuations and consequently, the number fluctuation is highly suppressed. As \na result the value of $q_{\\text{\\tiny EA}}$ is negligible and it cannot be \nused to distinguish BG and SF phases \\cite{buonsante_07,bissbort_10}. Thus, to \nidentify the BG domain it is essential to complement the results from \n$q_{\\text{\\tiny EA}}$ with those of other quantities.\n\n For $\\alpha = 1\/4$, the region with finite values of $q_{\\text{\\tiny{EA}}}$ \nincreases significantly. This is discernible from the plot of \n$q_{\\text{\\tiny{EA}}}$ in Fig.~\\ref{eaop-t0}(d). For the case of $D\/U = 0.6$, \nwhen $\\alpha = 1\/4$, the $q_{\\text{\\tiny{EA}}}$ is finite with a value \nof $\\approx 0.2$ upto $J\/U \\approx 0.03$. Whereas, with $\\alpha=0$ as shown in \nFig.~\\ref{eaop-t0}(b), $q_{\\text{\\tiny{EA}}}$ has similar value only \nupto $J\/U \\approx 0.02$. This indicates the enhancement of BG region in the\npresence of the artificial gauge field. Employing CGMF method with \n$2 \\times 2$ cluster, the values of the $q_{\\text{\\tiny{EA}}}$ obtained are \nshown in Fig.~\\ref{eaop-t0}(e)-(h). One important change is that, \n$q_{\\text{\\tiny EA}}$ is no longer zero in the MI phase, but it is of \nO$(10^{-6})$. This is due to the presence of particle-hole \nexcitations in the cluster states. And, the non-zero value of \n$q_{\\text{\\tiny EA}}$ is consistent with the results reported in a previous \nwork \\cite{morrison_08}. The figures show similar trends of artificial \ngauge field induced enhancement of the BG region in the phase diagram. The\nincrease of BG regions with the increase of $D\/U$ is also notable for both \n$\\alpha = 0$ and $\\alpha = 1\/4$. Another observation is that, \n$q_{\\text{\\tiny{EA}}}$ obtained \nfrom the CGMF method contains less fluctuations and thus describes the boundary \nof SF-BG transition better compared to the SGMF method. Increasing the cluster \nsize CGMF can describe the BG-SF boundary more accurately but at the cost of\nmuch higher computational resources. 
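The cost can be made explicit: with occupancies truncated at $N_{\rm b}$, the cluster basis introduced earlier contains $(N_{\rm b}+1)^{MN}$ states, so the matrices to be diagonalized grow exponentially with the cluster size. For an illustrative truncation $N_{\rm b} = 4$:
\begin{verbatim}
for (M, N) in [(1, 1), (2, 2), (2, 4), (4, 4)]:
    print(M, N, (4 + 1) ** (M * N))   # 5, 625, 390625, 152587890625 basis states
\end{verbatim}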
\n\\begin{figure}[H]\n\t\\centering\n \\includegraphics[width=7.5cm]{phd-finite-t}\\\\\n\\vskip 0.1cm\n \\includegraphics[width=8cm]{eaop-al0}\n \\caption{Finite temperature phase diagram using SGMF method\n in absence of artificial gauge\n field for six different temperatures (a) $T = 0.005 U\/k_B$, \n (b)$T = 0.01U\/k_B$, (c) $T = 0.02U\/k_B$, (d) $T = 0.03U\/k_B$,\n (e)$T = 0.05U\/k_B$ and (f) $T = 0.1U\/k_B$ .\n Disorder strength is fixed at $D = 0.6U$ and each data in the plot\n is obtained by averaging over 500 different disorder\n distributions. (g) shows finite temperature effects on \n Edward-Anderson order parameter ( $q_{\\text{\\tiny{EA}}}$) \n with $D\/U = 0.6$ and $J\/U$ being fixed at 0.01. The magnitude \n of $q_{\\text{\\tiny{EA}}}$ gradually decreases with increase of \n temperature.}\n \\label{ph-dia-t}\n\\end{figure}\n\n\n\n\n\n\\subsection{Finite temperature results}\n\\label{ftemp}\n The important outcome of finite temperature is the emergence of a new phase,\nthe NF phase. This new phase, like the SF phase, has real commensurate \nnumber of particles per site. But, unlike SF $\\phi$ is zero. So, the NF phase \nhas some features common to both the MI and SF phases. Previous works reported\nthe appearance of the NF phase at finite temperatures in the case of the \ncanonical Bose-Hubbard model \\cite{gerbier_07}, and extended\nBose-Hubbard model with nearest neighbour interactions \\cite{ng_10, lin_17}.\n\n\n\n\n\\subsubsection{$\\alpha=0$}\n\n The effect of the thermal fluctuations to the $q_{\\text{\\tiny{EA}}}$, in\nabsence of artificial gauge field ( $\\alpha=0$), is shown in \nFig.~\\ref{ph-dia-t}(g). The \nresults presented in the figure correspond to $D\/U=0.6$ and \neach plot is an average over 500 realizations of disorder distributions. With \nincreasing temperature there is a monotonic decrease in $q_{\\text{\\tiny{EA}}}$,\nwhich indicates the {\\em melting} of BG phase. Along with the BG phase the MI \nphase also melts, however, this is not apparent from the values of \n$q_{\\text{\\tiny{EA}}}$. And, the extent of melting can be inferred from the \nphase diagram. To illustrate this point the phase diagram of DBHM at different \ntemperatures are shown in Fig~\\ref{ph-dia-t}(a-f). As mentioned earlier, \nprevious studies have also reported the melting of MI phase due to thermal \nfluctuations \\cite{gerbier_07}. But a clear theoretical description and phase \ndiagram incorporating finite temperature effects are lacking. Our present work \nshows that the BG phase also melts due to thermal fluctuations. Here, the key \npoint is the SF islands, which are hallmark of the BG phase, melts into NF.\nThis arises from \nthe local nature of the SF islands in BG phase, which as a result is affected \nby the local nature of the thermal fluctuations. The bulk SF phase, on the \nother hand, has long range phase correlations and is more robust against \nlocal fluctuations stemming from finite temperatures. 
\n\n\\begin{figure}[H]\n\t\\centering\n \\includegraphics[width=8cm]{phd-ctemp}\n \\caption{Finite temperature phase diagram using CGMF for $2\\times 2$\n cluster in absence of artificial gauge\n field for two different temperatures (a) $T = 0.01 U\/k_B$, \n (b)$T = 0.02U\/k_B$;\n Disorder strength is fixed at $D = 0.6U$ and each data in the plot\n is obtained by averaging over 50 different disorder distributions.\n }\n \\label{cphd-ft}\n\\end{figure}\n\n \n In the plots the region within the black line is MI phase, whereas, the \nregion bounded by the black and green lines is the NF phase, where $\\phi$ is \nclose to zero $\\phi \\leqslant 10^{-6}$. The BG phase lies in the region \nbounded by the green and orange lines, and the area right of the orange line \nis the SF phase. As the temperature is increased, due to the increased thermal \nfluctuations, the phase diagrams undergo several changes. First, the MI \nlobes shrink and at $k_{\\rm B}T\/U = 0.02$, MI lobes disappear from the phase \ndiagram. This is due to the melting of MI phase and conversion into NF phase. \nSo, as discernible from the comparison of Fig.~\\ref{ph-dia-t}(a) and (b), the \nMI lobe with $\\rho=1$ is bounded and lies in the domain \n$0.40\\leqslant \\mu\/U\\leqslant 0.6$ at $k_{\\rm B}T\/U = 0.005$,\nbut it shrinks to 0.47 $\\leqslant \\mu\/U\\leqslant 0.53 $ at $k_{\\rm B}T\/U = 0.01$.\nSecond, the region of the BG phase is reduced with increasing temperature. The \nchange is more prominent in the regions which lie between the MI lobes. For \nexample, at $\\mu=0$ the BG phase exists in the domain \n$0.004\\leqslant J\/U\\leqslant 0.014$ for $k_{\\rm B}T\/U=0.005$. But, it is \nreduced to $0.008\\leqslant J\/U\\leqslant 0.015$ when the temperature is \nincreased to $k_{\\rm B}T\/U = 0.01$. As discernible from Fig. ~\\ref{ph-dia-t}(f)\nat $k_{\\rm B}T\/U = 0.1$ the domain is reduced to \n$0.04\\leqslant J\/U\\leqslant 0.043$. And, third, at finite temperatures the \nMI lobes are bounded from top and bottom by straight lines in the SGMF \nresults. But, as visible from Fig. \\ref{cphd-ft}, the MI boundary is not a \nstraight line with CGMF results. This is on account of the better correlation \neffects in CGMF, in contrast, SGMF tends to support sharp NF-MI boundaries \nas a function of $\\mu\/U$ due to short range coupling through $\\phi$.\n\\begin{figure}[H]\n\t\\centering\n \\includegraphics[width=8cm]{eaop-mu-jp02}\n \\caption{ $q_{\\text{\\tiny{EA}}}$ as a function of $\\mu\/U$ for four\n different values of $\\alpha$ at (a) $T = 0$ (b) $T = 0.03U\/k_B$; \n with fixed disorder strength $D = 0.6U$ and\n hopping strength $J = 0.02U$.\n In each subfigure $q_{\\text{\\tiny{EA}}}$ are calculated for \n $\\alpha = 0, 1\/12, 1\/6$ and $1\/2$\n and averaged over 500 different disorder distributions. \n }\n \\label{eaop-al}\n\\end{figure}\n\n\n Based on the above observations of the phase diagrams at different \ntemperatures, the NF-BG and BG-SF phase boundaries shift toward higher \n$J\/U$ with increasing temperature. This is due to higher hopping energy \nrequired to prevail over thermal fluctuations. So that the SF phase is present \nas islands or homogeneous network in BG and SF phases, respectively. The \nother important point is that, the SF phase does not melt directly to NF phase.\nIn other words, the BG phase advances into the SF phase with ever decreasing \nwidth with increasing temperature. Thus, the BG phase is an intermediate \nphase between the NF and SF phases. 
This is the finite temperature equivalent\nof the zero temperature phase structure, where BG phase is an intermediate\nphase between the MI and SF phases.\n\n To improve the accuracy of the phase diagram by incorporating additional \ncorrelation effects, we compute the phase diagram with CGMF using \n$2\\times 2$ cluster, and the resulting phase diagram is shown in \nFig.~\\ref{cphd-ft}. The results are for the temperatures $k_{\\rm B}T\/U = 0.01$ \nand $0.02$, and for better illustration the phase diagrams of only upto \n$\\mu\/U = 1.0$ are shown in the figure. As to be expected the MI lobes are \nlarger in the CGMF results, but the one important change is that the \nenvelope of BG phase around the MI and NF phases is more pronounced. \nConsequent to the larger MI lobes, the NF and BG phases encompass\nregions with higher $J\/U$ compared with the SGMF results. In particular, \nat $\\mu =0$ the BG phase occurs in the domain \n$0.011\\leqslant J\/U \\leqslant 0.018$ and $0.018\\leqslant J\/U \\leqslant 0.022$ \nfor the $k_{\\rm B}T\/U = 0.01$ and $k_{\\rm B}T\/U = 0.02$, respectively.\n\n\n\n\n\\subsubsection{$\\alpha \\ne 0$}\n\n The thermal fluctuations delocalize the atoms through the entire lattice,\nand melt MI phase. This tends to reduce $\\phi$. Whereas, as mentioned earlier,\nartificial gauge field localizes the atoms, and thereby enhances the\nMI lobes. So, these two have opposing effects on the DBHM, and the combined\neffects of these two physical factors on the $q_{\\text{\\tiny{EA}}}$ are shown \nin Fig.~\\ref{eaop-al}. In the figure 4 the plots of $q_{\\text{\\tiny{EA}}}$ for \n$k_{\\rm B}T\/U =0$ and $0.03$ are shown for different $\\alpha$ as a function \n$\\mu\/U$ at $J\/U=0.02$. From the figures it is apparent that the effect of the\nartificial gauge field is negligible in the region between the $\\rho=0$ and\n$\\rho=1$ Mott lobes. However, in the regions between other Mott lobes there is \nan enhancement of the BG phase as indicated by the increase in \n$q_{\\text{\\tiny{EA}}}$. As discernible from Fig.~\\ref{eaop-al}(a) the \nvalue of $q_{\\text{\\tiny{EA}}}$ increases from $0.13$ to $0.19$ for the \nregion between $\\rho=1$ and $\\rho=2$ corresponding to \n$0.65 \\leqslant \\mu\/U\\leqslant 1.36 $ for non-zero $\\alpha$ at \n$k_{\\rm B}T\/U =0$. From the figure it is also evident that \n$q_{\\text{\\tiny{EA}}}$ gradually increases with the increase of $\\alpha$.\nConsequently, the enhancement of BG phase region in DBHM depends on the\nstrength of artificial gauge field. As a quantitative measure of it, for\n$\\alpha = 0, 1\/12, 1\/6$ and $1\/2$ $q_{\\text{\\tiny{EA}}}$\ntakes the value $0.139$, $0.148$, $0.164$ and $0.187$ respectively around \n$\\mu = U$. To demonstrate the combined effect of finite \ntemperature and artificial gauge field, the phase diagram in terms of \n$q_{\\text{\\tiny{EA}}}$ is shown in Fig. \\ref{eaop-ft}. As the \nfigure is based on $50$ disorder realizations, the general trends of \n$q_{\\text{\\tiny{EA}}}$ observable in Fig.~\\ref{eaop-al} are not apparent. \nHowever, from the figure the enlargement of the BG phase region between the \nMI lobes is discernible. 
Thus, this implies that the enhancement of the BG \nphase in presence of artificial gauge field is stable against thermal \nfluctuations.\n\\begin{figure}\n\t\\centering\n \\includegraphics[width=8cm]{eaop_al1b4-cl}\\\\\n \\caption{ $q_{\\text{\\tiny{EA}}}$ as a function of $\\mu\/U$ and $J\/U$ for\n $\\alpha =1\/4$ for two different values\n of temperature $T = 0.01 U\/k_B$ (a) and $T = 0.03 U\/k_B$. (b) \n Disorder strength is kept fixed at $D= 0.6 U$ and \n $q_{\\text{\\tiny{EA}}}$ are averaged over 50 different disorder \n distributions with CGMF method. }\n \\label{eaop-ft}\n\\end{figure}\n\n\n\n\n\\section{Conclusions}\n\\label{conc}\nAt finite temperatures, the thermal fluctuations lead to melting of the BG \nphase and formation of NF phase. The emergence of the NF phase at finite \ntemperatures necessitates using a combination of order parameters and \nproperties to identify each phase without ambiguity. More importantly, \nthe BG phase is an intermediate phase between the NF and SF phases. \nThis is similar to the zero temperature phase where the BG phase is an \nintermediate phase between the MI and SF phases. At higher temperatures the \nmelting of MI phase is complete and\nonly three phases NF, BG and SF phases exist in the system. The addition of \nartificial gauge field brings about a significant change in the phase diagram \nby enhancing the BG phase domain, which is observed in the trends of \nthe $q_{\\text{\\tiny{EA}}}$ without any ambiguity. This implies that such \nenhancements would be observable in quantum gas microscope experiments. To get \naccurate results with mean field theories it is desirable to use the CGMF \ntheory. It incorporates correlation effects better and the phase diagrams \nobtained from CGMF are quantitatively different from those obtained from SGMF.\n\n\n\n\n\\begin{acknowledgments}\n\nThe results presented in the paper are based on the computations\nusing Vikram-100, the 100TFLOP HPC Cluster at Physical Research Laboratory, \nAhmedabad, India.\n\n\\end{acknowledgments}\n\n\n\n\n\\section*{Appendix}\n\nTo determine the MI-BG phase boundary, we consider number fluctuation \n($\\delta n$) as the property which distinguishes the two phases. In the \nMI phase $\\delta n$ is zero for $D\/U=0$, however, for $D\/U\\neq 0$, it is \nnon-zero but small due to the disorder. We set $\\delta n < 10^{-6}$ as the \ncriterion to identify the MI phase in our computations. On the other hand, to \ndefine the BG-SF boundary, we compute the superfluid stiffness ($\\rho_s$). \nIn BG phase as the SF phase exists as islands the phase coherence is limited\nto these, so the $\\rho_s$ small, and we consider $\\rho_s < 10^{-2}$ as the \nthreshold to distinguish the BG from SF phase. In the SF phase as there is \nphase coherence throughout the system $\\rho_s$ is large and it is $O(1)$. \n\n\\begin{figure}[H]\n~ \\includegraphics[width=7.8cm]{gstate-t0}\\\\\n \\includegraphics[width=7.5cm]{phd-ss}\n \\caption{ Order parameter $\\phi$ of DBHM at zero temperature for \n $J\/U = 0.01$ and $D\/U$ keeping fixed at 0.6 (a)-(c) without and \n (d)-(f) \n with ($\\alpha=1\/4$) artificial gauge field. (a) \\& (d) MI phase \n with $\\mu\/U = 0.5$; (b) \\& (e) BG phase with $\\mu\/U = 0.1$; \n and (c) \\& (f)SF phase with $\\mu\/U = 1.0$. 
\n (g)-(j) equilibrium phase diagram of DBHM using SGMF method at \n zero temperature in absence of artificial gauge field\n ($\\alpha=0$) for disorder strengths $D\/U=0$, $0.2$, $0.6$ and\n $1.2$, respectively for 500 different disorder realizations.\n }\n \\label{ph-dia-al0}\n\\end{figure}\n\n The phase diagrams of DBHM with $\\alpha=0$ at different values of $D\/U$ have \ndistinctive features \\cite{lin_12}. As examples, the phase \ndiagrams for the case of $D\/U = 0$, $0.2$, $0.6$ and $1.2$ obtained from the\nSGMF method are shown in Fig.~\\ref{ph-dia-al0}(g)-(j). With $D\/U=0$, the phase \ndiagram as shown in Fig.~\\ref{ph-dia-al0}(g) consists of only two phases MI \nand SF. With non-zero $D\/U$ BG appears in the phase diagram, and as shown in \nFig.~\\ref{ph-dia-al0}(h) for $D = 0.2$ the domain of the MI phase shrinks and an\nenvelope of BG phase emerges around the MI lobes. From \nFig.~\\ref{ph-dia-al0}(h), it is clear that the BG phase is most prominent in\nbetween the MI lobes. These are the domains with large density \nfluctuations and small disorder is sufficient to render the bosons itinerant \nto create islands of SF phase. This, then, leads to the formation of BG phase. \nWhen the $D\/U$ is increased to $0.6$, as shown in Fig.~\\ref{ph-dia-al0}(i), \nthe MI lobes shrink further and the area of the BG phase is enlarged. At \nsufficiently high disorder strength, $D = 1.2U$, the MI phase disappears and \nphase diagram Fig.~\\ref{ph-dia-al0}(j) is composed of only SF and BG phases. \n \n\nThere is an improvement in the phase diagram, which is apparent from the \nenlarged MI lobes, when the phase diagram is computed using CGMF. In particular,\nwe consider $2\\times 2$ cluster and the phase diagrams so obtained are\nshown in Fig.~\\ref{cph-dia-al0}. The overall structure of the phase diagram\nis qualitatively similar to the SGMF case. However, there are few quantitative\nchanges. For comparison, consider the case of $D\/U = 0.6$, based on our \nresults and as visible in Fig.~\\ref{ph-dia-al0}(i) and \nFig.~\\ref{cph-dia-al0}(b), there are three important difference due to \nbetter correlation effects encapsulated in the CGMF method. First, the tip of \nthe Mott lobe $\\rho=1$ extends upto $0.035$ while it was $0.032$ with SGMF. \nSecond, at $\\mu\/U \\simeq 0$, the SF-BG transition occurs at $J\/U\\approx 0.022$, \nwhich in the case of SGMF is at $ J\/U\\approx 0.014$. This is due to the \nassociation of BG phase with islands of SF phase, and CGMF captures the phase \ncorrelations in these islands better. The SGMF, on the other hand, tends to \nfavour long range phase correlations through the $\\phi$ coupling between the \nlattices sites. And, finally, around the tip of the Mott lobes, the area of BG \nphase increases in CGMF method. 
\n\\begin{figure}[H]\n \\includegraphics[width=8cm]{c-ph-dia}\n \\caption{Equilibrium phase diagram of DBHM using CGMF method with cluster\n size 2$\\times$2 at zero temperature in absence of artificial\n gauge field for disorder strength (a) $D\/U = 0.2$, and \n (b) $D\/U = 0.6$ for 50 different disorder realizations.}\n \\label{cph-dia-al0}\n\\end{figure}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\nSoftware-Defined Networking (SDN) decouples the network control plane from the data plane via a well-defined programming interface (such as OpenFlow~\\cite{mckeown2008}).\nThis decoupling allows the control logic to be logically centralized, easing the implementation of network policies, enabling advanced forms of traffic engineering (e.g., \\\\Google's B4~\\cite{jain2013}), and facilitating innovation (network virtualization~\\cite{koponen2014} being a prominent example).\n\nThe controllers are the crucial enabler of the SDN paradigm: they maintain the logically centralized network state to be used by applications and act as a common intermediary with the data plane.\nFigure \\ref{fig:intro_sdn} shows the normal execution in an SDN environment.\nUpon receiving a packet it does not know how to handle, the switch sends an \\emph{event} to the controller.\nThe controller delivers the event to the applications, which afterwards apply their logic based on this event, and eventually instruct the controller to send \\emph{commands} to the switches (e.g., to install flow rules).\n\n\\begin{figure}[h!]\n \\centering\n\t\\includegraphics[width=0.45\\textwidth]{figures\/intro_sdn.png}\n \\caption[SDN flow execution]{SDN flow execution. Switches send events to the controller as needed and the controller replies with one or more commands that modify the switch's tables.}\n \\label{fig:intro_sdn}\n\\end{figure}\n\nA trivial implementation of SDN using a centralized controller would lead to an undesirable outcome: a single point of failure.\nTo guarantee the required availability of network control, it is necessary the controller platform to be made fault-tolerant.\nFault tolerance demands transparency: for a controller that claims having such ability -- in other words, for it to be logically centralised -- applications that run on it should operate correctly in the presence of faults.\nThis is a fundamental requirement, as in case of controller failures the network view needs to be maintained consistent, otherwise applications will operate in a stale network view, leading to network anomalies that can have undesirable consequences (e.g., security breaches)~\\cite{levin2012, katta2015}.\n\nTo address this problem, traditional replication techniques are usually employed~\\cite{oki1988, lamport1998, ongaro2014}.\nHowever, building a consistent network view in the controllers is not enough to offer consistent logically centralized control that prevents the above-mentioned anomalies.\nIn SDN, it is necessary to include switch state into the system model to achieve this goal~\\cite{katta2015}.\nSince switches are programmed by controllers (and controllers can fail), there must be mechanisms to ensure that the entire event-processing cycle of SDN is handled consistently.\n\nA correct, fault-tolerant SDN environment needs to ensure \\emph{observational indistinguishability}~\\cite{katta2015} between an ideal central controller and a replicated controller platform.\nInformally, to ensure observational indistinguishability the fault-tolerant system 
should behave the same way as a fault-free SDN for its users (end-hosts and network applications).\nFor this purpose, the following properties must be met:\n\n\\begin{itemize}\n\\item \\textbf{Total Event Ordering:} Controller replicas should process events in the same order and subsequently all controller application instances should reach the same internal state.\n\\item \\textbf{Exactly-Once Event Processing:} All events are processed, and are neither lost nor processed repeatedly.\n\\item \\textbf{Exactly-Once Execution of Commands:} Any given series of commands is executed once, and only once, on the switches.\n\\end{itemize}\n\nTo the best of our knowledge, the problem of correct, fault-tolerant SDN control has only been addressed in the work by Katta~\\emph{et al.}~\\cite{katta2015}.\nInstead of just keeping the controller state consistent, the authors propose Ravana, a fault-tolerant SDN controller platform that handles the entire event-processing cycle as a transaction -- either all or none of the components of this transaction are executed.\nThis enables Ravana to correctly handle switch state and thus guarantee SDN correctness even in the presence of faults.\n\nTo achieve these properties, however, Ravana requires modifications to the OpenFlow protocol and to existing switches.\nSpecifically, switches need to maintain two buffers, one for events and one for commands, and four new messages need to be added to the protocol.\nThese modifications preclude the adoption of Ravana on existing systems and hinder the possibility of it being used in the near future (as there are no guarantees that these messages will be added to OpenFlow anytime soon, for instance).\n\nFaced with this challenge, we propose Rama, a fault-tolerant SDN controller platform that, similar to Ravana, offers a transparent control plane that allows unmodified network applications to run in a consistent and fault-tolerant environment.\nThe novelty of the solution lies in Rama not requiring changes to OpenFlow or to the underlying hardware, allowing immediate deployment.\nFor this purpose, Rama exploits existing mechanisms in OpenFlow and orchestrates them to achieve its goals.\n\nThe main contributions of this work can be summarized as follows:\n\n\\begin{itemize}\n\\item A protocol for fault-tolerant SDN that provides the correctness guarantees of a logically centralised controller \\emph{without} requiring changes to OpenFlow or modifications to switches.\n\\item The implementation and evaluation of a prototype controller -- Rama -- that demonstrates the overhead of the solution to be modest.\n\\end{itemize}\n\n\\section{Fault tolerance in SDN}\n\\label{motivation}\n\nKatta~\\emph{et al.} have experimentally shown~\\cite{katta2015} that traditional techniques for replicating controllers do not ensure correct network behaviour in case of failures.\nThe reason is that these techniques address only part of the problem: maintaining consistent state in controller replicas.\nBy not considering switch state (and the controller-switch interaction) inconsistencies may arise, resulting in potentially severe network anomalies.\nIn this section we present a summary of the problems of using techniques that do not incorporate switches in the system model, which lead to the design requirements of a \\emph{correct} fault-tolerant SDN solution.
\nWe also present Ravana~\\cite{katta2015}, the first fault-tolerant controller that achieves the required correctness guarantees for SDN.\n\n\\subsection{Inconsistent event ordering}\n\nSince OpenFlow 1.3, switches can maintain TCP connections with multiple controllers.\nIn a fault-tolerant configuration switches can be set to send all their events to all known controller replicas.\nAs replicas process events as they are received, each one may end up building a different internal state.\nAlthough TCP guarantees the order of events delivered by each switch, there are no ordering guarantees between events sent to controllers by the different switches, leading to the problem.\n\nConsider a simple scenario with two controller replicas (c1 and c2) and two switches (s1 and s2) that send all events to both controllers.\nSwitch s1 sends two events -- e1 and e2, in this order -- and switch s2 sends two other events -- e3 and e4, in this order.\nOne possible outcome where both controllers receive events in a different order while respecting the TCP FIFO property is c1 receiving events in the order e1, e3, e2, e4 and c2 receiving in the order e3, e4, e1, e2.\nUnfortunately, an inconsistent ordering of events can lead to incorrect packet-processing decisions~\\cite{katta2015}.\nAs a result of this consistency problem we derive the first design goal for a fault-tolerant and correct SDN controller:\n\n\\textbf{Total event ordering:} controllers replicas should process the same (total) order of events and subsequently all controller application instances should reach the same internal state.\n\n\\subsection{Unreliable event delivery}\n\nIn order to achieve a total ordering of events between controller replicas two approaches can be used:\n\n\\begin{enumerate}\n\\item The master (primary) replica can store controller state (including state from network applications) in an external consistent data-store (as in Onix \\cite{koponen2010}, ONOS \\cite{berde2014}, and SMaRtLight \\cite{botelho2014});\n\\item The controller state can be kept consistent using replicated state machine protocols.\n\\end{enumerate}\n\nAlthough both approaches ensure a consistent ordering of events between controller replicas, they are not fault-tolerant in the standard case where only the master controller receives all events.\n\nIf we consider \u2013- for the first approach -- that the master replica can fail between receiving an event and finishing persisting the controller state in the external data-store (which happens after processing the event through controller applications), that event will be lost and the new master (i.e., one of the other controller replicas) will never receive it.\nThe same can happen in the second approach: the master replica can fail right after receiving the event and before replicating it in the shared log (which in this case happens before processing the event through the controller applications).\nIn these cases, since only the crashed master received the event, the other controller replicas will not have an updated view of the network.\nAgain, this may cause severe network problems~\\cite{katta2015}. 
\nSimilar problems can occur in case of repetition of events.\nThese problems lead to the second design goal:\n\n\\textbf{Exactly-once event processing:} All the events sent by switches are processed, and are neither lost nor processed repeatedly.\n\n\\subsection{Repetition of commands}\n\nIn either traditional state machine replication or consistent storage approaches, if the master controller fails while sending a series of commands, the new elected master may send repeated commands.\nThis may happen when the old master fails before informing the slave replica of its progress.\nSince some commands are not idempotent~\\cite{katta2015}, its duplication can lead to undesirable network behaviour. \nThis problem leads to the third and final design goal:\n\n\\textbf{Exactly-once command execution:} any series of commands are executed only once on the switches.\n\n\\subsection{Ravana}\n\nRavana~\\cite{katta2015} is the first controller to provide correct fault-tolerant SDN control.\nTo achieve this, Ravana processes control messages transactionally and exactly once (at both the controllers and the switches) using a replicated state machine approach, but without involving the switches in an expensive consensus protocol.\n\nThe protocol used by Ravana is show in Figure \\ref{fig:ravana-protocol}.\nSwitches buffer events (as they may need to be retransmitted) and send them to the master controller that will replicate them in a shared log with the slaves.\nThe controller will then reply back to the switch acknowledging the reception of the events.\nThen, events are delivered to applications that may after processing require one or more commands to be sent to switches.\nSwitches reply back to acknowledge the reception of these commands and buffer them to filter possible duplicates.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/ravana-protocol.jpg}\n \\caption[Ravana Protocol]{Ravana protocol. In Ravana switches maintain two buffers (displayed on the left) to re-transmit events and filter repeated commands in case of master failure. New acknowledge messages (\\texttt{ack\\_event} and \\texttt{ack\\_cmd}) are exchanged between the switch and the master to guarantee the consistency requirements.}\n \\label{fig:ravana-protocol}\n\\end{figure}\n\nWhile Ravana allows unmodified applications to run in a fault-tolerant environment, it requires modifications to the OpenFlow protocol and to switch hardware.\nNamely, Ravana leverages on buffers implemented on switches to retransmit events and filter possible repeated commands received from the controllers.\nAlso, explicit acknowledgement messages must be added to the OpenFlow protocol so that the switch and the controller acknowledge received messages.\nUnfortunately, these requirements preclude immediate adoption of Ravana.\nFor instance, it is not antecipated OpenFlow to be extended to include the required messages anytime soon.\nThese limitations are the main motivation for our proposal, which we present next.\n\n\\section{Rama design}\n\\label{design}\n\nThe goal of our work is to build a strongly consistent and fault-tolerant control plane for SDN to be used transparently by unmodified applications.\nThis section describes the architecture and protocol for such control plane, which is driven by the following four requirements. First, reliability: the system should maintain a correct and consistent state even in the presence of failures (in both the controllers and switches). 
Second, transparency: the consistency and fault-tolerance properties should be completely transparent to applications. Third, performance: the performance of the system should not degrade as the number of network elements (events and switches) grows. Fourth, immediate deployability: the solution should work with existing switches and not require new additions to the OpenFlow protocol.\n\n\\subsection{Architecture}\n\nThe high-level architecture of our system, Rama\\footnote{In the Hindu epic Ramayana, Rama is the hero whose wife (Sita) is abducted by Ravana.}, is depicted in Figure \\ref{fig:arquitectura_alto_nivel}.\nIts main components are: (i) OpenFlow enabled switches (switches that are implemented according to the OpenFlow specification), (ii) controllers that manage the switches and (iii) a coordination service.\nIn our model, we consider only one network domain with one primary controller and one or more backup controllers, depending on the number of faults to tolerate.\nEach switch connects to one primary controller and multiple (\\textit{f} to be precise) backup controllers (to tolerate up to \\emph{f} crash controller faults).\nThis primary\/backup model is supported by OpenFlow in the form of master\/slave and allows the system to tolerate controller faults.\nWhen the master controller fails, the remaining controllers will elect a new leader to act as the new master for the switches managed by the crashed master.\nThis election is supported by the coordination service.\n\n\\begin{figure}[t]\n \\includegraphics[width=0.2\\textwidth]{figures\/architecture.png}\n \\centering\n \\caption[High level architecture of the system]{High level architecture of the system.}\n \\label{fig:arquitectura_alto_nivel}\n\\end{figure}\n\nThe coordination service offers strong consistency and abstracts controllers from complex primitives like fault detection and total order, making them simpler and more robust.\nNote that the coordination system requires a number of replicas equal to \\textit{2f+1}, with \\textit{f} being the number of faults to tolerate.\nThe strong consistency model assures that updates to the coordination service made by the master will only return when they are persistently stored.\nThis means that slaves will always have the fresh modifications available as soon as the master receives confirmation of the update.\nThis results in a consistent network view among all controllers even if some fail.\nThe need for agreement between several replicas make the coordination service the system bottleneck~\\cite{hunt2010}.\nIn addition to the controllers' state, the switches also maintain state that must be handled consistently in the presence of faults.\nFulfilling this request is the main goal of the protocol we present next.\n\n\\subsection{Rama protocol}\n\\label{sec:ft_protocol}\n\nIn an SDN setting, switches generate events (e.g., when they receive packets or when the status of a port changes) that are forwarded to controllers.\nThe controllers run multiple applications that process the received events and may send commands to one or more switches in reply to each event.\nThis cycle repeats itself in multiple switches across the network as needed.\n\nIn order to maintain a correct system in the presence of faults, one must handle the state in the controllers and the state in the switches consistently.\nTo ensure this, the entire cycle presented in Figure \\ref{fig:control_loop} is processed as a \\emph{transaction}: either all or none of the components of this transaction are 
executed.\nThis means that\n\\begin{inlinelist}\n\\item the events are processed exactly once at the controllers,\n\\item all controllers process events in the same (total) order to reach the same state, and\n\\item the commands are processed exactly once in the switches.\n\\end{inlinelist}\nBecause the standard operation in OpenFlow switches is to simply process commands as they are received, the controllers must coordinate to guarantee the required exactly-once semantics.\nRavana~\\cite{katta2015} does not need this coordination because the (modified) switches can simply buffer the commands received and discard repeated commands (i.e., those with the same identifier) sent by the new controller.\n\n\\begin{figure}[h]\n \\includegraphics[width=0.45\\textwidth]{figures\/control_loop.png}\n \\centering\n \\caption[Control loop]{Control loop of (1) event delivery, (2) event ordering, (3) event processing, and (4) command execution. Events are delivered to the master controller, which decides a total order on the received events. The events are processed by applications in the same order in all controllers. Applications issue commands to be executed in the switches.}\n \\label{fig:control_loop}\n\\end{figure}\n\nBy default, in OpenFlow a master controller receives all asynchronous messages (e.g., \\texttt{OFPT{\\_}PACKET{\\_}IN}), whe\\-re\\-as the slaves controllers only receive a subset (e.g., port modifications).\nWith this configuration only the master controller would receive the events generated by switches.\nThere are two options to solve this problem.\nOne is for slaves to change this behaviour by sending an \\texttt{OFPT{\\_}SET{\\_}ASYNC} message to each switch that modifies the asynchronous configuration.\nAs a result, switches send all required events to the slaves.\nAlternatively, all controllers can set their role to \\texttt{EQUAL}.\nThe OpenFlow protocol specifies that switches should send all events to every controller with this role.\nThen, controllers need to coordinate between themselves who the master is (i.e., the one that processes and sends commands).\nWe have opted for the second solution and use the coordination service for leader election amongst controllers.\n\nThe fault-free execution of the protocol is represented in Figure \\ref{fig:rama-protocol}.\nIn the figure we consider a switch to be connected with one master controller and a single slave controller.\nThe main idea is that switches must send messages to \\textit{all controllers}, so that they can coordinate themselves even if some fail at any given point.\nIn Ravana, because switches simply buffer events (so that they can be retransmitted to a new master if needed), switches can send events only to the current master, instead of to every controller.\n\n\\begin{figure}[ht!]\n \\includegraphics[width=0.45\\textwidth]{figures\/rama-protocol.png}\n \\centering\n \\caption[Fault-free case of the protocol]{Fault-free case of the protocol. Switches send generated events to all controllers so that no event is lost. The master controller replicates the event in the shared log and then feeds its applications with the events in log order. Commands sent are buffered by the switch until the controller sends a Commit Request. 
The corresponding Commit Reply message is forwarded to all controllers.}\n \\label{fig:rama-protocol}\n\\end{figure}\n\nThe master controller then replicates the event in a shared log with the other controllers, imposing a total order on the events received (to simplify, the coordination service is omitted from the figure).\nWhen the event is replicated to the shared log controllers, it is processed by the master controller applications, which will generate zero or more commands.\nTo guarantee exactly-once semantics, the commands are sent to the switches in bundles (a feature introduced in OpenFlow 1.4, see Figure \\ref{fig:openflow-bundles}).\nWith this feature a controller can open a bundle, add multiple commands to it and then instruct the switch to commit all commands present in the bundle in an atomic and ordered fashion.\n\n\\begin{figure}[ht!]\n \\includegraphics[width=0.45\\textwidth]{figures\/openflow-bundles.png}\n \\centering\n \\caption[OpenFlow Bundles]{OpenFlow Bundles.}\n \\label{fig:openflow-bundles}\n\\end{figure}\n\nRama uses bundles in the following way.\nWhen an event is processed by all modules, the required commands are added by the master controller to a bundle.\nThe master then sends an \\texttt{OFPBCT\\_COMMIT\\_REQUEST} message to each switch affected by the event.\nThe switch processes the request and tries to apply all the commands in the bundle in order.\nAfterwards, it then sends a reply message indicating if the Commit Request was successful or not.\nThis message is used by Rama as an acknowledgement.\n\nAgain, we need to make sure that this reply message is sent to all controllers.\nThis is a challenge, because Bundle Replies are Controller-to-Switch messages and hence are only sent to the controller that made the request (using the same TCP connection).\nTo overcome this challenge we introduce a new mechanism in Rama.\nThe way we inform other controllers if the bundle was committed or not (so that they can decide later if they need to resend specific commands) is by including one \\texttt{OFPT{\\_}PACKET{\\_}OUT} message in the end of the bundle with the action \\texttt{output=controller}.\nThe outcome is that the switch will send the information included in the \\texttt{OFPT{\\_}PACKET{\\_}OUT} message to all connected controllers in a \\texttt{OFPT{\\_}PACKET{\\_}IN} message.\nThis message is set by the master controller to inform slave controllers about the events that were fully processed by the switch (in this bundle).\nThis prevents a new master from sending repeated commands, thus guaranteeing exactly-once semantics.\nRavana does not need to rely on bundles since switches buffer all received commands so that they can discard possible duplicates from a new master.\n\nThe master finishes the transaction by replicating an \\texttt{event} \\texttt{processed} \\texttt{message} in the log, informing backup controllers that they can safely feed the corresponding event in the log to their applications.\nThis is done to simply bring the slaves to the same updated state as the master controller (the resulting commands sent by the applications are naturally discarded).\n\n\\subsubsection{Fault cases}\n\\label{subsec:fault-cases}\n\nWhen the master controller fails, the backup controllers will detect the failure (by timeout) and run a leader election algorithm to elect a new master for the switches.\nUpon election, the new master must send a Role Request message to each switch, to register as the new master.\nThere are three main cases where the master 
controller can fail:\n\n\\begin{enumerate}\n\\item Before replicating the received event in the distributed log (Figure \\ref{fig:rama-protocol-fault-case-1});\n\\item After replicating the event but before sending the Commit Request (Figure \\ref{fig:rama-protocol-fault-case-2});\n\\item After sending the Commit Request message.\n\\end{enumerate}\n\n\\begin{figure}[h]\n \\includegraphics[width=0.45\\textwidth]{figures\/rama-protocol-fault-case-1.png}\n \\centering\n \\caption[Failure case 1 of the protocol]{Case of the protocol where the master fails before replicating the event received. Because the slaves buffer all events, the event is not lost and the new master can resume the execution of the failed controller.}\n \\label{fig:rama-protocol-fault-case-1}\n\\end{figure}\n\nIn the first case, the master failed to replicate the received events to the shared log.\nAs slave controllers receive and buffer all events, no events are lost.\nFirst, the new master must finish processing any events logged by the older master.\nNote that events marked as processed have their resulting commands filtered.\nThis makes the new master reach the same internal state as the previous one before choosing the new order of events to append to the log (this is valid for all other fault cases).\nThe new elected master then appends the buffered events in order to the shared log and continues operation (feeding the new events to applications and sending commands to switches).\n\nIn the cases where the event was replicated in the log (cases 2 and 3), the master that crashed may or may not have issued the Commit Request message.\nTherefore, the new master must carefully verify if the switch has processed everything it has received before re-sending the commands and the Commit Request message.\nTo guarantee ordering, OpenFlow provides a Barrier message, to which a switch can only reply after processing everything it has received before.\nIf a new master receives a Barrier Reply message without receiving a Commit Reply message (in form of \\texttt{OFPT{\\_}PACKET{\\_}OUT}), it can safely assume that the switch did not receive nor execute a Commit Request for that event from the old master (case 2)\\footnote{This relies on the FIFO properties of the controller-switch TCP connection.}.\nEven if the old master sent all commands but did not send the Commit Request message, the bundle will never be committed and will eventually be discarded.\nTherefore, the new master can safely resend the commands.\nIn case 3, since the old master sent the Commit Request before crashing, the new master will receive the confirmation that the switch processed the respective commands for that event and will not resend them (guaranteeing exactly-once semantics for commands).\n\n\\begin{figure}[ht]\n \\includegraphics[width=0.45\\textwidth]{figures\/rama-protocol-fault-case-2.png}\n \\centering\n \\caption[Failure case 2 of the protocol]{Case of the protocol where the master fails after replicating the event. The first part of the protocol is identical to the fault-free case and is omitted from the figure. 
In this case, the crashed master may have already sent some commands or even the Commit Request to the switch.}\n \\label{fig:rama-protocol-fault-case-2}\n\\end{figure}\n\n\\renewcommand{\\arraystretch}{1.7}\n\\newcolumntype{A}{ >{\\centering\\arraybackslash} m{.28\\linewidth} }\n\\newcolumntype{B}{ >{\\centering\\arraybackslash} m{.32\\linewidth} }\n\\newcolumntype{C}{ >{\\centering\\arraybackslash} m{.32\\linewidth} }\n\\begin{table*}[t!]\n\\begin{tabular}{ABC}\n\\hline\n\\vspace{-4px}\\textbf{Property} & \\vspace{-4px}\\textbf{Ravana} & \\vspace{-4px}\\textbf{Rama}\\\\\n\\hline\n\\textit{At least once events} & Buffering and retransmission of switch events & Switches send events to every controller with role EQUAL\\\\\n\\hline\n\\textit{At most once events} & \\multicolumn{2}{c}{Event IDs and filtering in the log}\\\\\n\\hline\n\\textit{Total event order} & \\multicolumn{2}{c}{Master appends events to a shared log}\\\\\n\\hline\n\\textit{At least once commands} & RPC acknowledgments from switches & \\multirow{2}{0.3\\textwidth}[-0.1cm]{\\centering{Bundle commit is known by every controller by piggybacking PacketOut in OpenFlow Bundle}}\\\\\n\\cline{1-2}\n\\textit{At most once commands} & Command IDs and filtering at switches & \\\\\n\\hline\n\\end{tabular}\n\\caption{How Rama and Ravana achieve the same consistency properties using different mechanisms}\n\\label{tab:rama-properties}\n\\end{table*}\n\n\n\\section{Correctness}\n\nThe Rama protocol we propose in this paper was designed to guarantee correctness of fault-tolerant SDN control.\nWe define correctness as in~\\cite{katta2015}, where the authors introduce the concept of observational indistinguishability in the SDN context, defined as follows:\n\n\\emph{Observational indistinguishability:} If the trace of observations made by users in the fault-tolerant system is a possible trace in the fault-free system, then the fault-tolerant system is observationally indistinguishable from a fault-free system.\n\nFor observational observability, it is necessary to guarantee transactional semantics to the entire control loop, including (i) exactly-once event delivery, (ii) event ordering and processing, and (iii) exactly-once command execution.\nIn this section we summarize how the mechanisms employed by our protocol fulfil each of these necessary requirements.\nFor a brief comparison with Ravana, see Table \\ref{tab:rama-properties}.\n\n\\textbf{Exactly once event processing:} events cannot be lost (processed \\textit{at least once}) due to controller faults nor can they be processed repeatedly (they must be processed \\textit{at most once}).\nContrary to Ravana, Rama does not need switches to buffer events neither that controllers acknowledge each received event to achieve \\textit{at-least once event processing} semantics.\nInstead, Rama relies on switches sending the generated events to \\textit{all (f+1)} controllers (considering that the system tolerates up to \\emph{f} crash faults) so that at least one will known about the event.\nUpon receiving these events, the master replicates them in the shared log while the slaves add the events to a buffer.\nAs such, in case the master fails before replicating the events, the new elected master can append the buffered events to the log.\nIf the master fails after replicating the events, the slaves will filter the events in the buffer to avoid duplicate events in the log.\nThis ensures \\textit{at-most once event processing} since the new master only processes each event in the log 
once.\nTogether, sending events to all controllers and filtering buffered events ensures \\textit{exactly-once event processing}.\n\n\\textbf{Total event ordering:} to guarantee that all controller replicas reach the same internal state, they must process any sequence of events in the same order.\nFor this, both Rama and Ravana rely on a shared log across the controller replicas (implemented using the external coordination service) which allows the master to dictate the order of events to be followed by all replicas.\nEven if the master fails, the new elected master always preserves the order of events in the log and can only append new events to it.\n\n\\textbf{Exactly once command execution:} for any given event received from one switch, the resulting series of commands sent by the controller are processed by the affected switches exactly \\textit{once}.\nHere, Ravana relies on switches acknowledging and buffering the commands received from controllers (to filter duplicates).\nAs this requires changes to the OpenFlow protocol and to switches, Rama relies on OpenFlow Bundles to guarantee transactional processing of commands.\nAdditionally, the Commit Reply message, which is triggered after the bundle finishes, is sent to \\textit{all} controllers and thus acts as an acknowledgement that is independent of controller faults. If the master fails, the new master needs to know if it should resend the commands for the logged events or not.\nA Packet Out message at the end of the bundle acts as a Commit Reply message to the slave controllers.\nThis way, upon becoming the new master, the controller replica has the required information to know if the switch processed the commands inside the bundle or not, without relying on the crashed master.\nFurthermore, the new master sends a Barrier Request message to the switch.\nReceiving the corresponding Barrier Reply message guarantees that neither the switch nor the link are slow (because a message was received and TCP maintains FIFO order) and thus there is no possibility of the Packet Out being delayed.\nTherefore, the use of Bundles that include a Packet Out at the end, in addition to the Barrier message ensures that commands will be processed by the switches \\textit{exactly-once}.\n\nIt is important to note that we also consider the case where switches fail.\nHowever, this is not a special case of the protocol because it is already treated by the OpenFlow protocol under normal operation.\nA switch failure will generate an event in the controller which will be delivered to applications, for them to act accordingly (e.g., re-route traffic around the failed switch).\nA particularly relevant case is when a switch fails before sending the Commit Reply to the master and the slave controllers.\nImportantly, this event does not result in transaction failure.\nSince this is a normal event in SDN, the controller replicas simply mark pending events for the failed switch as processed and continue operation.\n\nWhile we detail our reasoning as to why our protocol meets the correctness requirements of observational indistinguishability in SDN, modelling the Rama protocol and giving a formal proof is left as future work and out of the scope of this paper.\n\n\\section{Implementation}\n\\label{implementation}\n\nWe have built Rama on top of Floodlight~\\cite{floodlight}.\nFor coordination, we opted for ZooKeeper~\\cite{hunt2010}.\nThis service abstracts controllers from fault detection, leader election, and event transmission and storage (for controller 
recovery).\nRama introduces two main modules into Floodlight: the \\textit{Event Replication} module (Section \\ref{sec:event-replication}) and the \\textit{Bundle Manager} module (Section \\ref{sec:bundle-manager}).\nAdditionally, the Floodlight architecture was optimised for performance by introducing parallel network event collection and logging (Rama's multi-thread architecture is shown in Figure \\ref{fig:rama-threads}) and by batching events (Section \\ref{sec:event-batching}).\nThe multi-thread parallelism is introduced carefully, so as not to break the TCP FIFO order of event processing, as will be explained next.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/rama-architecture.png}\n \\caption{Rama thread architecture}\n \\label{fig:rama-threads}\n\\end{figure}\n\n\nIn the original Floodlight, worker threads are used to collect network events and to process the module pipeline (in Floodlight network applications are called ``modules'').\nThis design precludes event batching and other optimisations.\nIdeally, we want to free the threads that collect network events as soon as possible so that they can keep collecting more events.\nFor this purpose, the worker threads' only job in Rama is to push events to the Replication Queue.\nEvents from a particular switch are always collected by the same thread (although each thread can be shared by several switches) and thus TCP FIFO order is guaranteed in the Replication Queue.\nNext, the Rama runtime imposes a total order on the events by giving them a monotonically increasing ID.\nAs such, several Replication threads can then take events from this queue and execute the logic in the Event Replication module, which will send the events to ZooKeeper in batches, without breaking the required total order for correctness.\nThis technique is equivalent to Ravana's parallel event logging~\\cite{katta2015}. 
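\n\nThe following simplified Python sketch captures the essence of this pipeline; the queue, lock and counter below are stand-ins for the corresponding Java structures in Floodlight, and the calls are made sequentially only for illustration:\n\\begin{verbatim}\nimport itertools, queue, threading\n\nreplication_queue = queue.Queue()   # stand-in for Rama's Replication Queue\nnext_id = itertools.count()         # monotonically increasing event IDs\nid_lock = threading.Lock()\n\ndef worker(switch, raw_events):\n    # each switch is always served by the same worker thread, so the\n    # per-switch TCP FIFO order is preserved when events are enqueued\n    for ev in raw_events:\n        with id_lock:\n            eid = next(next_id)     # total order imposed by the runtime\n        replication_queue.put((eid, switch, ev))\n\nworker("s1", ["e1", "e2"])          # in Rama these run on worker threads\nworker("s2", ["e3", "e4"])\n\nbatch = []\nwhile not replication_queue.empty():\n    batch.append(replication_queue.get())\nbatch.sort()   # the assigned ID, not the arrival time, fixes the order\nprint(batch)   # batches like this one are shipped to ZooKeeper\n\\end{verbatim}\nSince the order eventually delivered to applications is the ID order, several replication threads can log events concurrently without breaking the total order required for correctness.\n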
\nWhen ZooKeeper replies to the request, the events are added to the Pipeline Queue to be processed by the Floodlight modules.\nA single thread is used in this step, to guarantee the total order.\nThe slave replicas also follow the total order from the IDs assigned by the master.\n\nOne of our requirements was to make the control plane transparent for applications to execute unmodified.\nThe Event Replication module is transparent to other modules as it acts before the pipeline.\nThe modules will continue to receive events as usual in Floodlight and process them by changing their internal structures and sending commands to switches.\nThe process of sending messages inside OpenFlow Bundles as required by Rama is also made completely transparent to Floodlight modules, as will be explained in Section \\ref{sec:bundle-manager}.\n\n\\subsection{Event Replication and ZK Manager}\n\\label{sec:event-replication}\n\nThe Event Replication module is the bridge between receiving events from the worker threads and pushing them into the pipeline queue to be processed by Floodlight modules.\nEvents are only added to the pipeline queue after being stored in ZooKeeper.\nTo separate tasks, Event Replication leverages on the ZK Manager, an auxiliary class that acts as ZooKeeper client (establishing connection, making requests and processing replies) and keeps state regarding the events (an event log and an event buffer in case of slaves) and switch leadership.\nEvent Replication and the ZK Manager work together to attain exactly-once event delivery and total order as follows.\n\nWhen an event arrives at the Event Replication module, we check whether the controller is in master or slave mode.\nIn master mode the event is replicated in ZooKeeper and added to its in-memory log.\nThis log is a collection of \\texttt{RamaEvent} objects which, apart from the switch information and message content, contains the unique event identifier explained before.\nThe events are replicated in ZooKeeper in batches (see Section \\ref{sec:event-batching}), so each replication thread simply adds an event to the current batch and becomes free to process a new event.\nEventually the batch will be sent to ZooKeeper containing one or more events to be stored.\nUpon receiving the reply, the events are pushed to the pipeline queue, ordered according to the identifier given by the master to guarantee total order.\n\nIn slave mode, the event is simply buffered in memory (to be used in the case where the master controller fails).\nA special case is when the event received is the Packet Out that the master controller included in the bundle.\nIn this case, the slave marks that this switch already processed all commands for this event.\nSlaves also keep an event log as the master, but only events that come from the master are added to it.\nEvents from the master arrive via \\textit{watches} set in ZooKeeper nodes.\nSlaves set watches and are notified when the master creates event nodes under that node. 
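\n\nThe slave-side bookkeeping described in this subsection can be summarized by the following illustrative sketch (a simplified stand-in for the ZK Manager state, not the actual implementation; the identifier used to match buffered and logged events is left abstract):\n\\begin{verbatim}\nclass SlaveState:\n    def __init__(self):\n        self.buffer = {}        # events received directly from switches\n        self.log = []           # master-ordered events learned via watches\n        self.committed = set()  # events whose bundle was committed\n\n    def on_switch_event(self, eid, event):\n        self.buffer[eid] = event            # kept in case the master fails\n\n    def on_master_logged(self, eid, event):\n        self.log.append((eid, event))       # watch fired: mirror the log\n        self.buffer.pop(eid, None)          # already ordered by the master\n\n    def on_packet_out_marker(self, eid):\n        self.committed.add(eid)             # switch executed these commands\n\n    def pending_after_failover(self):\n        # events a new master would still have to append to the shared log\n        return [self.buffer[eid] for eid in sorted(self.buffer)]\n\\end{verbatim}\n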
\nNew events are added to the in memory log (so it is kept up-to-date with the log maintained by the master) and the events are added to the pipeline queue in the same way as in the master controller.\nAn important detail is that event identifiers are set by the master controller, and when slaves deserialize the data obtained from nodes stored in ZooKeeper they get the same \\texttt{RamaEvent} objects created by the master.\nTherefore, the events will be queued in the same order as they were in the master controller replica.\n\n\\subsection{Bundle Manager}\n\\label{sec:bundle-manager}\n\\begin{sloppypar}\nThe Bundle Manager module keeps state related to the open bundles for each switch (as result of an event) and is responsible for adding messages to the bundle, closing and committing it.\nTo guarantee transparency to applications, we modified the write method in \\texttt{OFSwitch.java} (the class that is used by all modules to send commands to switches) to call the Bundle Manager.\nThis module will wrap the message sent by application modules in a \\texttt{OFPT{\\_}BUNDLE{\\_}ADD\\_MESSAGE} and send it to the switch.\nThis process is transparent because applications are unaware of the Bundle Manager module.\n\\end{sloppypar}\n\\begin{sloppypar}\nIn the end of the pipeline, the Bundle Manager module is thus called to prepare and commit the bundles containing the commands instructed by the modules as a response to this event.\nNote that one event may cause modules to send commands to multiple switches, so in this step the Bundle Manager may send \\texttt{OFPBCT\\_COMMIT\\_REQUEST} to one or more switches.\nBefore committing the bundle, the Bundle Manager also adds a \\texttt{OFPT\\_PACKET\\_OUT} message to it, so that slave controllers will know if the commands for an event were committed or not in the switch (as explained in Section \\ref{sec:ft_protocol}). 
\nThis message will be received by the slave controllers as a \\texttt{OFPT\\_PACKET\\_IN} message with the required information set by the master controller.\n\\end{sloppypar}\n\n\\subsection{Event batching}\n\\label{sec:event-batching}\n\nFloodlight thread architecture was modified to allow event batching, for performance reasons.\nConsidering that ZooKeeper is running on a separated machine from the master controller replica, sending one event at a time to ZooKeeper would significantly degrade performance.\nTherefore, the ZKManager groups events before sending them to ZooKeeper in batches.\nBatches are sent to ZooKeeper using a special request called \\texttt{multi}, which contains a list of operations to execute (e.g., create, delete, set data).\nFor event replication, the multi request will have a list with multiple create operations as parameter.\nThis request is sent after reaching the maximum configured amount of events (e.g., 1000) or some time after receiving the first event in the batch (e.g., 50ms).\nThis means that each event has a maximum delay bound (regarding event batching).\n\n\\section{Evaluation}\n\\label{evaluation}\n\nIn this section we evaluate Rama to understand its viability and the costs associated with the mechanisms used to achieve the desired consistency properties (without modifying the OpenFlow protocol or switches).\n\n\\subsection{Setup}\n\nFor the evaluation we used 3 machines connected to the same switch via 1Gbps links as shown in Figure \\ref{fig:setup}.\nEach machine has an Intel Xeon E5-2407 2.2GHz CPU and 32 GB (4x8GB) of memory.\nMachine 1 runs one or more Rama instances, machine 2 runs ZooKeeper 3.4.8, and machine 3 runs Cbench to evaluate controller performance.\nThis setup tries to emulate a scenario similar to a real one with ZooKeeper on a different machine for fault-tolerance purposes, and Cbench on a different machine to include network latency. 
\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/setup.png}\n \\caption{Experiment setup}\n \\label{fig:setup}\n\\end{figure}\n\n\\begin{figure*}[h!]\n\t\\centering\n \\begin{subfigure}[t]{0.25\\linewidth}\n \\centering\n \\resizebox{\\columnwidth}{!}{\n \\begin{tikzpicture}\n\t\t\t\t\\begin{axis}[\n\t\t\t\t\t\tybar=10pt,\n\t\t\t\t\t\tbar width=12pt,\n\t\t\t\t\t\tx=1.8cm,\n\t\t\t\t\t\tymin=0,\n\t\t\t\t\t\taxis on top,\n\t\t\t\t\t\tymax=60,\n\t\t\t\t\t\tylabel=Throughput (K Responses\/s),\n\t\t\t\t\t\txtick=data,\n\t\t\t\t\t\tenlarge x limits=0.6,\n\t\t\t\t\t\n symbolic x coords={Ravana,Rama},\n\t\t\t\t\t\taxis lines*=left,\n\t\t\t\t\t\tclip=false,\n\t\t\t\t\t\ttranspose legend,\n\t\t\t\t\t\tlegend style={draw=none,at={(0.5,1.3)},anchor=north},\n nodes near coords,\n cycle list name=black white,\n every axis plot\/.append style={fill=gray,no markers}\n\t\t\t\t\t]\n\t\t\t\t\t\\addplot coordinates {(Ravana,46.4) (Rama,28.3)};\n\t\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n }\n \\caption{Fault-tolerant controllers throughput}\n \\label{fig:controllers-throughput}\n \\end{subfigure}\n \\hspace{1cm}\n \\begin{subfigure}[t]{0.25\\linewidth}\n \\centering\n \\resizebox{\\columnwidth}{!}{\n \\begin{tikzpicture}\n\t\t\t\t\\begin{axis}[\n\t\t\t\t\t\tybar=0pt,\n\t\t\t\t\t\tbar width=12pt,\n\t\t\t\t\t\tx=1.7cm,\n\t\t\t\t\t\tymin=0,\n\t\t\t\t\t\taxis on top,\n\t\t\t\t\t\tymax=60,\n\t\t\t\t\t\tylabel=Throughput (K Responses\/s),\n\t\t\t\t\t\txtick=data,\n\t\t\t\t\t\tenlarge x limits=0.6,\n\t\t\t\t\t\tsymbolic x coords={Ravana,Rama},\n\t\t\t\t\t\taxis lines*=left,\n\t\t\t\t\t\tclip=false,\n\t\t\t\t\t\ttranspose legend,\n\t\t\t\t\t\tlegend style={draw=none,at={(0.5,1.3)},anchor=north, column sep=1.5mm},\n legend cell align=left,\n nodes near coords=\\pgfmathfloatifflags{\\pgfplotspointmeta}{0}{}{\\pgfmathprintnumber{\\pgfplotspointmeta}},\n every node near coord\/.append style={rotate=90, anchor=west},\n\t\t\t\t\t]\n\t\t\t\t\t\\addplot coordinates {(Ravana,52) (Rama,35.6)};\\label{legend-blue}\n\t\t\t\t\t\\addplot coordinates {(Ravana,0) (Rama,50.1)};\\label{legend-red}\n\t\t\t\t\t\\addplot coordinates {(Ravana,46) (Rama,28.3)};\\label{legend-brown}\n\t\t\t\t\t\\legend{Exactly-once events, Exactly-once commands, Both}\n\t\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n }\n \\caption{Throughput with different consistency guarantees}\n \\label{fig:ravana-vs-rama}\n \\end{subfigure}\n \\hspace{1cm}\n \\begin{subfigure}[t]{0.35\\linewidth}\n \\centering\n \\resizebox{\\columnwidth}{!}{\n \\begin{tikzpicture}\n \\begin{axis}[\n xlabel={Number of switches},\n ylabel={Throughput (Responses\/s)},\n xmin=1, xmax=7,\n ymin=0, ymax=60,\n xtick={1,2,3,4,5,6,7},\n xticklabels={1,2,4,8,16,32,64},\n ytick={0,10,20,30,40,50,60},\n yticklabels={0K,10K,20K,30K,40K,50K,60K},\n \n legend style={draw=none,at={(0.5,1.3)},anchor=north},\n ymajorgrids=true,\n grid style=dashed,\n ]\n \\addplot[\n color=blue,\n mark=*,\n ]\n coordinates {\n (1,22.0)(2,23.5)(3,28)(4,31.8)(5,35.6)(6,36.5)(7,36.1)\n };\n \\addplot[\n color=red,\n mark=*,\n ]\n coordinates {\n (1,32.0)(2,36.8)(3,44.2)(4,45.3)(5,50.7)(6,51.9)(7,51.0)\n };\n \\addplot[\n color=brown,\n mark=*,\n ]\n coordinates {\n (1,15.3)(2,16.4)(3,22.1)(4,24.3)(5,28.3)(6,28.7)(7,29.1)\n };\n \\legend{Exactly-once events, Exactly-once commands, Both}\n \\end{axis}\n\t\t\t\\end{tikzpicture}\n }\n \\caption{Rama throughput with different number of switches}\n \\label{fig:rama-throughput-switches}\n \\end{subfigure}\n\\caption{Throughput} 
\n\\label{fig:rama-throughput}\n\\end{figure*}\n\n\\subsection{Rama performance}\n\nWe have compared the performance of Rama against Ravana~\\cite{katta2015}.\nFigure \\ref{fig:controllers-throughput} shows the throughput for each controller (for Ravana we use the results reported in \\cite{katta2015}, as its authors considered a similar setup).\nFor Rama measurements we run Cbench emulating 16 swit\\-ches.\n\nRama achieves a throughput close to 30K responses per second.\nThis figure is lower than Ravana's, as our solution incurs in higher costs compared to Ravana for the consistency guarantees provided.\nThe additional overhead is caused by two requirements of our protocol.\nFirst, current switches' lack of mechanisms to allow temporary storage of OpenFlow events and commands require Rama to instruct switches to send all events to all replicas, increasing network overhead.\nSecond, the lack of acknowledgement messages in OpenFlow leads Rama to a more expensive solution -- bundles -- to achieve similar purposes.\nThe overhead introduced by these mechanisms is translated into reduced throughput when compared with Ravana.\n\nIn figure \\ref{fig:ravana-vs-rama} we show, separately, throughput results considering the different levels of consistency provided by both Rama and Ravana.\nThe exactly-once events consistency level (\\ref{legend-blue}) ensures that no events are lost and that controllers do not process repeated events.\nAdditionally, controllers must agree on a total order of events to be delivered to applications.\nFor the latter, both Rama and Ravana rely on ZooKeeper to build a shared log across controllers.\nIn our case, the master controller batches events in multiple requests to ZooKeeper, waits for replies, and orders the events before adding them to the Pipeline Queue.\nNote that neither Rama nor Ravana wait for ZooKeeper to persistently store requests on disk (they both use ZooKeeper in-memory).\nIn our case, the multi-request is sent asynchronously (i.e., threads are freed to continue operation) and a callback function is registered. \nThis function will be activated when ZooKeeper replies to our multi request and enqueues the logged events (in order) in the Pipeline Queue to be processed by the modules.\nIn Ravana the processing is equivalent.\n\nThe Exactly-once commands semantics (\\ref{legend-red}) ensures that commands sent by controllers are not lost and that switches do not receive duplicate commands.\nRavana relies on switches to explicitly acknowledge each command and filter repeated ones.\nFor Rama, this includes maintaining state of all opened bundles for switches, and sending additional messages to the switches.\nInstead of replying only with a Packet Out as in Floodlight, Rama must send messages to open the bundle, add the Packet Out to it, close the bundle and commit it.\nTo evaluate this case, we modified Cbench to make switches increase their counters only when they receive a Commit Request message from the controller.\nThis allows a fair evaluation of the performance of Rama in a real system -- indeed, in Rama a packet will only be forwarded after committing the bundle on the switch to guarantee consistent processing.\n\nAs show in Figure \\ref{fig:ravana-vs-rama}, some guarantees are costlier to ensure than others\\footnote{Note that we do not include the results from Exactly-once commands in Ravana as these are not available in~\\cite{katta2015}. 
It is possible, however, to extrapolate that the results will be inline with the rest of the analysis.}.\nFor instance, the cost of providing Exactly-once events semantics is higher than Exactly-once commands semantics.\nThis result brings with it an important insight: the system bottleneck is the coordination service.\nIn other words, the additional mechanisms Rama uses to guarantee the desired consistency properties add overhead but, crucially, system performance is not limited by these mechanisms.\n\nFigure \\ref{fig:rama-throughput-switches} shows how maintaining multiple switch connections affects Rama throughput.\nAs switches send events at the highest possible rate, the throughput of the system saturates with around 16 switches.\nImportantly, the throughput does not decrease with a higher number of switches.\n\n\n\\subsection{Event batching}\n\nRama batches events to reduce the communication overhead of contacting ZooKeeper.\nIn practice, events are sent to ZooKeeper after reaching a configurable number of events in the batch (batching size) or after a configurable timeout (batching time).\n\nTo evaluate batching we conducted a series of tests with different configurations to understand how the batching size and time affects Rama performance (Figure \\ref{fig:rama-batch-size}).\nIntuitively, a larger batching size will increase throughput, but as downside will also increase latency.\nAs batching size increases, throughput increases due to the reduction of RPC calls required to replicate events.\n\n\n\\begin{figure}[t]\n \\centering\n \\resizebox{.75\\linewidth}{!}{\n \\begin{tikzpicture}\n \\begin{axis}[\n xlabel={Batch size},\n ylabel={Throughput (Responses\/s)},\n xmin=1, xmax=7,\n ymin=10, ymax=30,\n xtick={1,2,3,4,5,6,7},\n xticklabels={10,100,200,400,600,800,1000},\n ytick={15,20,25,30},\n yticklabels={15K,20K,25K,30K},\n ymajorgrids=true,\n grid style=dashed,\n ]\n \\addplot[\n color=blue,\n mark=*,\n ]\n \n \n coordinates {\n (1,16.0)\n (2,16.9)\n (2.5,17.3)\n (3,17.9)\n (3.5,18.4)\n (4,19.1)\n (4.5,20.5)\n (5,22.6)\n (5.5,24)\n (6,24.9)\n (6.5,25.5)\n (7,28.3)\n };\n \\end{axis}\n \\end{tikzpicture}\n }\n \\caption{Variation of Rama throughput with batch size}\n \\label{fig:rama-batch-size}\n \n\\end{figure}\n\n\\subsection{Failover Time}\n\nTo measure the time for Rama to react to failures we use mininet~\\cite{mininet}, OpenvSwitch~\\cite{pfaff2015}, and iperf.\nWe setup a simple topology in Mininet with one switch and two hosts, one to act as iperf server and another as client.\nWe start the client and server in UDP mode, with the client generating 1 Mbit\/sec for 10 seconds.\nThe switch connects to two Rama instances and sends all events to both controllers.\nEach Rama instance is connected to the ZooKeeper server running on another machine (as before) with a negotiated session timeout of 500ms.\nTo make sure that no rules are installed on the switch -- so that events are sent to the controllers each time a packet arrives -- we run Rama with a module that only forwards packets (using Packet Out messages) without modifying the switch's tables.\n\nFigure \\ref{fig:rama-failover} shows the reported bandwidth from the iperf server and indicates the time taken by Rama to react to failures. 
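\n\nA topology equivalent to the one used in this experiment can be brought up with a short Mininet script along the following lines (the controller addresses are placeholders and the actual test scripts may differ in detail; the master process is killed manually during the transfer to trigger the failover):\n\\begin{verbatim}\nfrom mininet.net import Mininet\nfrom mininet.node import RemoteController\nfrom mininet.topo import SingleSwitchTopo\n\n# one switch, two hosts; the switch must speak OpenFlow 1.4 or later\n# for bundle support (enabled in the switch configuration, not shown)\nnet = Mininet(topo=SingleSwitchTopo(2), controller=None)\nnet.addController('c1', controller=RemoteController,\n                  ip='192.168.1.10', port=6653)   # master Rama instance\nnet.addController('c2', controller=RemoteController,\n                  ip='192.168.1.11', port=6653)   # slave Rama instance\nnet.start()\n\nh1, h2 = net.get('h1'), net.get('h2')\nh2.cmd('iperf -s -u &')                           # UDP server\nprint(h1.cmd('iperf -c ' + h2.IP() + ' -u -b 1M -t 10'))\nnet.stop()\n\\end{verbatim}\n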
\nNamely, the slave replica takes around 550ms to react to faults.\nThis includes the time for: (a) ZooKeeper to detect the failure and notify the slave replicas (500ms); (b) electing a new leader for the swit\\-ches; (c) the new leader to transition to master (finish processing logged events from the old master to reach the same internal state); (d) append buffered events to the log and start delivering unprocessed events in the log to applications so they start sending commands to the switches.\nAs is clear, the major factor affecting failover time is the time ZooKeeper needs to detect the failure of the master controller.\n\n\\begin{figure}[h!]\n \\centering\n \\resizebox{.75\\linewidth}{!}{\n \\begin{tikzpicture}\n \\begin{axis}[\n\t\txlabel={Time (s)},\n\t\tylabel={Bandwidth (Mbits\/sec)},\n\t\txmin=1, xmax=10,\n\t\tymin=0, ymax=1.5,\n\t\txtick={1,2,3,4,5,6,7,8,9,10},\n\t\n\t\n\t\tytick={0,1},\n\t\n\t\n\t\n\t\tlegend style={legend pos=south east},\n\t\tymajorgrids=true,\n\t\tgrid style=dashed,\n\t\t]\n\t\t\\addplot[\n\t\n\t\tcolor=blue,\n\t\tmark=none,\n\t\tline width=0.5mm\n\t\t]\n\t\tcoordinates {\n (1,1)\n (3,1)\n (3.4,0.6)\n (3.5,0)\n (4,0)\n (4,1)\n (10,1)\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\t};\n \\draw[gray, dashed, thick] (3.4,0) -- (3.4,1.5);\n \\draw[gray, dashed, thick] (4.0,0) -- (4.0,1.5);\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\t\\legend{}\n \\end{axis}\n \\end{tikzpicture}\n }\n\\caption{Rama failover time}\n\\label{fig:rama-failover}\n\\end{figure}\n\n\n\n\\subsection{Summary}\n\nRama comes close, but does not achieve the performance of Ravana.\nThis is due to the fact that our system incurs in higher costs. \nRama requires more messages to be sent over the network and introduces new mechanisms, such as bundles, which increase the overhead of the solution in order to achieve the same properties as Ravana.\nDespite the (relatively small) loss in performance, the value proposition of Rama of guaranteeing consistent command and event processing without requiring modifications to switches or to the OpenFlow protocol still makes it an effective enabler for immediate adoption of fault-tolerant SDN solutions.\n\n\\section{Related work}\n\\label{related}\n\n\\textbf{Consistent SDN.} Levin et al. \\cite{levin2012} have explored the trade-offs of state distribution in a distributed control plane, motivating the importance of strong consistency in applications' performance.\nOn the one hand, view staleness affects the correct operation of applications, which may lead to poor network performance.\nOn the other, applications need to be more complex in order to be aware of possible network inconsistencies.\n\nHaving a strongly consistent network view across the controllers may be critical to the operation of some applications (e.g., load balancing) in terms of correctness and performance \\cite{levin2012}.\nHowever, as noted in the CAP theorem, a system can not provide availability while also achieving strong consistency in the presence of network partitions.\nBecause of this, fault-tolerant and distributed SDN architectures must use techniques to explicitly handle partitions in order to optimize consistency and availability (and thus achieving a tradeoff between them)~\\cite{brewer2012}.\n\nPart of the strong consistency in the controllers comes from a consistent packet processing (i.e., packets received from switches). 
OF.CPP \\cite{perevsini2013} explores the consistency and performance problems associated with packet processing at the controller and proposes the use of transactional semantics to solve them.\nThese semantics are achieved by using multi-commit transactions, where each event is a sub transaction, which can commit or abort, of the related packet (the main transaction).\nHowever, this transactional semantics in packet processing is not enough: controllers should also coordinate to guarantee the same semantics in the switches' state. \nSpecifically, the commands sent by the controllers should be processed exactly once by the corresponding switches -- a problem our work addresses.\n\n\\textbf{Consistent network updates.} The concepts of per-packet and per-flow consistency in SDN were introduced in \\cite{reitblatt2011} to provide a useful abstraction for applications: consistent network updates.\nWith consistent updates, packets or flows in flight are processed exclusively by the old or by the new network policy (never a mix of both).\nFor example, with per-packet consistency, every packet traversing the network is processed by exactly one consistent global network configuration.\nThe authors extend this work in \\cite{reitblatt2012} and implement Kinetic, which runs on top of NOX \\cite{gude2008} to offer these abstractions in a control plane to be used by applications.\nThe main mechanism used to guarantee consistent network updates is the use of a two-phase protocol to update the rules on the switches.\nFirst, the new configuration is installed in an unobservable way (no packets go through these rules yet).\nAfterwards, the switch's ingress ports are updated one-by-one to stamp packets with a new version number (using VLAN tags).\nOnly packets with the new version number are processed by the new rules.\n\nIn \\cite{canini2013}, Canini et al. extend Kinetic to a distributed control plane and formalize the notion of fault-tolerant policy composition.\nTheir algorithm also requires a bounded number of tags, regardless of the number of installed updates, as opposed to the unbounded number of tags in \\cite{reitblatt2012}.\n\nThis class of proposals addresses consistent network updates, which is an orthogonal problem to the one addressed here.\n\n\\textbf{Fault-tolerance in SDN.} Botelho et al.~\\cite{botelho2014} and Katta et al.~\\cite{katta2015} both address fault tolerance in the control plane while achieving strong consistency.\nIn~\\cite{botelho2014} the authors proposes SMaRtLight, a fault-tolerant controller architecture for SDN. 
\nTheir architecture uses a hybrid replication approach: passive replication in the controllers (one primary and multiple backups) and active replication in an external distributed data store, to achieve durability and strong consistency.\nThe controllers are coordinated through the data store and caching mechanisms are employed to achieve acceptable performance.\nIn~\\cite{botelho2016} the authors extend their solution to a distributed deployment.\nIn contrast to our solution, SMaRtLight requires applications to be modified to use the data store directly.\nMore importantly, the solution does not consider the consistency of switch state in the system model.\nRavana~\\cite{katta2015} was the first fault-tolerant controller that integrates switches into the problem.\nThe techniques proposed by its authors guarantee correctness of event processing and command execution in SDN.\nThe main factor differentiating our work from Ravana is that our solution requires changes neither to the OpenFlow protocol nor to switches.\n\n\\textbf{Distributed SDN controllers}. The need for scalability and dependability has been a motivating factor for distribution and fault-tolerance in SDN control planes.\nOnix~\\cite{koponen2010}, the first distributed, dependable, production-level solution, considered these problems from the outset.\nAs the choice of the ``right'' consistency model was perceived as fundamental by its authors, Onix offered two data stores to maintain the network state: an eventually consistent and a strongly consistent option.\nONOS~\\cite{berde2014} is an open-source solution that shares with Onix the fact that controller state is stored in an external consistent data-store.\nBoth approaches ensure a consistent ordering of events between controller replicas, but they do not include switch state and hence can lead to the network anomalies of traditional replication solutions. 
\n\n\\textbf{Traditional fault-tolerance techniques.} Viewstamped Replication~\\cite{oki1988}, Paxos~\\cite{lamport1998}, and Raft~\\cite{ongaro2014} are well-known distributed consensus protocols used for replication of state machines in client-server models.\nNone of these widely-used protocols is directly applicable in the context of SDN, where to guarantee correctness it is necessary not only to have consistent controller state, but also switch state.\n\n\\section{Conclusions}\n\nIn a fault-tolerant SDN, maintaining consistent controller state is not enough to achieve correctness.\nUnlike traditional distributed systems, in SDN it is necessary to consistently handle switch state to avoid loss or repetition of commands and events under controller failures.\nTo address these challenges we propose Rama, a consistent and fault-tolerant SDN controller that handles the entire event processing cycle transactionally.\n\nRama differs from the existing alternative, Ravana ~\\cite{katta2015}, by not requiring modifications to the OpenFlow protocol nor to switches.\nThis comes at a cost, as the techniques introduced in Rama incur in a higher overhead when compared to Ravana.\nAs the overhead leads to a relatively modest decrease in performance, we expect, in practice, this to be compensated by the fact that our solution is immediately deployable.\nWe make our software available open source\\footnote{https:\/\/github.com\/fvramos\/rama} to further foster adoption of fault-tolerant SDN.\n\nAs for future work, besides devising a formal proof on the consistency guarantees Rama provides, we plan to address correctness in distributed SDN deployments and to consider richer fault models. \n\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{s:intro}\n\\setcounter{equation}{0}\nGeneral Relativity is presently the most successful theory for describing the gravitational interaction\nat the classical level.\nIts own failure is marked by the prediction of the formation of geodesic singularities whenever a trapped surface arises from the gravitational collapse of a compact object.\\footnote{We\nalso recall that pointlike sources are mathematically incompatible with the Einstein field equations~\\cite{geroch}.}\nSuch considerations open up the possibility that significant departures from General Relativity\nmight occur where our experimental data do not yet place strong enough constraints,\nlike for example in regions of strong gravity near a very massive source.\nHowever, Einstein's field equations are not linear and this makes it difficult to modify the laws of gravity in the\nstrong-field regime without affecting also the weak-field behavior, since these regimes are likely to be related\nnontrivially in any nonlinear theories.\n\\par\nThe bootstrapped Newtonian gravity~\\cite{BootN,Casadio:2020mch} is an attempt at\ninvestigating these issues in a somewhat simplified context.\nThe approach, based on Deser's conjecture~\\cite{deser}, consists of retrieving the full Einstein's theory including gravitational self-coupling terms in the Fierz-Pauli action.\\footnote{This\nidea is indeed older, see {\\em e.g.}~Ref.~\\cite{Feynman}.} \nThese additional terms must be consistent with diffeomorphism invariance, in order to preserve the covariance of any (modified) metric theory.\nWe can obtain different modified gravitational theories depending on the choice of boundary conditions in the reconstruction procedure~\\cite{rubio}.\nA key observation 
is that a practically effective dynamics can be derived only starting with a ``small'' contribution of matter sources.\nFor large astrophysical sources, this implies that the matter source must also be included in a nonperturbative way.\nIn the present approach this task is addressed starting from the Fierz-Pauli action corresponding to the potential generated by an arbitrarily large static source, and adding extra terms representing the gravitational self-coupling.\nFurthermore, the coupling constants for the additional terms are not fixed to their Einstein-Hilbert values in order to accommodate diverse underlying dynamics.\nThis approach then results in a nonlinear equation including pressure effects and the gravitational self-interaction terms to next-to-leading order in the Newton constant, whose solution is the gravitational potential acting on test particles at rest.\nSuch an equation was used to investigate compact objects~\\cite{Casadio:2019cux,Casadio:2020kbc,Casadio:2019pli} and coherent quantum states~\\cite{Casadio:2016zpl,Casadio:2017cdv}.\\footnote{These quantum states show some of the properties~\\cite{ciarfella} found in the corpuscular model of black holes~\\cite{DvaliGomez}.\nHowever, we shall not discuss quantum aspects in this work.}\n\\par\nThe motion of (test) particles and photons in the surroundings of a compact object provides the most immediate tool to gather information on the gravitational potential in which they move.\nIn Ref.~\\cite{Casadio:2021gdf}, a full (effective) metric tensor was obtained from the bootstrapped Newtonian potential, which allows one to study these trajectories in general, and to compare them with results from General Relativity.\nThe requirement that the resulting theory of gravity be covariant is satisfied by the use of an effective metric tensor, since the bootstrapped Newtonian dynamics is implicitly assumed to be invariant under coordinate transformations.\nNonetheless, the particular metric found in Ref.~\\cite{Casadio:2021gdf} differs from the Schwarzschild geometry; hence, it is not a solution of the Einstein equations in the vacuum.\nAn effective fluid is therefore present, as was already noted in the cosmological context~\\cite{cosmo}.\n\\par\nThe bootstrapped effective metric is given as a function of parameterized post-Newtonian (PPN) parameters~\\cite{weinberg} in the weak-field expansion.\nThese parameters can be consistently chosen so as to minimize deviations from the Schwarzschild metric only up to a point.\nIn fact, some of the PPN parameters are uniquely related, and at the PPN order determined in Ref.~\\cite{Casadio:2021gdf}, they can be expressed in terms of one free parameter. \nIn this work, we report on a phenomenological investigation aiming at placing bounds on this remaining free parameter from the measured precessions in the Solar System~\\cite{DeMartino2018,Moyer200,Will2018} and from the study of S-star orbits around the black hole in the center of the Galaxy~\\cite{Eckart1996,Eckart1997,Gillessen:2009,Gillessen2009L,Ghez1998,Ghez:2008}.\n\\par\nThe paper is organized as follows.\nIn Sec.~\\ref{s:boot}, we briefly review the equation for the bootstrapped Newtonian potential and its solution in the vacuum.\nWe then recall the full effective metric reconstructed from this potential, which is then used to analyze Solar System data and S-star motions in Sec.~\\ref{s:app}.
We conclude with comments and an outlook in Sec.~\\ref{s:conc}.\n\\section{Bootstrapped Newtonian vacuum}\n\\label{s:boot}\n\\setcounter{equation}{0}\nWe shall only review briefly the derivation of the bootstrapped Newtonian equation, since all the\ndetails can be found in Refs.~\\cite{Casadio:2017cdv,BootN,Casadio:2019cux,Casadio:2019pli}. We shall use units with\nthe speed of light $c=1$ in this section.\nWe start from the Lagrangian for the Newtonian potential $V=V(r)$ generated by a static\nand spherically symmetric source of density $\\rho=\\rho(r)$, to wi\n\\begin{equation}\nL_{\\rm N}[V]\n=\n-4\\,\\pi\n\\int_0^\\infty\nr^2\\,\\d r\n\\left[\n\\frac{\\left(V'\\right)^2}{8\\,\\pi\\,G_N}\n+V \\rho\n\\right]\n\\ .\n\\label{LVn}\n\\end{equation}\nThe corresponding Euler-Lagrange field equation is given by Poisson's\n\\begin{equation}\n\\dfrac{1}{r^2}\\,\\dfrac{d}{d r}\n\\left(r^2\\,\\frac{dV}{dr}\\right)=\n4\\,\\pi\\,G_N\\,\\rho\n\\ ,\n\\label{poisson}\n\\end{equation}\nwhere we recall that the radial coordinate $r$\nis the one obtained from harmonic coordinates~\\cite{weinberg,Casadio:2021gdf}.\nWe next couple $V$ to a gravitational current proportional to its own energy density,\n\\begin{equation}\nJ_V\n\\simeq\n4\\,\\frac{d U_{ N}}{d \\mathcal{V}} \n=\n-\\dfrac{\\left[V'(r)\\right]^2}{2\\,\\pi\\,G_N}\n\\ ,\n\\end{equation}\nwhere $\\mathcal{V}$ is the spatial volume and $U_{\\rm N}$ is the Newtonian potential energy.\nWe also add the ``one loop term'' $J_{\\rho}\\simeq-2\\,V^2$, which couples to $\\rho$, and\nthe pressure term $p$~\\cite{Casadio:2019cux}.\nThe total Lagrangian then reads \n\\begin{align}\nL[V]\n=&\n-4\\,\\pi\n\\int_0^\\infty\nr^2\\,\\d r\n\\left[\n\\frac{\\left(V'\\right)^2}{8\\,\\pi\\,G_N}\n\\left(1-4\\,q_V\\,V\\right)\n\\right.\\nonumber\\\\&\\left.+\\left(\\rho+3\\,q_p\\,p\\right)V\n\\left(1-2\\,q_\\rho\\,V\\right)\\right]\\,\n\\label{LagrV}\n\\end{align}\nwhere the coupling constants $q_V$, $q_p$ and $q_\\rho$ can be used to track the effects of the different\ncontributions.\nFor instance, the case $q_V=q_p=q_\\rho=1$ reproduces the Einstein-Hilbert action at next-to-leading order\nin perturbations around Minkowski~\\cite{Casadio:2017cdv,Casadio:2019cux,Casadio:2019pli}.\nFinally, the bootstrapped Newtonian field equation reads\n\\begin{eqnarray}\n\\dfrac{1}{r^2}\\,\\dfrac{d}{d r}\n\\left(r^2\\,\\dfrac{d V}{d r}\\right)\n&&=\n4\\,\\pi\\,G_N\n\\dfrac{1-4\\,q_\\rho\\,V}{1-4\\,q_V\\,V}\n\\left(\\rho+3\\,q_p\\,p\\right)\n\\nonumber\\\\&&+\\dfrac{2\\,q_V\\left(V'\\right)^2}\n{1-4\\,q_V\\,V}\n\\ ,\n\\label{EOMV}\n\\end{eqnarray}\nwhich must be solved along with the conservation equation $p' = -V'\\left(\\rho+p\\right)$. 
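\n\nBefore turning to the vacuum case, we note that the consistency of the expressions that follow can be checked with a few lines of computer algebra. The short sympy sketch below (purely illustrative and not part of any data analysis) verifies that the closed-form potential quoted in the next subsection solves the source-free limit ($\\rho=p=0$) of Eq.~\\eqref{EOMV} and recovers its large-$r$ expansion:\n\\begin{verbatim}\nimport sympy as sp\n\nr, q, G, M = sp.symbols('r q G M', positive=True)\n\n# closed-form vacuum potential quoted in the next subsection\nV = (1 - (1 + 6*q*G*M/r)**sp.Rational(2, 3)) / (4*q)\n\n# source-free field equation: Laplacian term vs. self-coupling term\nlhs = sp.diff(r**2 * sp.diff(V, r), r) / r**2\nrhs = 2*q*sp.diff(V, r)**2 / (1 - 4*q*V)\n\nprint(sp.simplify(lhs - rhs))                              # expected: 0\nprint((lhs - rhs).evalf(subs={q: 1, G: 1, M: 1, r: 2.5}))  # numerical check\n\n# large-r expansion, to be compared with the asymptotic form below\nprint(sp.series(V, r, sp.oo, 4))\n\\end{verbatim}\nThe expansion reproduces the Newtonian term and the post-Newtonian corrections displayed in the next subsection.\n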
\n\\subsection{Vacuum potential}\nIn vacuum, we have $\\rho=p=0$, and Eq.~\\eqref{EOMV} simplifies to\n\\begin{equation}\n\\frac{1}{r^2}\\,\\frac{d}{d r}\n\\left(r^2\\,\\frac{d V}{d r}\\right)\n=\n\\frac{2\\,q \\left(V'\\right)^2}{1-4\\,q\\,V}\n\\ ,\n\\label{EOMV0}\n\\end{equation}\nwhere we renamed $q\\equiv q_V$ for simplicity.\nThe exact solution was found in Ref.~\\cite{BootN} and reads\n\\begin{eqnarray}\n\\label{potential}\nV(r)\n=\n\\frac{1}{4\\,q}\n\\left[1-\\left(1+\\frac{6\\,q\\,G_N\\,M}{r}\\right)^{2\/3}\\right]\n\\ .\n\\end{eqnarray}\nThe asymptotic expansion away from the source yields\n\\begin{equation}\nV_{2}\n\\simeq\n-\\frac{G_N\\,M}{r}\n+q\\,\\frac{G_N^2\\,M^2}{r^2}\n-q^2\\,\\frac{8\\,G_N^3\\,M^3}{3\\,r^3}\n\\ ,\n\\label{Vasy}\n\\end{equation}\nso that the Newtonian behavior is recovered for $q=0$, while the post-Newtonian terms\nare seen to depend on the coupling $q$ (see Fig.~\\ref{f:V}).\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[scale=0.9]{FIG1.pdf}\n \\caption{Bootstrapped Newtonian potential $V$ in Eq.~\\eqref{potential} compared to\n its expansion $V_2$ from Eq.~\\eqref{Vasy} and to the Newtonian potential $V_{\\rm N}$\n (for $q=1$).}\n \\label{f:V}\n\\end{figure}\n\n\\subsection{Vacuum effective metric}\n\\label{ss:metric}\nA complete spacetime metric was reconstructed from the vacuum potential~\\eqref{potential}\nin Ref.~\\cite{Casadio:2021gdf}.\nThe procedure is rather cumbersome, and we shall only recall here a few main steps leading\nto the necessary expressions in the weak-field regime. We explicitly show the speed of light\n$c$ from here on.\nOne starts from the PPN form~\\cite{weinberg}\n\\begin{widetext}\n\\begin{equation}\nds^2\\simeq-\\left[1-\\alpha\\dfrac{2R_g}{\\bar{r}}+(\\beta-\\alpha\\gamma)\\dfrac{2R_g^2}{\\bar{r}^2}+(\\zeta-1)\\dfrac{8R_g^3}{\\bar{r}^3}\\right]c^2dt^2 +\\left[1+\\gamma\\dfrac{2R_g}{\\bar{r}}+\\xi\\dfrac{4R_g^2}{\\bar{r}^2}+\\sigma\\dfrac{8R_g^3}{\\bar{r}^3}\\right]d\\bar{r}^2+\\bar{r}^2d\\Omega^2\n\\end{equation}\n\\end{widetext}\nwhere $R_g=\\frac{G_N\\,M}{c^2}$ and $\\bar{r}$ is the areal radius, which differs from the radial coordinate $r$ in which\nthe potential~\\eqref{potential}\nis expressed.\nThe latter is obtained from harmonic coordinates and the two radial coordinates are\nrelated by~\\cite{Casadio:2021gdf}\n\\begin{eqnarray}\nr\n\\simeq&&\n\\bar{r}\n+\n(1-3\\,\\gamma)\\,\\frac{R_g}{2}\n+\\nonumber\\\\\n&&+(1-3\\,\\gamma+2\\,\\gamma^2-2\\,\\Xi)\\,\n\\frac{R_g^2}{\\bar{r}}\n\\ ,\n\\end{eqnarray}\nin which $\\Xi$ is a free parameter.\nFurthermore, we have\n\\begin{equation}\nq\n=\n\\beta+\\frac{\\gamma-1}{2}\n\\ .\n\\label{eq:q}\n\\end{equation}\n\\par\nWe can next set $\\alpha=1$ by simply absorbing this coefficient in the definition of the mass $M$~\\cite{adm},\nand $\\beta=\\gamma=1$ in order to satisfy the experimental constraints $|\\gamma-1|\\simeq|\\beta-1|\\ll 1$.\nFrom Eq.~\\eqref{eq:q}, this is tantamount to setting $q=1$, as expected.\nThe higher order PPN parameters are then fully determined by $\\Xi$ according to \n\\begin{eqnarray}\n\\xi\n&\\!\\!=\\!\\!&\n1+\\Xi\n\\label{eq:xi}\n\\\\\n\\zeta\n&\\!\\!=\\!\\!&\n1-\\frac{5+6\\,\\Xi}{12}\n=\n\\frac{13-6\\,\\xi}\n{12}\n\\label{eq:zeta}\n\\\\\n\\sigma\n&\\!\\!=\\!\\!&\n\\frac{9+14\\, \\Xi}\n{4}\n\\ .\n\\label{eq:sigma}\n\\end{eqnarray}\nAs already noted in Ref.~\\cite{Casadio:2021gdf}, the General Relativistic PPN combination $\\xi=\\zeta=1$\ncannot be obtained for any value of $\\Xi$, and the bootstrapped metric for which we have the 
minimum deviation\nfrom the Schwarzschild form is thus given by \n\n\\begin{eqnarray}\nds^2\n\\simeq&&\n-\\left[\n1\n-\\frac{2\\,R_g}{r}\n-(5+6\\,\\Xi)\\,\\frac{2\\,R_g^3}{3\\,r^3}\n\\right]\nc^2\\,dt^2\n\\nonumber\n\\\\\n&&\n+\n\\left[\n1\n+\n\\frac{2\\,R_g}{r}\n+\n(1+\\Xi)\\,\\frac{4\\,R_g^2}{r^2}\n\\right.\\nonumber\\\\&&\\left.\n+\n(9+14\\, \\Xi)\\frac{2\\,R_g^3}{r^3}\n\\right]\ndr^2\n+\nr^2\\,d\\Omega^2\n\\ ,\n\\label{eq:g}\n\\end{eqnarray}\nin which we drop the bar from the areal coordinate for simplicity from now on.\nWe can see that there are contributions in the metric coefficients which cannot be reduced to the\nSchwarzschild expressions.\nThis deviation from the Schwarzschild solution is encoded by the free parameter $\\Xi$,\nwhose value is \\textit{a priori} unknown and must be constrained by observations.\nIn particular, we will test these corrections by analyzing the motion of the \nplanets in the Solar System and of the S-stars around Sgr~A*.\nThe geodesic equation\n\\begin{equation}\n\\Ddot{x}^{\\mu}+\\Gamma^{\\mu}_{\\alpha\\beta}\\,\\dot{x}^{\\alpha}\\,\\dot{x}^{\\beta}\n=\n0,\n\\end{equation}\nwhere a dot indicates the derivative with respect to the proper time, can be equivalently computed using the Euler-Lagrange equations\n\\begin{equation}\n\\dfrac{d}{d s}\n\\dfrac{\\partial\\mathcal{L}}{\\partial\\dot{x}^{\\mu}}\n-\\dfrac{\\partial\\mathcal{L}}{\\partial x^{\\mu}}\n=\n0\n\\ , \n\\end{equation}\nwith $\\mathcal{L}=g_{\\alpha\\beta}\\,\\dot x^\\alpha\\,\\dot x^\\beta=-1$ for a massive object.\nFrom the metric in Eq.~\\eqref{eq:g}, one then finds \n\\par\n\\begin{widetext}\n\\begin{align}\n\\Ddot{r}\n&\n=\n\\frac{R_g\n\\left\\{\n4\\, (1+\\Xi)\\, R_g\\,r\\,\\dot{r}^2\n+R_g^2\n\\left[3\n\\left(\n9+14\\Xi\n\\right)\n\\dot{r}^2\n-c^2\n\\left(5+6\\Xi\\right)\n\\dot{t}^2\n\\right]\n+r^2\\left(\\dot{r}^2-c^2\\,\\dot{t}^2\\right)\n\\right\\}\n+r^5\\,(\\dot{\\theta}^2+\\dot{\\phi}^2\\,\\sin^2\\theta)}\n{r\n\\left[2\\, (9+14\\, \\Xi)\\, R_g^3+4\\,(1+\\Xi)\\, R_g^2\\, r+2\\,R_g\\,r^2+\\,r^3\n\\right]}\n\\label{eq:ddr}\n\\\\\n\\Ddot{\\theta}\n&=\n\\dot{\\phi}^2\\,\\sin\\theta\\,\\cos\\theta\n-\\frac{2\\,\\dot{r}\\,\\dot{\\theta}}{r}\n\\\\\n\\Ddot{\\phi}\n&=\n-\\frac{2\\,\\dot{\\phi}}{r}\\,(\\dot{r}+r\\,\\dot{\\theta}\\,\\cot\\theta)\n\\\\\n\\Ddot{t}\n&\n=\n\\frac{6\\,\\dot{r}\\,\\dot{t}\n\\left[\n(5+6\\,\\Xi)\\, R_g^3\n+R_g\\,r^2\n\\right]}\n{2\\,(5+6\\,\\Xi)\\, R_g^3\\,r\n+6\\,R_g\\,r^3\n-3\\,r^4}\n\\ .\n\\label{eq:ddt}\n\\end{align}\n\\end{widetext}\n The third and fourth equations are the usual conservation equations for the angular momentum\nand energy conjugate to $t$, respectively.\nSpherical symmetry as usual implies that the orbital motion occurs on a plane, which we can\narbitrarily set at $\\theta=\\pi\/2$ with $\\dot\\theta=0$.\n\n\nThe above parametric system of nonlinear differential equations can be integrated numerically\nin order to study the orbits.\n\n\\subsubsection{Precession}\n\\label{ss:precession}\n\\par\nIt is easy to express the perihelion precession in terms of the PPN parameters~\\cite{weinberg}.\nAt leading order, one has\n\\begin{equation}\n\\label{eq:eq1}\n\\Delta\\phi^{(1)}\n=\n2\\,\\pi\n\\left(2-\\beta+2\\,\\gamma\\right)\n\\dfrac{R_g}{\\ell}\n\\ ,\n\\end{equation}\nwhere $\\ell=a\\,(1-e^2)$ is the \\textit{semilatus rectum}, $a$ is the semimajor axis\nand $e$ is the eccentricity.\nFor $\\beta=\\gamma=1$, Eq.~(\\ref{eq:eq1}) reproduces the General Relativistic result\n\\begin{equation}\n\\label{eq:eq2}\n\\Delta\\phi_S^{(1)}\n=\n6\\,\\pi\\,\\frac{R_g}{\\ell}\n\\ 
.\n\\end{equation}\nThe second order correction depends on $\\xi$ and $\\zeta$, and for $\\beta=\\gamma=1$,\nit reads~\\cite{Casadio:2021gdf}\n\\begin{align}\n\\Delta\\phi^{(2)}\n&\n=\n\\pi\\left[(41+10\\xi-24\\,\\zeta)\n+\n(16\\,\\xi-13)\\,\\frac{e^2}{2}\\right]\n\\frac{R_g^2}{\\ell^2}\n\\nonumber\n\\\\\n&\n\\simeq\n\\pi\\left[(37+22\\,\\Xi)+(3+16\\,\\Xi)\\,\\frac{e^2}{2}\\right]\n\\frac{R_g^2}{\\ell^2}\n\\nonumber\n\\\\\n&\n\\simeq\n\\Delta\\phi_S^{(2)}\n+\n2\\,\\pi\\left[\n11\\,\\xi-6+4\\,(\\xi-1)\\,e^2\\right]\n\\frac{R_g^2}{\\ell^2}\n\\ ,\n\\end{align}\nwhere the General Relativistic result $\\Delta\\phi_S^{(2)}$ corresponds to $\\xi=\\zeta=1$.\nFrom Eqs.~\\eqref{eq:xi} and \\eqref{eq:zeta}, it follows that we cannot have $\\xi=\\zeta=1$ for any value of $\\Xi$,\nand a deviation from General Relativity remains. \n\\section{Astronomical tests}\n\\label{s:app}\n\\setcounter{equation}{0}\nIn order to constrain the free parameter of the bootstrapped Newtonian potential, $\\Xi$,\nwe tested the theoretical results presented in Sec.~\\ref{ss:metric} against astronomical \ndata.\n\nTo infer a range of validity for $\\Xi$, we compared the analytical expression of the precession with the observed values of the perihelion advance of Solar System's planets (Sec.~\\ref{ss:precession2}).\n\nThen, we turned our attention to the Galactic Center, and we studied the motion of S-stars\norbiting around Sgr A*.\nTo constrain $\\Xi$, we let it vary in a given range and fit the corresponding simulated\norbits to astrometric observations.\nIn particular, we adopted a fully relativistic approach which consists of integrating numerically\nEqs.~\\eqref{eq:ddr}-\\eqref{eq:ddt} in order to get the mock orbits, instead of solving Newton's\nlaw with the standard potential replaced by the modified one. 
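\\par\nAs a simple illustration of the first analysis (a sketch only, not the code used in this work), note that $\\Delta\\phi=\\Delta\\phi^{(1)}+\\Delta\\phi^{(2)}$ with $\\beta=\\gamma=1$ is linear in $\\Xi$, so the allowed interval follows directly from the observed precession. In Python, with Mercury's parameters from Table~\\ref{tab:tab1} below,\n\\begin{verbatim}\nimport numpy as np\n\n# Mercury (values as in Table 1): semimajor axis a [m], eccentricity e,\n# period P [yr], observed precession and its uncertainty [arcsec/century].\nGN, c, Msun = 6.674e-11, 2.998e8, 1.989e30\nRg = GN*Msun/c**2\na, e, P = 57.909e9, 0.2056, 0.24\nobs, err = 43.10, 0.50\n\nell = a*(1.0 - e**2)                        # semilatus rectum\nconv = (180.0/np.pi)*3600.0*(100.0/P)       # rad/orbit -> arcsec/century\n\ndef dphi(Xi):\n    first = 6.0*np.pi*Rg/ell\n    second = np.pi*((37.0 + 22.0*Xi) + (3.0 + 16.0*Xi)*e**2/2.0)*(Rg/ell)**2\n    return (first + second)*conv\n\nslope = dphi(1.0) - dphi(0.0)               # dphi is linear in Xi\nXi_min = (obs - err - dphi(0.0))/slope\nXi_max = (obs + err - dphi(0.0))/slope\nprint(Xi_min, Xi_max)\n\\end{verbatim}\nthis gives an interval consistent with the one reported for Mercury in the last column of Table~\\ref{tab:tab1} (up to the precision of the adopted constants).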
\n\\subsection{Perihelion precession in the Solar System}\n\\label{ss:precession2}\nIn order to constrain $\\Xi$ we can start from the Solar System planets whose orbital precession\nhas been measured, namely Mercury, Venus, Earth, Mars, Jupiter and Saturn~\\cite{2015MNRAS.451.3034N}.\nThe confidence region for $\\Xi$ can be identified as the set of values such that the precession\n\\begin{equation}\n\\Delta\\phi=\\Delta\\phi^{(1)}+\\Delta\\phi^{(2)}\n\\label{eq:eq3}\n\\end{equation}\nis compatible with the observations.\nThe planetary parameters\\footnote{The reported values are taken from NASA fact sheet at\nhttps:\/\/nssdc.gsfc.nasa.gov\/planetary\/factsheet\/.},\nthe corresponding observed values of the precession~\\cite{2015MNRAS.451.3034N} and\nthe General Relativistic value obtained by Eq.~(\\ref{eq:eq2}) are reported in Table~\\ref{tab:tab1}\nfrom first to seventh columns.\n\\begin{table*}\n \\centering\n \\begin{tabular}{cccccccc}\n \\hline\\hline\n Planet & $a(10^6km)$ & $P(years)$ & $i(^{\\circ})$ & $e$ & $\\Delta\\phi_{obs}(''\/cy)$ & $\\Delta\\phi_S(''\/cy)$ & $[\\Xi_{min};\\Xi_{max}]$ \\\\\n \\hline\n $\\textbf{Mercury}$&$57.909$&$0.24$&$7.005$&$0.2056$&$43.1000\\pm0.5000$&$42.9822$&$[-89708.7;144995]$\\\\\n $\\textbf{Venus}$&$108.209$&$0.61$&$3.395$&$0.0067$&$8.6247\\pm0.0005$&$8.6247$&$[-1149.67;1167.47]$\\\\\n $\\textbf{Earth}$&$149.596$&$1.00$&$0.000$&$0.0167$&$3.8387\\pm0.0004$&$3.83881$&$[-3660.86;2094.96]$\\\\\n $\\textbf{Mars}$&$227.923$&$1.88$&$1.851$&$0.0935$&$1.3565\\pm0.0004$&$1.35106$&$[155248.;179879.]$\\\\\n $\\textbf{Jupiter}$&$778.570$&$11.86$&$1.305$&$0.0489$&$0.6000\\pm0.3000$&$0.0623142$&$[5.46709\\times10^8;1.92679\\times10^9]$\\\\\n $\\textbf{Saturn}$&$1433.529$&$29.45$&$2.485$&$0.0565$&$0.0105\\pm0.0050$&$0.0136394$&$[-1.57315\\times10^8;3.59618\\times10^7]$\\\\\n \\hline\\hline\n \\end{tabular}\n\\caption{Values of semimajor axis ($a$), orbital period ($P$), tilt angle ($i$), eccentricity ($e$), observed orbital precession\n($\\Delta\\phi_{obs}$), orbital precession as predicted by General Relativity ($\\Delta\\phi_S$) and constraints on $\\Xi$ for Solar System's planets.}\n \\label{tab:tab1}\n\\end{table*}\n\\begin{figure*}[!ht]\n \\centering\n \\includegraphics[scale=0.6]{Fig2.pdf}\n \\caption{Bootstrapped orbital precession as a function of the parameter $\\Xi$.\n Black lines give the theoretical prediction from Eq.~\\eqref{eq:eq3}, blue dashed lines represent\n the measurements adapted from Ref.~\\cite{2015MNRAS.451.3034N} and red lines depict the General Relativistic\n values as in Eq.~\\eqref{eq:eq2}.\n Confidence regions for $\\Xi$ are shaded in gray.}\n \\label{fig:fig1}\n\\end{figure*}\nThe allowed region of $\\Xi$ for each planet is defined as the range of values compatible with data, having as extremes the\nvalues of $\\Xi$ solving the equation\n\\begin{equation}\n\\Delta\\phi=\\Delta\\phi_{obs}\n\\ .\n\\end{equation}\nThe inferred lower and upper limits on $\\Xi$ are reported in the last column of Table~\\ref{tab:tab1}, and the included area\nis depicted in Fig.~\\ref{fig:fig1} for each planet (gray shades).\nIt is worth noticing the discrepancy between the General Relativistic value (the red line) and the observed precession\n(blue dashed lines) for Mars and Jupiter; it could be attributed to the incomplete subtraction of nonrelativistic effects\nfrom the observed value, complicated by the presence of the asteroid belt between Mars and Jupiter, and the presence\nof an anomalous residual precession 
\\cite{2015MNRAS.451.3034N,2013MNRAS.432.3431P}.\n\\par\nThe tightest interval on the parameter $\\Xi$ is obtained with Venus, for which it can vary between $-1149.67$ and $1167.47$.\nWe can use the values defining such an interval to predict the precession for Uranus, Neptune and Pluto,\nfor which no observation is available.\nThe results, summarized in Table~\\ref{tab:tab2}, show that the bootstrapped theory predictions are in excellent agreement\nwith General Relativity. \n\\begin{table*}\n \\centering\n \\begin{tabular}{ccccccc}\n \\hline\\hline\n Planet & $a(10^6km)$ & $P(years)$ & $i(^{\\circ})$ & $e$ & $\\Delta\\phi_{S}(''\/cy)$ & $[\\Delta\\phi_{min};\\Delta\\phi_{max}]$ \\\\\n \\hline\n $\\textbf{Uranus}$&$2872.463$&$84.01$&$0.772$&$0.0457$&$0.00238404$&$[0.00238404;0.00238405]$\\\\\n $\\textbf{Neptune}$&$4495.060$&$164.786$&$1.769$&$0.0113$&$0.000775374$&$[0.000775373;0.000775375]$\\\\\n $\\textbf{Pluto}$&$5869.656$&$247.936$&$17.16$&$0.2444$&$0.000419669$&$[0.000419669;0.00041967]$\\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{Orbital parameters from the NASA fact sheet, the General Relativistic prediction for the precession in the sixth column\n and the values predicted by the bounds on the parameter $\\Xi$ of the bootstrapped theory deduced for Venus (see Table~\\ref{tab:tab1}).}\n \\label{tab:tab2}\n\\end{table*}\n\\par\nNow it is useful to move to a different scale and analyze $S2$ (see Table \\ref{tab4}), the only S-star whose precession has been observed \\cite{GRAVITY:2020gka}.\nWe can next calculate the precession for Mars, Jupiter, and $S2$ with the values of $\\Xi$ as obtained from Mercury, Venus, Earth\nand Saturn to check agreement with the corresponding Schwarzschild value and with the observations (Table \\ref{tab3}). 
The results confirm the compatibility of our predictions with General Relativity.\nThe mean value of the parameter $\\Xi$ such that\n\\begin{equation}\n \\Delta\\phi\n =\n \\Delta\\phi_S\n\\end{equation}\nis given by\n\\begin{equation}\n\\Xi\n=\n-1.64236\\pm 0.10305\n\\ .\n\\label{eq:c2}\n\\end{equation}\n\\begin{table*}\n \\centering\n \\begin{tabular}{cccccccc}\n \\hline\\hline\n Star & $a(AU)$ & $P(years)$ & $i(^{\\circ})$ & $e$ & $\\Delta\\phi_{obs}(''\/\\text{orbit})$ & $\\Delta\\phi_S(''\/\\text{orbit})$ & $[\\Xi_{min};\\Xi_{max}]$ \\\\\n \\hline\n $\\textbf{S2}$&$1031.32$&$16.0455$&$134.567$&$0.884649$&$730.382\\times(1.10\\pm0.19)$&$730.382$&$[-103.066;326.398]$\\\\\n\n \\hline\\hline\n \\end{tabular}\n \\caption{For the star $S2$, orbital parameters~\\cite{GRAVITY:2020gka}, observed orbital precession ($\\Delta\\phi_{obs}$),\n orbital precession as predicted by General Relativity ($\\Delta\\phi_S$), and constraints on $\\Xi$.}\n \\label{tab4}\n\\end{table*}\n\\begin{table*}\n \\centering\n \\begin{tabular}{cccccc}\n \\hline\\hline\n Object & $\\Delta\\phi_S$ & $\\Delta\\phi(\\Xi_{Mercury})$ &$\\Delta\\phi(\\Xi_{Venus})$&$\\Delta\\phi(\\Xi_{Earth})$&$\\Delta\\phi(\\Xi_{Saturn})$\\\\\n \\hline\n $\\textbf{Mars}$&$1.35106$&$[1.34814;1.35577]$&$[1.35102;1.3511]$&$[1.35094;1.35113]$&$[-3.75855;2.5191]$\\\\\n $\\textbf{Jupiter}$&$0.0623142$&$[0.0622752;0.0623773]$&$[0.0623137;0.0623147]$&$[0.0623126;0.0623151]$&$[-0.00607962;0.0779489]$\\\\\n $\\textbf{S2}$&$730.382$&$[-57243.9;94435.7]$&$[-11.7295;1485.75]$&$[-1634.61;2085.15]$&$[-1.01666\\times10^8;2.32414\\times10^7]$\\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{Precession for Mars, Jupiter, and $S2$ (in $''\/$cy for Mars and Jupiter and in $''\/$orbit for $S2$) as predicted by the confidence regions for $\\Xi$ inferred from Mercury, Venus, Earth and Saturn.}\n \\label{tab3}\n\\end{table*}\n\n\\subsection{S-star dynamics}\n\\label{ss:dynamic}\nWe can confirm the bounds on $\\Xi$ deduced from orbital precessions by comparing them with results deduced from the analysis\nof stellar orbits at the Galactic Center.\nThis further analysis consists in comparing simulated orbits in bootstrapped Newtonian gravity, obtained by integrating numerically\nEqs.~\\eqref{eq:ddr}-\\eqref{eq:ddt}, with the observed orbits of three S-stars reconstructed from astrometric observations\n(see Sec.~\\ref{sec:data}).\nIn particular, we focused on stars $S2$, $S38$ and $S55$ for two main reasons:\nthey are among the brightest stars and they have the shortest periods.\nThese properties are desirable because bright stars are less likely to be confused with other sources,\nand a short period allows us to observe a larger part of the orbit in a given observation session.\nFor simplicity, we neglected perturbations from other members of the cluster and any extended matter structures. 
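\\par\nTo make the integration step concrete, the following minimal Python sketch (illustrative only: geometrised units $c=R_g=1$, equatorial initial data which are not those of Table~\\ref{tab1}, and a general-purpose SciPy Runge-Kutta routine in place of the set-up described below) integrates Eqs.~\\eqref{eq:ddr}-\\eqref{eq:ddt} on the plane $\\theta=\\pi\/2$:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\n# Illustrative geometrised units: c = R_g = 1; Xi is the parameter to constrain.\nc, Rg, Xi = 1.0, 1.0, 0.0\n\ndef geodesics(tau, y):\n    # y = (t, r, phi, tdot, rdot, phidot) on the equatorial plane\n    # (theta = pi/2, thetadot = 0), from Eqs. (eq:ddr)-(eq:ddt).\n    t, r, phi, tdot, rdot, phidot = y\n    den = r*(2.0*(9.0 + 14.0*Xi)*Rg**3 + 4.0*(1.0 + Xi)*Rg**2*r\n             + 2.0*Rg*r**2 + r**3)\n    num = (Rg*(4.0*(1.0 + Xi)*Rg*r*rdot**2\n               + Rg**2*(3.0*(9.0 + 14.0*Xi)*rdot**2 - c**2*(5.0 + 6.0*Xi)*tdot**2)\n               + r**2*(rdot**2 - c**2*tdot**2))\n           + r**5*phidot**2)\n    rddot = num/den\n    phiddot = -2.0*phidot*rdot/r\n    tddot = (6.0*rdot*tdot*((5.0 + 6.0*Xi)*Rg**3 + Rg*r**2)\n             /(2.0*(5.0 + 6.0*Xi)*Rg**3*r + 6.0*Rg*r**3 - 3.0*r**4))\n    return [tdot, rdot, phidot, tddot, rddot, phiddot]\n\ndef tdot_init(r, rdot, phidot):\n    # Timelike normalisation of the four-velocity, from the metric in Eq. (eq:g).\n    A = 1.0 - 2.0*Rg/r - (5.0 + 6.0*Xi)*2.0*Rg**3/(3.0*r**3)\n    B = 1.0 + 2.0*Rg/r + (1.0 + Xi)*4.0*Rg**2/r**2 + (9.0 + 14.0*Xi)*2.0*Rg**3/r**3\n    return np.sqrt((1.0 + B*rdot**2 + r**2*phidot**2)/(A*c**2))\n\n# Illustrative apocentre data for a bound orbit far from the strong-field region.\nr0, rdot0, phidot0 = 2.0e3, 0.0, 1.0e-5\ny0 = [0.0, r0, 0.0, tdot_init(r0, rdot0, phidot0), rdot0, phidot0]\nsol = solve_ivp(geodesics, (0.0, 2.0e6), y0, rtol=1e-9, atol=1e-12, max_step=2.0e3)\n\\end{verbatim}\nThe resulting $\\{t(\\tau),r(\\tau),\\phi(\\tau)\\}$ can then be projected onto the observer's sky plane and compared with the data, as described in the following subsections.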
\n\\subsubsection{Astrometric data}\n\\label{sec:data}\nAstrometric data are taken from Ref.~\\cite{Gillessen:2017}~\\footnote{Data are publicly available on the electronic version of\nRef.~\\cite{Gillessen:2017} at the link https:\/\/iopscience.iop.org\/article\/10.3847\/1538-4357\/aa5c41\/meta.}\nand cover $25$ years of observations performed in the near-infrared (NIR), where the interstellar extinction amounts to\nabout three magnitudes.\nDifferent instruments have been used, which we briefly describe below.\n\\begin{enumerate}\n\\item\nSHARP.- The first high-resolution data of the Galactic Center were obtained in $1992$ with the SHARP camera at the European Southern Observatory's\n(ESO) $3.5\\,$m New Technology Telescope (NTT) in Chile, operating in speckle mode with exposure times of $0.3\\,$s, $0.5\\,$s and $1.0\\,$s.\nThe data, described in detail in Ref.~\\cite{Schodel:2003gy}, led to the detection of high proper motion near the central massive object.\n\\item\nNACO.- The first Adaptive Optics (AO) imaging data were produced by the Naos-Conica (NACO) system,\nmounted at the telescope Yepun ($8.0\\,$m) of the VLT, which started operating in $2002$.\nThis led to a great improvement, due to the larger telescope aperture and the higher Strehl ratio (about $40\\%$).\n\\item\nGEMINI.- The dataset includes observations obtained by the $8\\,$m telescope Gemini North on Mauna Kea, Hawaii.\nThese images, obtained using the AO system in combination with the NIR camera Quirc, were processed by the Gemini team.\n\\end{enumerate}\nThe astrometric calibration of these data, treated in Ref.~\\cite{Gillessen:2009ht}, consists in the following steps:\nobtaining high-quality maps of the S-stars, extracting pixel positions, and transforming them to a common astrometric\ncoordinate system.\nIn particular, the astrometric reference frame is constructed by relating the S-star positions to a set of selected reference\nstars, which are in turn related to a set of Silicon Monoxide (SiO) maser stars whose positions relative to Sgr~A* are known.\n\\subsubsection{Fitting procedure}\nThe first step of the fitting procedure is the numerical integration of the system of parametric nonlinear\ndifferential equations~\\eqref{eq:ddr}-\\eqref{eq:ddt} to produce simulated stellar orbits in bootstrapped\nNewtonian gravity.\n\\begin{table*}\n\\begin{centering}\n \\begin{tabular}{cccc}\n \\hline\\hline\nParameter & S2 & S38 & S55 \\\\ \\hline\n$a$ (mas) & $125.058\\pm0.041$ & $141.6\\pm0.2$ & $107.8\\pm1.0$ \\\\\n$\\Omega$ ($^\\circ$) & $228.171\\pm0.031$ & $101.06\\pm0.24$ & $325.5\\pm4.0$ \\\\\n$e$ & $0.884649\\pm0.000066$ & $0.8201\\pm0.0007$ & $0.7209\\pm0.0077$ \\\\\n$i$ ($^\\circ$) & $134.567\\pm0.033$ & $171.1\\pm2.1$ & $150.1\\pm2.2$ \\\\\n$\\omega$ ($^\\circ$) & $66.263\\pm0.031$ & $17.99\\pm0.25$ & $331.5\\pm3.9$ \\\\\n$t_p$ (yr) & $2018.37900\\pm0.00016$ & $2003.19\\pm0.01$ & $2009.34\\pm0.04$ \\\\\n$T$ (yr) & $16.0455\\pm0.0013$ & $19.2\\pm0.02$ & $12.80\\pm0.11$ \\\\ \n$m_K$ &\n13.95 &\n17.0 &\n17.5 \\\\\nRef. &\n\\cite{GRAVITY:2020gka} &\n\\cite{Gillessen:2017} &\n\\cite{Gillessen:2017} \\\\\n\\hline\\hline\n \\end{tabular}\n \\caption{Orbital parameters of $S2$, $S38$, and $S55$:\n semimajor axis $a$, eccentricity $e$, inclination $i$, angle of the line of nodes $\\Omega$,\n angle from ascending node to pericenter $\\omega$, orbital period $T$, the time of the\n pericenter passage $t_p$, and the apparent $K$-band magnitude $m_K$. 
}\n \\label{tab1}\n \\end{centering}\n\\end{table*}\n\\begin{table*}\n\\begin{centering}\n \\begin{tabular}{cccc}\n \\hline\\hline\n Star&$M(M_{\\odot})$&$R(kpc)$&Ref.\\\\\n \\hline\n S2&$(4.261\\pm0.012)\\times10^6$&$8.2467\\pm0.0093$&GRAVITY Collaboration \\cite{GRAVITY:2020gka} \\\\\n S38&$(4.35\\pm0.13)\\times10^6$&$8.33\\pm0.12$&Gillessen et al. \\cite{Gillessen:2017}\\\\\n S55&$(4.35\\pm0.13)\\times10^6$&$8.33\\pm0.12$&Gillessen et al. \\cite{Gillessen:2017}\\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{Parameters of the central BH: the mass $M$ and the distance $R$.} \\label{tab2}\n \\end{centering}\n\\end{table*}\n\\par\nPreliminarily, we fix the Keplerian elements and the parameters of the central mass to the values reported\nin Tables~\\ref{tab1} and \\ref{tab2}.\nIn particular, for the study of $S2$, we used the values obtained by the GRAVITY Collaboration~\\cite{GRAVITY:2020gka},\nand for $S38$ and $S55$, we used those obtained in Ref.~\\cite{Gillessen:2017}.\nIn order to have a well-defined Cauchy problem, we must provide initial conditions for the four-dimensional coordinates\nand their derivatives with respect to the proper time: \\{$r(0)$, $\\dot{r}(0)$, $\\theta(0)$, $\\dot{\\theta}(0)$, $\\phi(0)$,\n$\\dot{\\phi}(0)$, $t(0)$, $\\dot{t}(0)$\\}. \nWe assume that the star initially lies on the equatorial plane of the reference system, for which $\\theta(0)=\\pi\/2$,\nand that its initial velocity is parallel to the equatorial plane, that is $\\dot{\\theta}(0)=0$.\nIt then follows that $\\ddot{\\theta}(0)=0$ identically.\nIn particular, we set the initial conditions for $r$ and $\\phi$ at the time of passage of the apocenter,\nwhen the Cartesian coordinates of the star expressed in the orbital plane are given by\n\\begin{equation}\n(x_{orb},y_{orb})\n=\n\\left(-a\\,(1+e),0\\right)\n\\end{equation}\nand the Cartesian components of its velocity read\n\\begin{equation}\n(v_{x,orb},v_{y,orb})\n=\n\\left(0,\\dfrac{2\\,\\pi\\,a^2}{T\\,r}\\,\\sqrt{1-e^2}\\right)\n\\ .\n\\end{equation}\nThe initial condition for $\\dot{t}$ can be retrieved from the normalization of four-velocities requiring\nthat the geodesic is timelike.\n\\par\nStarting from the initial conditions of each star, we proceed with an explicit Runge-Kutta numerical integration\nof the relativistic equations of motion.\nThe result is the star's mock orbit in the orbital plane, described by the four-dimensional array\n$\\{t(\\tau),r(\\tau),\\theta(\\tau),\\phi(\\tau)\\}$.\nTo compare the theoretical orbits with those observed from the Earth, we must project any point\n$(x_{\\rm orb}, y_{\\rm orb})$ on the orbital plane into the point $(x, y)$ on the observer's sky plane.\nSuch a transformation is realized by applying the Thiele-Innes formulas~\\cite{1930MNRAS,aitken}:\n\\begin{align}\n x&=l_1\\,x_{\\rm orb}+l_2\\,y_{\\rm orb}\n \\\\\n y&=m_1\\,x_{\\rm orb}+m_2\\,y_{\\rm orb}\n \\ .\n\\end{align}\nThe Thiele-Innes elements $l_1$, $l_2$, $m_1$ and $m_2$ depend on the Keplerian elements according to\n\\begin{align}\n l_1\n &=\n \\cos\\Omega\\,\\cos\\omega-\\sin\\Omega\\, \\sin\\omega\\, \\cos i\n \\\\\n l_2\n &=\n -\\cos \\Omega\\, \\sin\\omega-\\sin\\Omega\\, \\cos\\omega \\cos i\n \\\\\n m_1\n &=\n \\sin\\Omega\\, \\cos\\omega+\\cos\\Omega\\, \\sin\\omega \\cos i\n \\\\\n m_2\n &=\n -\\sin\\Omega\\,\\sin\\omega+\\cos\\Omega\\, \\cos\\omega\\, \\cos i\n .\n \\end{align}\n\\par \nThe second step consists in the fitting procedure itself, and aims to constrain the parameter $\\Xi$.\nGuided by the results obtained 
from the precession in Sec.~\\ref{ss:precession}, we let it vary freely in an\nappropriate range including the value~\\eqref{eq:c2}.\nFor each value of $\\Xi$ we repeated the aforementioned procedure to get the true positions $(x_i,y_i)$\nand velocities $(\\dot{x}_i,\\dot{y}_i)$ of the stars at all the observed epochs.\nAfter transforming the true positions into the apparent positions $(x_i^{th},y_i^{th})$, we computed the\nreduced $\\chi^2$ statistic to quantify the discrepancy between theory and observations as\n\\begin{equation}\n \\chi^2_{\\rm red}\n =\n \\frac{1}{2\\,N-1}\\,\n \\sum_{i=1}^N\\left[\\left(\\frac{x_i^{\\rm obs}-x_i^{\\rm th}}{\\sigma_{x_i^{\\rm obs}}}\\right)^2\n +\\left(\\frac{y_i^{\\rm obs}-y_i^{\\rm th}}{\\sigma_{y_i^{\\rm obs}}}\\right)^2\n \\right]\n \\ ,\n\\end{equation}\nwhere $(x_i^{\\rm obs},y_i^{\\rm obs})$ and $(x_i^{\\rm th},y_i^{\\rm th})$ are respectively the observed and\nthe predicted positions, $N$ is the number of observations and $(\\sigma_{x_i^{\\rm obs}},\\sigma_{y_i^{\\rm obs}})$\nare the observational uncertainties.\nFinally, we computed the likelihood through $2\\,\\log\\mathcal{L}=-\\chi^2_{\\rm red}(\\Xi)$.\nThe best-fit value for $\\Xi$ was derived as the point that maximizes the likelihood.\n\\subsubsection{Results}\n\\begin{table}\n \\centering\n \\begin{tabular}{cc}\n \\hline\\hline\n Star&$\\Xi$\\\\\n \\hline\n $S2$&$-5900_{-44964.9}^{+39358.8}$\\\\\n $S38$&$25500_{-23312.88}^{+22607.1}$\\\\\n $S55$&$60400_{-87446.9}^{+81386}$\\\\\n Multi-star&$17400_{-32244.3}^{+30555.6}$\\\\\n \\hline\\hline\n \\end{tabular}\n \\caption{Best-fit values for $\\Xi$.}\n \\label{tab5}\n\\end{table}\n\\begin{figure*}[!ht]\n \\centering\n \\includegraphics[scale=0.45]{Fig3.pdf}\n \\caption{Comparison between the NTT\/VLT astrometric observations, with their errors (black circles),\n and the theoretical best-fit orbits using parameters reported in the first three rows of Table~\\ref{tab5}.\n The results for $S2$, $S55$ and $S38$ are illustrated respectively in the top left, top right, and bottom panels.}\n \\label{fig:fig3}\n\\end{figure*}\n\\begin{figure*}[!ht]\n \\centering\n \\includegraphics[scale=0.6]{Fig4.pdf}\n \\caption{Top panels show the comparison between the observed and fitted coordinates, and\n bottom panels show the corresponding (O-C) residuals for $S2$, $S38$ and $S55$.}\n \\label{fig:fig4}\n\\end{figure*}\n\\begin{figure*}[!ht]\n \\centering\n \\includegraphics[scale=0.7]{Fig5.pdf}\n \\caption{Best relativistic multistar orbit fit of $S2$, $S38$ and $S55$.}\n \\label{fig:fig5}\n\\end{figure*}\nOur results are summarized in Table~\\ref{tab5} and represented in Figs.~\\ref{fig:fig3}, \\ref{fig:fig4} and \\ref{fig:fig5}.\n\\par\nIn Fig.~\\ref{fig:fig3} we show the comparison between the best-fit and observed orbits of the selected stars:\nthe top left panel, the top right panel, and the bottom panel illustrate the results respectively for $S2$, $S55$ and $S38$.\nAstrometric data are reported with their own error bars to show the effectiveness of our fitting procedure.\n\\par\nFigure~\\ref{fig:fig4} depicts the comparisons between the observed and simulated coordinates with the corresponding residuals.\nThe left column contains the right ascension (RA), while the right column reports the declination (Dec).\nIt is worth noticing that for all stars and for both coordinates, the residuals are larger at the earliest observing epochs,\nand decrease as astrometric accuracy improves.\n\\par\nFinally, we show in Fig.~\\ref{fig:fig5} 
the orbits of the studied S-stars corresponding to the best multistar fit for\n$\\Xi=17400_{-32244.3}^{+30555.6}$ (last row of Table~\\ref{tab5}).\nAs expected, the parameter $\\Xi$ is compatible with the mean value~\\eqref{eq:c2} such that the bootstrapped\nNewtonian precession recovers General Relativity.\n\\section{Conclusions}\n\\label{s:conc}\n\\setcounter{equation}{0}\nIn this paper we performed astronomical tests of bootstrapped Newtonian gravity.\nThe starting point is the complete spacetime metric~\\eqref{eq:g} derived in Ref.~\\cite{Casadio:2021gdf}.\nThe leading order deviation from the Schwarzschild solution cannot be eliminated and is encoded in the free parameter\n$\\Xi$, which is not \\textit{a priori} known and must be constrained by observations.\n\\par\nFirst, we showed that bounds on $\\Xi$ can be deduced from the comparison between the measurements\nof the orbital precession of Solar System bodies and the theoretical predictions arising from the bootstrapped\nNewtonian metric computed in Ref.~\\cite{Casadio:2021gdf}.\nThe inferred confidence region for $\\Xi$ for each planet is reported in Table~\\ref{tab:tab1} and graphically\ndepicted in Fig.~\\ref{fig:fig1}.\nBased on the tightest interval obtained with Venus, we found that $\\Xi$ lies in the range $[-1149.67\\,;\\,+1167.47]$.\nWith these values of the parameter $\\Xi$ we predicted the orbital precession for Uranus, Neptune and Pluto,\nand we found a theoretical precession in excellent agreement with the General Relativistic value.\nSuch a compatibility was confirmed by turning our attention to the Galactic Center and repeating the same\nanalysis for the star $S2$~\\cite{GRAVITY:2020gka}.\nThe mean value of the parameter $\\Xi$ such that the bootstrapped Newtonian precession equals the Schwarzschild\nvalue is \n\\begin{equation}\n\\Xi=-1.64236\\pm0.10305\n\\ .\n\\end{equation}\n\\par\nWe next focused on the Galactic Center scale to constrain $\\Xi$ by investigating the orbital motion of S-stars.\nWe used a fully relativistic approach based on an agnostic method:\nfor each value of $\\Xi$, we solved the geodesic equations numerically starting from initial conditions at the\napocenter.\nAfter applying the Thiele-Innes formulas to the mock positions, we were able to compare the resulting solution\nwith the observed stellar orbits.\nFinally, we quantified the discrepancy between the simulated and observed orbits by performing a $\\chi^2$ analysis.\nThe inferred confidence region for $\\Xi$ is compatible with the bounds obtained by the precession analysis,\nand thus with General Relativity.\nIndeed, we found $\\Xi=17400_{-32244.3}^{+30555.6}$.\nSince the S-stars orbit at distances $r>1000\\, R_g$ from the source, strong-field effects are not relevant,\nand such a result was expected. 
\n\\par\nThe proposed approach is completely general and represents a useful tool in the classification of extended\ntheories of gravity.\nMoreover, this approach has already been used to test a Yukawa-like gravitational potential by means of dynamical tests at the\nGalactic Center~\\cite{yuk1,yuk2,yuk3,DellaMonica2021}, where no significant deviations from General Relativity were found.\nNevertheless, the definitive confirmation\/exclusion of a given extended theory of gravity requires the improvement\nof the constraints on its free parameters based on the observation of various strong-field effects.\nThis task can be accomplished by taking advantage of the increasingly accurate observations of second-generation\ninstruments like GRAVITY~\\cite{Gillessen:2010ei}.\n\\par\nIn particular, we focus on finding stars with short semimajor axes and highly eccentric orbits within the\npericenter of $S2$.\nThe existence of such a population of stars can be inferred from the recent discovery of the sources $S62$,\n$S4711$ and $S4714$~\\cite{peissker2020,peissker20202}.\nObserving stars at smaller radii is essential to detect strong-field effects, which are no longer negligible\nfor pericenter distances $r\\simeq 10\\,R_g$, and therefore any deviations from General Relativity,\nin order to pin down the underlying gravitational theory.\n\n\n\n\n\\begin{acknowledgments}\nR.C.~is partially supported by the INFN grant FLAG. M.D.L. and A.D. acknowledge INFN Sez. di Napoli (Iniziativa Specifica TEONGRAV). \nA.G.~is supported by the European Union's Horizon 2020 research and innovation programme under the \nMarie Sk\\l{}odowska-Curie Actions (grant agreement No.~895648--CosmoDEC).\nThe work of R.C.~and A.G.~has also been carried out in the framework\nof activities of the National Group of Mathematical Physics (GNFM, INdAM). \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA Borel set $E$ in an Abelian Polish group $X$ is said to be \\emph{Haar null} if there is a Borel probability measure $\\mu$ on $X$ such that $\\mu(x+E)=0$ for every $x\\in X$. Haar null sets were introduced for the first time by J.P.R.\\ Christensen in \\cite{chris_1972} in order to extend the notion of sets with zero Haar measure to nonlocally compact Polish groups, where the Haar measure is not defined. Indeed, Haar null sets and sets with zero Haar measure agree on locally compact Abelian Polish groups and, as in the locally compact case, Haar null sets form a $\\sigma$-ideal in the Borel $\\sigma$-algebra of $X$. We say that a Borel set $E\\subseteq X$ is \\emph{Haar positive} if $E$ is not Haar null. The reader is invited to have a look at the survey papers \\cite{bog_2018} and \\cite{en_2020}, as well as at \\cite{benlin}, Chapter 6, for a detailed exposition of this topic.\n\nAlthough the definition of Haar positive sets is measure-theoretical in nature, in \\cite{daverava2} a measure-free characterisation of Haar positive closed, convex sets in separable Banach spaces is provided.\n\n\\begin{theo}\n\\label{theo:intro}\nLet $C$ be a closed and convex set in a separable Banach space $X$ with unit ball $B_X$. 
The following assertions are equivalent.\n\\begin{enumerate}\n\\item $C$ is Haar positive.\n\\item There is $r>0$ with the property that, for every compact set $K\\subseteq rB_X$, there is $x\\in X$ such that $x+K\\subseteq C$.\n\\end{enumerate}\n\\end{theo} \n\\noindent Equivalently, a closed, convex set $C$ in a separable Banach space $X$ is Haar null if and only if $C$ is \\emph{Haar meagre}. That is, there is a compact metric space $M$ and a continuous function $\\func{f}{M}{X}$ such that $f^{-1}(x+C)$ is meagre in $M$ for every $x\\in X$. For a more general treatment of Haar meagre sets, we refer the reader to \\cite{dar_2013}, where they were first introduced.\n\nTheorem \\ref{theo:intro} motivates the introduction of several geometric radii associated to such sets. These are defined in Section \\ref{sec:radii} and multiple properties about these quantities are shown. In Section \\ref{sec:weaks} we exploit these radii to prove a new characterisation of Haar positive, closed, convex sets. Namely, a closed, convex set $C$ in a separable Banach space $X$ is Haar positive if and only if its weak$^*$ closure in the second dual $X^{**}$ has nonempty interior with respect to the norm topology. This improves the well-known fact that closed, convex subsets of a Euclidean space $\\mathbb{R}^d$ have positive Lebesgue measure if and only if their interior is nonempty and a theorem of Eva Matou\\v{s}kov\\'{a} (\\cite{eva3}) which states that, in separable, reflexive Banach spaces, closed convex sets are Haar positive if and only if they have nonempty interior. As a corollary, it is shown in Section \\ref{sec:top} that the family of Haar positive, closed, convex and bounded sets is open in the space of all nonempty, closed, convex and bounded subsets of $X$, endowed with the Hausdorff distance.\n\nThe standard notation of Banach space theory is used throughout the paper. Given a Banach space $X$, $B_X$ and $S_X$ stand for the closed unit ball and the unit sphere of $X$ respectively. $X^*$ denotes the dual of $X$, whereas $X^{**}$ is the second dual. The closure of a set $E\\subseteq X$ is denoted by $\\textup{cl}(E)$ and in a dual space we denote by $\\textup{w}^*\\textup{-cl}(E)$ the closure of $E$ in the weak$^*$ topology. We use the notation $\\mathcal{C}(X)$ for the space of all nonempty, closed, convex and bounded subsets of $X$. This turns into a complete metric space if endowed with the Hausdorff distance $d_\\textup{H}$ given by\n\\[ d_\\textup{H}(C,D)=\\inf\\{\\epsilon>0\\,:\\,C\\subseteq\\textup{cl}(D+\\epsilon B_X) \\text{ and }D\\subseteq\\textup{cl}(C+\\epsilon B_X)\\}. \\]\nAll Banach spaces are assumed to be real.\n\n\\section{Geometric radii of closed, convex and bounded sets} \n\\label{sec:radii}\nGiven $\\rho>0$ and a nonempty, closed, convex set $C$ in a Banach space $X$, we denote by $\\textup{ir}(C,\\rho)$ the $\\rho\\,$-\\emph{inner radius} of $C$. That is, the supremum of all $r\\geq 0$ such that $C$ contains a closed ball of the form $x+rB_X$, where $x\\in\\rho B_X$. Clearly, $C$ has nonempty interior if and only if $\\textup{ir}(C,\\rho)>0$ for some $\\rho>0$. The $\\rho\\,$-\\emph{compact radius} $\\textup{kr}(C,\\rho)$ is defined as follows: it is the supremum of all $r\\geq 0$ with the property that, for every compact set $K\\subseteq rB_X$, there is $x\\in\\rho B_X$ such that $x+K\\subseteq C$. The $\\rho\\,$-\\emph{finite radius} $\\textup{fr}(C,\\rho)$ is defined similarly. 
Namely, it is the supremum of all $r\\geq 0$ with the property that, for every finite set $F\\subseteq rB_X$, there is $x\\in\\rho B_X$ such that $x+F\\subseteq C$. Finally, we introduce the $\\rho\\,$-\\emph{loose radius} $\\textup{lr}(C,\\rho)$ as follows: it is the supremum of all $r\\geq 0$ with the property that, for every finite set $F=\\{x_1,\\dots,x_n\\}\\subseteq rB_X$ and every $\\epsilon>0$, there are a finite set $G=\\{y_1,\\dots,y_n\\}$ and $z\\in\\rho B_X$ such that $\\norm{x_j-y_j}{X}<\\epsilon$ for every $j\\in\\{1,\\dots,n\\}$ and $z+G\\subseteq C$.\n\n\\begin{theo}\n\\label{theo:radii}\nLet $C$ be a nonempty, closed, convex set in a Banach space $X$. The following inequalities hold for every $\\rho>0$.\n\\[ \\textup{ir}(C,\\rho)\\leq\\textup{kr}(C,\\rho)\\leq\\textup{fr}(C,\\rho)\\leq\\textup{lr}(C,\\rho)\\leq 2\\textup{kr}(C,\\rho). \\]\n\\end{theo}\n\\begin{proof}\nThe only nontrivial inequality is the last one and we will prove it using a variation of the Banach-Dieudonn\\'{e} Theorem, similar to the one which appears in \\cite{eva1}. If $\\textup{lr}(C,\\rho)=0$, the claim is obvious. Assume that $\\textup{lr}(C,\\rho)>0$. We aim to show that, for every $r\\geq 0$ such that $2r<\\textup{lr}(C,\\rho)$, every compact set $K\\subseteq rB_X$ can be translated into $C$ via some $z\\in\\rho B_X$. Let $K$ be a compact subset of $rB_X$ and find a finite set $F_1=\\{x_{1,1},\\dots,x_{1,k(1)}\\}\\subseteq 2rB_X$ such that $2^{-1}F_1$ is an $(r\/4)$-net for $K$. Find a further finite set $G_1=\\{y_{1,1},\\dots,y_{1,k(1)}\\}$ and $z_1\\in\\rho B_X$ such that $\\norm{y_{1,j}-x_{1,j}}{X}0$ such that $\\textup{kr}(C,\\rho)>0$. It follows in particular that $C$ is Haar positive if and only if $\\textup{lr}(C,\\rho)>0$ for some $\\rho>0$.\n\\end{lemma}\n\\begin{proof}\nIt is a direct consequence of Theorem \\ref{theo:intro} that $\\textup{kr}(C,\\rho)=0$ for every $\\rho>0$ if $C$ is Haar null. To show the converse statement, assume by way of contradiction that, for every positive integer $n$, there is a compact set $K_n\\subseteq n^{-1}B_X$ such that, whenever $x+K_n\\subseteq C$ for some $x\\in X$, we have $\\norm{x}{X}>n$. By Theorem \\ref{theo:intro}, there is $r>0$ such that every compact subset of $rB_X$ can be translated into $C$. In particular, this holds for\n\\[ K=\\{0\\}\\cup\\bigcup_{n>r^{-1}}K_n. \\]\nIf $x$ is such that $x+K\\subseteq C$, then $x+K_n\\subseteq C$ for every $n>r^{-1}$. It then follows by the choice of $K_n$ that $\\norm{x}{X}>n$ for every positive integer $n$, which does not make any sense. The last assertion is a consequence of Theorem \\ref{theo:radii}.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:lr-fr}\nGiven $\\rho>0$ and a nonempty, closed, convex set $C$ in a Banach space $X$, we have\n\\[ \\textup{lr}(C,\\rho)=\\lim_{\\delta\\to 0^+}\\textup{fr}\\bigl(\\textup{cl}(C+\\delta B_X),\\rho\\bigr). \\]\n\\end{lemma}\n\\begin{proof}\nSet $r=\\textup{lr}(C,\\rho)$. Pick $\\epsilon>0$ and find $F=\\{x_1,\\dots,x_n\\}\\subset(r+\\epsilon)B_X$ and $\\delta_0>0$ such that every finite set $G=\\{y_1,\\dots,y_n\\}$ which fulfills the condition $\\norm{y_j-x_j}{X}<2\\delta_0$ for every $j\\in\\{1,\\dots,n\\}$ cannot be translated into $C$ via some $z\\in\\rho B_X$. Suppose that there is $z\\in\\rho B_X$ such that $z+F\\subset\\textup{cl}(C+\\delta_0B_X)$. This would imply that, for every $j\\in\\{1,\\dots,n\\}$, we can find $w_j\\in C$ such that $\\norm{z+x_j-w_j}{X}<2\\delta_0$. Put $y_j=w_j-z$ for every $j$. 
Then the set $G=\\{y_1,\\dots,y_n\\}$ satisfies $z+G\\subseteq C$ and $\\norm{x_j-y_j}{X}<2\\delta_0$ for every $j$, in contradiction with the choice of $F$ and $\\delta_0$. Thus, $F$ cannot be translated into $\\textup{cl}(C+\\delta_0B_X)$ via some $z\\in\\rho B_X$, which yields\n\\[ \\lim_{\\delta\\to 0^+}\\textup{fr}\\bigl(\\textup{cl}(C+\\delta B_X),\\rho\\bigr)\\leq\\textup{fr}\\bigl(\\textup{cl}(C+\\delta_0B_X),\\rho\\bigr)\\leq r+\\epsilon. \\]\nSince $\\epsilon$ is arbitrary, we get\n\\[ \\lim_{\\delta\\to 0^+}\\textup{fr}\\bigl(\\textup{cl}(C+\\delta B_X),\\rho\\bigr)\\leq r. \\]\nTo show the opposite inequality, observe that it is obvious if $r=0$. Assume that $r>0$, choose $\\delta\\in(0,r)$ and pick a finite set $F=\\{x_1,\\dots,x_n\\}\\subset(r-\\delta)B_X$. Find $G=\\{y_1,\\dots,y_n\\}$ and $z\\in\\rho B_X$ such that $z+G\\subseteq C$ and $\\norm{x_j-y_j}{X}<\\delta$ for each $j$. Notice that\n\\[ z+x_j=z+y_j+x_j-y_j\\in C+\\delta B_X. \\]\nfor each $j$, hence $z+F\\subseteq\\textup{cl}(C+\\delta B_X)$. Since $\\delta>0$ and $F\\subset (r-\\delta)B_X$ are arbitrary, we conclude that\n\\[ \\lim_{\\delta\\to 0^+}\\textup{fr}\\bigl(\\textup{cl}(C+\\delta B_X),\\rho\\bigr)\\geq\\lim_{\\delta\\to 0^+}(r-\\delta)=r, \\]\nas wished.\n\\end{proof}\n\n\\section{Weak$^*$ closures of Haar positive closed, convex sets}\n\\label{sec:weaks}\nGiven a Banach space $X$ and a positive integer $n$, recall that we can endow the Banach space $X^n$, the product of $n$ copies of $X$, with the $\\infty$-product norm:\n\\[ \\norm{x}{X^n}=\\max_{1\\leq j\\leq n}\\norm{x_j}{X} \\]\nfor every $x=(x_1,\\dots,x_n)\\in X^n$. In this way we have $B_{X^n}={(B_X)}^n$. We denote by $\\func{\\Delta}{X}{X^n}$ the diagonal embedding $x\\mapsto(x,\\dots,x)$. The second dual space of $X^n$ is simply ${(X^{**})}^n$, endowed with the same $\\infty$-product norm. Notice that, in ${(X^{**})}^n$, we have $\\textup{w}^*\\textup{-cl}(\\Delta(\\rho B_X))=\\Delta(\\rho B_{X^{**}})$. We are now ready to state our main theorem.\n\\begin{theo}\n\\label{theo:hpws}\nLet $\\rho>0$, let $C$ be a nonempty, closed, convex set in a Banach space $X$ and let $r\\geq 0$. The following assertions are equivalent.\n\\begin{enumerate}\n\\item $\\textup{lr}(C,\\rho)\\geq r$.\n\\item For every finite $F\\subseteq rB_X$ there is $z\\in\\rho B_{X^{**}}$ such that $z+F\\subseteq\\textup{w}^*\\textup{-cl}(C)$.\n\\item There is $z_0\\in\\rho B_{X^{**}}$ such that $z_0+rB_{X^{**}}\\subseteq\\textup{w}^*\\textup{-cl}(C)$.\n\\end{enumerate}\nIn particular, $\\textup{lr}(C,\\rho)=\\textup{ir}(\\textup{w}^*\\textup{-cl}(C),\\rho)$ and, in case $X$ is separable, $C$ is Haar positive if and only if $\\textup{w}^*\\textup{-cl}(C)$ has nonempty interior in the norm topology of $X^{**}$.\n\\end{theo}\n\\begin{proof}\n(1)$\\Rightarrow$(2). Let $F=\\{x_1,\\dots,x_n\\}\\subseteq rB_X$ be a finite set and, for each positive integer $k$, find $G_k=\\{y_{k,1},\\dots,y_{k,n}\\}$ and $z_k\\in\\rho B_X$ such that $\\norm{x_{k,j}-y_{k,j}}{X}t>f(x)$ for every $x\\in\\textup{w}^*\\textup{-cl}(C)-z_0$. Define\n\\[ s=\\frac{f(x_0)-t}{2} \\]\nand $V=\\{x\\in X^{**}\\,:\\,|f(x)|f(x_0)-s=\\frac{f(x_0)+t}{2}. \\]\nAt the same time though, we have $z_{\\psi(\\gamma)}+y_0-z_0\\in\\textup{w}^*\\textup{-cl}(C)-z_0$, therefore $|f(z_{\\psi(\\gamma)})-f(z_0)+f(y_0)|0$ such that $\\textup{fr}(\\textup{cl}(C+\\delta B_X),\\rho)t_2$ for every $x\\in D$. 
By taking the weak$^*$ closures of both sets in ${(X^n)}^{**}$, we have $f(x)\\leq t_1$ for every $x\\in\\Delta(\\rho B_{X^{**}})$ and $f(x)\\geq t_2$ for every $x\\in\\textup{w}^*\\textup{-cl}(D)$, thus $\\Delta(\\rho B_{X^{**}})\\cap\\textup{w}^*\\textup{-cl}(D)=\\varnothing$. This is a contradiction, because\n\\[ z_0\\in\\bigcap_{j=1}^n\\bigl(\\textup{w}^*\\textup{-cl}(C)-x_j\\bigr), \\]\ni.e.\\ $(z_0,\\dots,z_0)\\in(\\textup{w}^*\\textup{-cl}(C)-x_1)\\times\\cdots\\times(\\textup{w}^*\\textup{-cl}(C)-x_n)=\\textup{w}^*\\textup{-cl}(D)$.\n\nThe last statement is a consequence of Lemma \\ref{lemma:kr}.\n\\end{proof}\n\nTheorem \\ref{theo:hpws} offers an interesting connection with the theory of weak$^*$ derived sets. If $X$ is a Banach space and $E$ is a subset of $X^*$, the first weak$^*$ derived set of $E$ is given by\n\\[ E^{(1)}=\\bigcup_{n=1}^\\infty\\textup{w}^*\\textup{-cl}(E\\cap nB_{X^*}) \\]\nand corresponds to the set of all possible limits of bounded, weak$^*$ convergent nets with elements in $E$. Clearly $E^{(1)}\\subseteq\\textup{w}^*\\textup{-cl}(E)$. We refer the reader to \\cite{ostr_2023} and the references therein for a detailed account on weak$^*$ derived sets. In particular, it has to be remarked that, in general, $E^{(1)}$ and $\\textup{w}^*\\textup{-cl}(E)$ can be different. However, we have the following result.\n\n\\begin{cor}\nLet $C$ be a closed, convex set in a Banach space $X$. In the second dual $X^{**}$, $C^{(1)}$ has empty interior in the norm topology if and only if $\\textup{w}^*\\textup{-cl}(C)$ does.\n\\end{cor}\n\\begin{proof}\nOne of the implications follows from $C^{(1)}\\subseteq\\textup{w}^*\\textup{-cl}(C)$. Conversely, assume that $\\textup{w}^*\\textup{-cl}(C)$ has nonempty interior in the norm topology. Then, by Theorem \\ref{theo:hpws} and Theorem \\ref{theo:radii}, there are $\\rho>0$ and $r>0$ such that $\\textup{kr}(C,\\rho)\\geq r$. Now, it is not hard to see that $\\textup{kr}(C\\cap nB_X,\\rho)\\geq r$ for every positive integer $n>\\rho+r$, hence $\\textup{w}^*\\textup{-cl}(C\\cap n B_X)$ has nonempty interior in the norm topology for every $n>\\rho+r$ and so does $C^{(1)}$.\n\\end{proof}\n\n\\section{Haar positive sets and the Hausdorff metric}\n\\label{sec:top}\n\nLet $C$ be a closed and convex set in a Banach space $X$. Under the additional assumption that $C$ is bounded we define \n\\begin{align*} \n\\textup{bir}(C)=\\sup_{\\rho>0}\\textup{ir}(C,\\rho),&\\quad\\textup{bkr}(C)=\\sup_{\\rho>0}\\textup{kr}(C,\\rho),\\\\\n\\textup{bfr}(C)=\\sup_{\\rho>0}\\textup{fr}(C,\\rho),&\\quad\\textup{blr}(C)=\\sup_{\\rho>0}\\textup{lr}(C,\\rho). \n\\end{align*}\nAll these values are finite, as $\\textup{diam}(C)$ is an upper estimate for all of them. Moreover, the chain of inequalities \n\\begin{equation} \n\\label{eq:radii}\n\\textup{bir}(C)\\leq\\textup{bkr}(C)\\leq\\textup{bfr}(C)\\leq\\textup{blr}(C)\\leq 2\\textup{bkr}(C)\n\\end{equation}\nis a direct consequence of Theorem \\ref{theo:radii}. We call $\\textup{bir}(C)$ the \\emph{bounded inner radius} of $C$. It is the supremum of all $r\\geq 0$ such that $C$ contains a closed ball of radius $r$. $\\textup{bkr}(C)$ is the \\emph{bounded compact radius} of $C$ and corresponds to the supremum of all $r\\geq 0$ with the property that, for every compact set $K\\subseteq rB_X$, there is $x\\in X$ such that $x+K\\subseteq C$. 
The \\emph{bounded finite radius} $\\textup{bfr}(C)$ is, similarly, the supremum of all $r\\geq 0$ with the property that, for every finite set $F\\subseteq rB_X$, there is $x\\in X$ such that $x+F\\subseteq C$. Finally, the \\emph{bounded loose radius} $\\textup{blr}(C)$ is the supremum of all $r\\geq 0$ with the property that, for every finite set $F=\\{x_1,\\dots,x_n\\}\\subseteq rB_X$ and every $\\epsilon>0$, there are a finite set $G=\\{y_1,\\dots,y_n\\}$ and $z\\in X$ such that $\\norm{x_j-y_j}{X}<\\epsilon$ for every $j$ and $z+G\\subseteq C$. In case $X$ is separable, $C$ is Haar positive if and only if $\\textup{blr}(C)>0$. This follows from ($\\ref{eq:radii}$) and Theorem \\ref{theo:intro}.\n\nTheorem \\ref{theo:hpws} allows to prove that the family $\\mathcal{H}^+(X)$ of Haar positive, closed, convex and bounded subsets of a separable Banach space $X$ is an open subset of $\\mathcal{C}(X)$. This result follows from a few lemmas we are going to show.\n\n\\begin{lemma}\n\\label{lemma:hpws}\nGiven a closed, convex and bounded set $C$ in a Banach space $X$, we have $\\textup{blr}(C)=\\textup{bir}(\\textup{w}^*\\textup{-cl}(C))$.\n\\end{lemma}\n\\begin{proof}\nUsing Theorem \\ref{theo:hpws}, we have\n\\[ \\textup{blr}(C)=\\sup_{\\rho>0}\\textup{lr}(C,\\rho)=\\sup_{\\rho>0}\\textup{ir}\\bigl(\\textup{w}^*\\textup{-cl}(C),\\rho\\bigl)=\\textup{bir}\\bigl(\\textup{w}^*\\textup{-cl}(C)\\bigr). \\qedhere \\]\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:wscliso}\nGiven a Banach space $X$, the function $\\func{\\textup{w}^*\\textup{-cl}}{\\mathcal{C}(X)}{\\mathcal{C}(X^{**})}$ is an isometry.\n\\end{lemma}\n\\begin{proof}\nTake $\\epsilon>0$ and let $C,D\\in\\mathcal{C}(X)$ be such that $d_\\textup{H}(C,D)\\leq\\epsilon$. This means that $C\\subseteq\\textup{cl}(D+\\epsilon B_X)$ and $D\\subseteq\\textup{cl}(C+\\epsilon B_X)$. Take $x\\in\\textup{w}^*\\textup{-cl}(C)$ and let ${(x_\\alpha)}_{\\alpha\\in I}$ be a net in $C$ whose weak$^*$ limit is $x$. Choose $\\delta>0$ and, for each $\\alpha\\in I$, find $y_\\alpha\\in D$ and $z_\\alpha\\in\\epsilon B_X$ such that $\\norm{x_\\alpha-(y_\\alpha+z_\\alpha)}{X}\\leq\\delta$. By considering a subnet if necessary, we can assume that the nets ${(y_\\alpha)}_{\\alpha\\in I}$ and ${(z_\\alpha)}_{\\alpha\\in I}$ have weak$^*$ limits $y\\in\\textup{w}^*\\textup{-cl}(D)$ and $z\\in\\epsilon B_{X^{**}}$ respectively. Moreover, \n\\[ \\norm{x-(y+z)}{X^{**}}\\leq\\liminf_{\\alpha\\in I}\\norm{x_\\alpha-(y_\\alpha+z_\\alpha)}{X}\\leq\\delta, \\]\ni.e.\\ $x\\in\\textup{w}^*\\textup{-cl}(D)+(\\epsilon+\\delta)B_{X^{**}}$. Since $\\delta$ is arbitrary, we have\n\\[ x\\in\\bigcap_{\\delta>0}\\bigl(\\textup{w}^*\\textup{-cl}(D)+(\\epsilon+\\delta)B_{X^{**}}\\bigr)=\\textup{cl}\\bigl(\\textup{w}^*\\textup{-cl}(D)+\\epsilon B_{X^{**}}\\bigr). \\]\nAs $x$ is also arbitrary, we conclude that $\\textup{w}^*\\textup{-cl}(C)\\subseteq\\textup{cl}(\\textup{w}^*\\textup{-cl}(D)+\\epsilon B_{X^{**}})$. The inclusion $\\textup{w}^*\\textup{-cl}(D)\\subseteq\\textup{cl}(\\textup{w}^*\\textup{-cl}(C)+\\epsilon B_{X^{**}})$ is shown similarly, hence we get that $d_\\textup{H}(\\textup{w}^*\\textup{-cl}(C),\\textup{w}^*\\textup{-cl}(D))\\leq\\epsilon$. \n\nConversely, suppose that $d_\\textup{H}(C,D)>\\epsilon$ and, by swapping $C$ and $D$ if necessary, assume that there is $x_0\\in C\\setminus\\textup{cl}(D+\\epsilon B_X)$. Using the Hahn-Banach Theorem, find $f\\in X^*$ and $t\\in\\mathbb{R}$ such that $f(x_0)>t>f(x)$ for every $x\\in\\textup{cl}(D+\\epsilon B_X)$. 
Then $f(x)\\leq t$ for every $x\\in\\textup{w}^*\\textup{-cl}(D+\\epsilon B_X)$. Now observe that \n\\begin{align*} \n\\textup{cl}(\\textup{w}^*\\textup{-cl}(D)+\\epsilon B_{X^{**}}) &= \\textup{w}^*\\textup{-cl}(D)+\\epsilon B_{X^{**}}= \\\\\n&=\\textup{w}^*\\textup{-cl}(D)+\\textup{w}^*\\textup{-cl}(\\epsilon B_{X})\\subseteq\\textup{w}^*\\textup{-cl}(D+\\epsilon B_X). \n\\end{align*}\nThus we get $x_0\\notin\\textup{cl}(\\textup{w}^*\\textup{-cl}(D)+\\epsilon B_{X^{**}})$, from which $d_\\textup{H}(\\textup{w}^*\\textup{-cl}(C),\\textup{w}^*\\textup{-cl}(D))>\\epsilon$ follows. Since $C$, $D$ and $\\epsilon$ are arbitrary, the proof is complete.\n\\end{proof}\n\nFinally, we want to show the continuity of the bounded inner radius in the metric space of all nonempty, closed, convex and bounded subsets of a Banach space, endowed with the Hausdorff metric. Although it seems that this result cannot be found in the literature, it might be well-known and belong to the folklore. We prove it here for the sake of completeness.\n\\begin{lemma}\n\\label{lemma:ir}\nIn a Banach space $X$, the function $\\func{\\textup{bir}}{\\mathcal{C}(X)}{[0,+\\infty)}$ is Lipschitz continuous with Lipschitz constant $1$.\n\\end{lemma}\n\\begin{proof}\nThe proof is based on the following claim: if $C$ is a nonempty, closed, convex and bounded set in $X$ and $\\epsilon>0$, then\n\\begin{equation}\n\\label{eq:ir+eps}\n\\textup{bir}\\bigl(\\textup{cl}(C+\\epsilon B_X)\\bigr)=\\textup{bir}(C)+\\epsilon.\n\\end{equation}\n\nLet us see first that $\\textup{bir}(\\textup{cl}(C+\\epsilon B_X))\\leq\\textup{bir}(C)+\\epsilon$. Set $r=\\textup{bir}(C)$ and, looking for a contradiction, assume that $\\textup{bir}(\\textup{cl}(C+\\epsilon B_X))>r+\\epsilon$. Then there is $\\delta>0$ such that $r+\\epsilon+\\delta<\\textup{bir}(\\textup{cl}(C+\\epsilon B_X))$ and, without loss of generality, we may assume that $(r+\\epsilon+\\delta)B_X\\subseteq\\textup{cl}(C+\\epsilon B_X)$. Since $(r+\\delta)B_X\\setminus C\\ne\\varnothing$, the Hahn-Banach Theorem provides $x_0\\in(r+\\delta)B_X$, $t\\in\\mathbb{R}$ and $f\\in S_{X^*}$ such that $f(x_0)>t>f(x)$ for every $x\\in C$. Further, we have $t<\\norm{f}{X^*}\\norm{x_0}{X}\\leq r+\\delta$. Find $\\delta'$ such that $r+\\delta-\\delta'-t>0$ and $x_1\\in(r+\\epsilon+\\delta)B_X$ such that $f(x_1)>r+\\epsilon+\\delta-\\delta'$. Since $(r+\\epsilon+\\delta)B_X\\subseteq\\textup{cl}(C+\\epsilon B_X)$, there exist $x\\in C$ and $y\\in\\epsilon B_X$ such that\n\\[ \\norm{x_1-(x+y)}{X}r+\\epsilon+\\delta-\\delta'-t-\\epsilon=r+\\delta-\\delta'-t, \\]\na contradiction. \n\nTo see that $\\textup{bir}(C)+\\epsilon\\leq\\textup{bir}(\\textup{cl}(C+\\epsilon B_X))$, set again $r=\\textup{bir}(C)$, choose $\\delta\\in(0,r)$ and find $x_0\\in X$ such that $x_0+(r-\\delta)B_X\\subseteq C$. Then $x_0+(r+\\epsilon-\\delta)B_X\\subseteq\\textup{cl}(C+\\epsilon B_X)$. Since $\\delta$ is arbitrary, it follows that $\\textup{bir}(\\textup{cl}(C+\\epsilon B_X))\\geq r+\\epsilon$, as wished.\n\nTo prove the statement, pick $C,D\\in\\mathcal{C}(X)$ and $\\epsilon>0$. If $d_\\textup{H}(C,D)\\leq\\epsilon$, then $D\\subseteq\\textup{cl}(C+\\epsilon B_X)$, which implies by (\\ref{eq:ir+eps}) that $\\textup{bir}(D)\\leq\\textup{bir}(C)+\\epsilon$. The inequality $\\textup{bir}(C)\\leq\\textup{bir}(D)+\\epsilon$ follows similarly. Thus, $|\\textup{bir}(C)-\\textup{bir}(D)|\\leq\\epsilon$. 
Since $C,D$ and $\\epsilon$ are arbitrary, this lets us conclude that $\\textup{bir}$ is $1$-Lipschitz.\n\\end{proof}\n\n\\begin{theo}\nLet $X$ be a Banach space. The function $\\func{\\textup{blr}}{\\mathcal{C}(X)}{\\mathbb{R}}$ is $1$-Lipschitz. In particular, in case $X$ is separable, the family $\\mathcal{H}^+(X)$ of all Haar positive closed, convex and bounded subsets of $X$ is open. Equivalently, a convergent sequence ${(C_n)}_{n=1}^\\infty\\subset\\mathcal{C}(X)$ of Haar null sets has a Haar null limit.\n\\end{theo}\n\\begin{proof}\nWe have $\\textup{blr}=\\textup{bir}\\circ\\textup{w}^*\\textup{-cl}$ by Lemma \\ref{lemma:hpws}, Lemma \\ref{lemma:wscliso} and Lemma \\ref{lemma:ir}, hence $\\textup{blr}$ is a composition of $1$-Lipschitz maps and therefore it is $1$-Lipschitz. Since\n\\[ \\mathcal{H}^+(X)=\\textup{blr}^{-1}\\bigl((0,+\\infty)\\bigr), \\]\nit follows immediately that $\\mathcal{H}^+(X)$ is open.\n\\end{proof}\n\n\\section*{Acknowledgements}\nThe author would like to thank Professors Eva Kopeck\\'{a} and Christian Bargetz for the many helpful remarks, and Professor Mikhail Ostrovskii for pointing out a mistake in an earlier version of the preprint. The author's research is supported by the Austrian Science Fund (FWF): P 32523-N.\n\n\\printbibliography\n\n\n\n\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe area of high-dimensional statistics deals with estimation in the\n``large \\ensuremath{p}, small \\ensuremath{n}'' setting, where $\\ensuremath{p}$ and $\\ensuremath{n}$\ncorrespond, respectively, to the dimensionality of the data and the\nsample size. Such high-dimensional problems arise in a variety of\napplications, among them remote sensing, computational biology and\nnatural language processing, where the model dimension may be\ncomparable or substantially larger than the sample size. It is\nwell-known that such high-dimensional scaling can lead to dramatic\nbreakdowns in many classical procedures. In the absence of additional\nmodel assumptions, it is frequently impossible to obtain consistent\nprocedures when $\\ensuremath{p} \\gg \\ensuremath{n}$. Accordingly, an active line of\nstatistical research is based on imposing various restrictions on the\nmodel----for instance, sparsity, manifold structure, or graphical\nmodel structure----and then studying the scaling behavior of different\nestimators as a function of sample size $\\ensuremath{n}$, ambient dimension\n$\\ensuremath{p}$ and additional parameters related to these structural\nassumptions.\n\nIn this paper, we study the following problem: given $\\ensuremath{n}$ i.i.d.\nobservations $\\{X^{(\\obsind)}\\}_{\\ensuremath{k}=1}^{\\ensuremath{n}}$ of a zero mean\nrandom vector $X \\in \\ensuremath{{\\mathbb{R}}}^{\\ensuremath{p}}$, estimate both its covariance\nmatrix \\mbox{$\\ensuremath{\\ensuremath{\\Sigma}^*}$,} and its inverse covariance or\nconcentration matrix $\\ensuremath{\\Theta^*} \\ensuremath{: =} \\inv{\\ensuremath{\\ensuremath{\\Sigma}^*}}$. Perhaps\nthe most natural candidate for estimating $\\ensuremath{\\ensuremath{\\Sigma}^*}$ is the\nempirical sample covariance matrix, but this is known to behave poorly\nin high-dimensional settings. For instance, when $\\ensuremath{p}\/\\ensuremath{n}\n\\rightarrow c > 0$, and the samples are drawn i.i.d. 
from a\nmultivariate Gaussian distribution, neither the eigenvalues nor the\neigenvectors of the sample covariance matrix are consistent estimators\nof the population\nversions~\\cite{Johnstone2001,JohnstoneLu2004}. Accordingly, many\nregularized estimators have been proposed to estimate the covariance\nor concentration matrix under various model assumptions. One natural\nmodel assumption is that reflected in shrinkage estimators, such as in\nthe work of \\citet{LedoitWolf2003}, who proposed to shrink the sample\ncovariance to the identity matrix. An alternative model assumption,\nrelevant in particular for time series data, is that the covariance or\nconcentration matrix is banded, meaning that the entries decay based\non their distance from the diagonal. \\citet{Furrer2007} proposed to\nshrink the covariance entries based on this distance from the\ndiagonal. \\citet{Wu2003} and \\citet{Huang2006} estimate these banded\nconcentration matrices by using thresholding and $\\ell_1$-penalties\nrespectively, as applied to a Cholesky factor of the inverse\ncovariance matrix. \\citet{BickelLevina2008} prove the consistency of\nthese banded estimators so long as $\\frac{(\\log\\,\\ensuremath{p})^2}{\\ensuremath{n}}\n\\rightarrow 0$ and the model covariance matrix is banded as well, but\nas they note, these estimators depend on the presented order of the\nvariables.\n\nA related class of models are based on positing some kind of sparsity,\neither in the covariance matrix, or in the inverse covariance.\n\\citet{BickelLevina2007} study thresholding estimators of covariance\nmatrices, assuming that each row satisfies an $\\ell_q$-ball sparsity\nassumption. In independent work, \\citet{Karoui2007} also studied\nthresholding estimators of the covariance, but based on an alternative\nnotion of sparsity, one which captures the number of closed paths of\nany length in the associated graph. Other work has studied models in\nwhich the inverse covariance or concentration matrix has a sparse\nstructure. As will be clarified in the next section, when the random\nvector is multivariate Gaussian, the set of non-zero entries in the\nconcentration matrix correspond to the set of edges in an associated\nGaussian Markov random field (GMRF). In this setting, imposing\nsparsity on the concentration matrix can be interpreted as requiring\nthat the graph underlying the GMRF have relatively few edges. A line\nof recent papers~\\citep{AspreBanG2008,FriedHasTib2007,YuanLin2007}\nhave proposed an estimator that minimizes the Gaussian negative\nlog-likelihood regularized by the $\\ell_1$ norm of the entries (or the\noff-diagonal entries) of the concentration matrix. The resulting\noptimization problem is a log-determinant program, which can be solved\nin polynomial time with interior point methods~\\citep{Boyd02}, or by\nfaster co-ordinate descent\nalgorithms~\\citep{AspreBanG2008,FriedHasTib2007}. In recent work,\n\\citet{Rothman2007} have analyzed some aspects of high-dimensional\nbehavior of this estimator; assuming that the minimum and maximum\neigenvalues of $\\ensuremath{\\ensuremath{\\Sigma}^*}$ are bounded, they show that consistent\nestimates can be achieved in Frobenius and operator norm, in\nparticular at the rate ${\\mathcal{O}}(\\sqrt{\\frac{(\\ensuremath{s} + \\ensuremath{p}) \\log\n\\ensuremath{p}}{\\ensuremath{n}}})$.\n\n\nThe focus of this paper is the problem of estimating the concentration\nmatrix $\\Theta^*$ under sparsity conditions. 
We do not impose\nspecific distributional assumptions on $X$ itself, but rather analyze the estimator in terms of \nthe tail behavior of the maximum deviation $\\max_{i,j}\n|\\estim{\\Sigma}^n_{ij} - \\ensuremath{\\ensuremath{\\Sigma}^*}_{ij}|$ of the sample and\npopulation covariance matrices. To estimate $\\Theta^*$, we\nconsider minimization of an $\\ell_1$-penalized log-determinant Bregman\ndivergence, which is equivalent to the usual $\\ell_1$-penalized\nmaximum likelihood when $X$ is multivariate Gaussian. We analyze the\nbehavior of this estimator under high-dimensional scaling, in which\nthe number of nodes $\\ensuremath{p}$ in the graph, and the maximum node degree\n$\\ensuremath{\\ensuremath{d}}$ are all allowed to grow as a function of the sample size\n$\\ensuremath{n}$.\n\nIn addition to the triple $(\\ensuremath{n}, \\ensuremath{p}, \\ensuremath{\\ensuremath{d}})$, we also\nexplicitly keep track of certain other measures of model complexity,\nthat could potentially scale as well. The first of these measures is\nthe $\\ell_\\infty$-operator norm of the covariance matrix\n$\\ensuremath{\\ensuremath{\\Sigma}^*}$, which we denote by $\\ensuremath{K_{\\ensuremath{\\Sigma}^*}} \\ensuremath{: =}\n\\matnorm{\\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty}$. The next quantity involves the\nHessian of the log-determinant objective function, $\\ensuremath{\\Gamma^*} \\ensuremath{: =}\n(\\ensuremath{\\Theta^*})^{-1} \\otimes (\\ensuremath{\\Theta^*})^{-1}$. When the distribution\nof $X$ is multivariate Gaussian, this Hessian has the more explicit\nrepresentation $\\ensuremath{\\Gamma^*}_{(j,k), (\\ell, m)} = \\operatorname{cov}\\{X_j\nX_k, \\; X_\\ell X_m \\}$, showing that it measures the covariances of\nthe random variables associated with each edge of the graph. For this\nreason, the matrix $\\ensuremath{\\Gamma^*}$ can be viewed as an edge-based\ncounterpart to the usual node-based covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$.\nUsing $\\ensuremath{S}$ to index the variable pairs $(i,j)$ associated with\nnon-zero entries in the inverse covariance. our analysis involves the\nquantity $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}} = \\matnorm{(\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\n\\ensuremath{S}})^{-1}}{\\infty}$. Finally, we also impose a mutual\nincoherence or irrepresentability condition on the Hessian $\\ensuremath{\\Gamma^*}$;\nthis condition is similar to assumptions imposed on $\\ensuremath{\\ensuremath{\\Sigma}^*}$ in\nprevious work~\\cite{Tropp2006,Zhao06,MeinsBuhl2006,Wainwright2006_new} on\nthe Lasso. We provide some examples where the Lasso\nirrepresentability condition holds, but our corresponding condition on\n$\\ensuremath{\\Gamma^*}$ fails; however, we do not know currently whether one\ncondition strictly dominates the other. \n\n\nOur first result establishes consistency of our estimator\n$\\estim{\\Theta}$ in the elementwise maximum-norm, providing a rate\nthat depends on the tail behavior of the entries in the random matrix\n$\\ensuremath{\\estim{\\ensuremath{\\Sigma}}}^\\ensuremath{n} - \\ensuremath{\\ensuremath{\\Sigma}^*}$. For the special case of sub-Gaussian\nrandom vectors with concentration matrices having at most $d$\nnon-zeros per row, a corollary of our analysis is consistency in\nspectral norm at rate \\mbox{$\\matnorm{\\estim{\\Theta} - \\Theta^*}{2} =\n{\\mathcal{O}}(\\sqrt{(\\ensuremath{\\ensuremath{d}}^2 \\,\\log \\ensuremath{p})\/\\ensuremath{n}})$,} with high\nprobability, thereby strengthening previous\nresults~\\cite{Rothman2007}. 
Under the milder restriction of each\nelement of $X$ having bounded $4m$-th moment, the rate in\nspectral norm is substantially slower---namely,\n\\mbox{$\\matnorm{\\estim{\\Theta} - \\ensuremath{\\Theta^*}}{2} = {\\mathcal{O}}(\\ensuremath{\\ensuremath{d}}\\,\n\\ensuremath{p}^{1\/2m}\/\\sqrt{\\ensuremath{n}})$}---highlighting that the familiar\nlogarithmic dependence on the model size $\\ensuremath{p}$ is linked to\nparticular tail behavior of the distribution of $X$. Finally, we show\nthat under the same scalings as above, with probability converging to\none, the estimate $\\estim{\\Theta}$ correctly specifies the zero\npattern of the concentration matrix $\\Theta^*$.\n\n\nThe remainder of this paper is organized as follows. In\nSection~\\ref{SecBackground}, we set up the problem and give some\nbackground. Section~\\ref{SecResult} is devoted to statements of our\nmain results, as well as discussion of their consequences.\nSection~\\ref{SecProof} provides an outline of the proofs, with the\nmore technical details deferred to appendices. In\nSection~\\ref{SecExperiments}, we report the results of some simulation\nstudies that illustrate our theoretical predictions.\n\n\n\\myparagraph{Notation} For the convenience of the reader, we summarize\nhere notation to be used throughout the paper. Given a vector $\\ensuremath{u}\n\\in \\ensuremath{{\\mathbb{R}}}^\\ensuremath{d}$ and parameter $a \\in [1, \\infty]$, we use\n$\\|\\ensuremath{u}\\|_a$ to denote the usual $\\ell_a$ norm. Given a matrix\n$\\ensuremath{U} \\in \\ensuremath{{\\mathbb{R}}}^{p \\times p}$ and parameters $a,b \\in [1, \\infty]$,\nwe use $\\matnorm{\\ensuremath{U}}{a,b}$ to denote the induced matrix-operator\nnorm $\\max_{\\|y\\|_a = 1} \\|\\ensuremath{U} y\\|_b$; see \\citet{Horn1985} for\nbackground. Three cases of particular importance in this paper are\nthe \\emph{spectral norm} $\\matnorm{\\ensuremath{U}}{2}$, corresponding to the\nmaximal singular value of $\\ensuremath{U}$; the\n\\emph{$\\ell_\\infty\/\\ell_\\infty$-operator norm}, given by\n\\begin{eqnarray}\n\\label{EqnLinfOp}\n\\matnorm{\\ensuremath{U}}{\\infty} & \\ensuremath{: =} & \\max \\limits_{j=1, \\ldots, p}\n\\sum_{k=1}^p |\\ensuremath{U}_{jk}|,\n\\end{eqnarray}\nand the \\emph{$\\ell_1\/\\ell_1$-operator norm}, given by\n$\\matnorm{\\ensuremath{U}}{1} = \\matnorm{\\ensuremath{U}^T}{\\infty}$. Finally, we use\n$\\|\\ensuremath{U}\\|_\\infty$ to denote the element-wise maximum $\\max_{i,j}\n|\\ensuremath{U}_{ij}|$; note that this is not a matrix norm, but rather a norm\non the vectorized form of the matrix. For any matrix $\\ensuremath{U} \\in\n\\ensuremath{{\\mathbb{R}}}^{\\ensuremath{p} \\times \\ensuremath{p}}$, we use $\\ensuremath{\\operatorname{vec}}(\\ensuremath{U})$ or equivalently\n$\\widebar{\\ensuremath{U}} \\in \\ensuremath{{\\mathbb{R}}}^{\\ensuremath{p}^2}$ to denote its \\emph{vectorized\nform}, obtained by stacking up the rows of $\\ensuremath{U}$. We use\n$\\tracer{\\ensuremath{U}}{\\ensuremath{V}} \\ensuremath{: =} \\sum_{i,j} \\ensuremath{U}_{ij} \\ensuremath{V}_{ij}$ to\ndenote the \\emph{trace inner product} on the space of symmetric\nmatrices. Note that this inner product induces the \\emph{Frobenius\nnorm} $\\matnorm{\\ensuremath{U}}{F} \\ensuremath{: =} \\sqrt{\\sum_{i,j}\n\\ensuremath{U}_{ij}^2}$. 
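\nAs a purely illustrative aside, these matrix quantities are easy to evaluate numerically; the following sketch (assuming Python with \\texttt{numpy}; the variable names are ours and not part of the notation) computes each of them for a small example matrix.\n\\begin{verbatim}\nimport numpy as np\n\nU = np.array([[ 2.0, -1.0, 0.0],\n              [ 0.5,  3.0, 1.0],\n              [ 0.0, -2.0, 1.0]])\n\nspectral      = np.linalg.norm(U, 2)         # largest singular value of U\nlinf_operator = np.abs(U).sum(axis=1).max()  # max row sum: l_inf/l_inf operator norm\nl1_operator   = np.abs(U).sum(axis=0).max()  # max column sum: l_inf norm of U^T\nelementwise   = np.abs(U).max()              # elementwise maximum (not a matrix norm)\nfrobenius     = np.linalg.norm(U, 'fro')     # induced by the trace inner product\nvec_U         = U.reshape(-1)                # vectorized form (rows stacked)\n\nprint(spectral, linf_operator, l1_operator, elementwise, frobenius)\n\\end{verbatim}\n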
Finally, for asymptotics, we use the following\nstandard notation: we write $f(n) = {\\mathcal{O}}(g(n))$ if $f(n) \\leq c\ng(n)$ for some constant $c < \\infty$, and $f(n) = \\Omega(g(n))$ if\n$f(n) \\geq c' g(n)$ for some constant $c' > 0$. The notation\n\\mbox{$f(n) \\asymp g(n)$} means that \\mbox{$f(n) = {\\mathcal{O}}(g(n))$} and\n\\mbox{$f(n) = \\Omega(g(n))$.}\n\n\\section{Background and problem set-up}\n\\label{SecBackground}\n\nLet $X = (X_1, \\ldots, X_\\ensuremath{p})$ be a zero mean $\\ensuremath{p}$-dimensional\nrandom vector. The focus of this paper is the problem of estimating\nthe covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*} \\ensuremath{: =} \\ensuremath{{\\mathbb{E}}}[ X X^T]$ and\nconcentration matrix $\\ensuremath{\\Theta^*} \\ensuremath{: =} \\invn{\\ensuremath{\\ensuremath{\\Sigma}^*}}$ of the\nrandom vector $X$ given $\\ensuremath{n}$ i.i.d. observations\n$\\{X^{(\\ensuremath{k})}\\}_{\\ensuremath{k}=1}^{\\ensuremath{n}}$. In this section, we\nprovide background, and set up this problem more precisely. We begin\nwith background on Gaussian graphical models, which provide one\nmotivation for the estimation of concentration matrices. We then\ndescribe an estimator based based on minimizing an $\\ell_1$\nregularized log-determinant divergence; when the data are drawn from a\nGaussian graphical model, this estimator corresponds to\n$\\ell_1$-regularized maximum likelihood. We then discuss the\ndistributional assumptions that we make in this paper.\n\n\\subsection{Gaussian graphical models}\n\nOne motivation for this paper is the problem of Gaussian graphical\nmodel selection. A graphical model or a Markov random field is a\nfamily of probability distributions for which the conditional\nindependence and factorization properties are captured by a graph. Let\n$X = (X_1, X_2, \\ldots, X_\\ensuremath{p})$ denote a zero-mean Gaussian random\nvector; its density can be parameterized by the inverse covariance or\n\\emph{concentration matrix} $\\ensuremath{\\Theta^*} = (\\ensuremath{\\ensuremath{\\Sigma}^*})^{-1} \\in\n\\Symconepl{\\ensuremath{p}}$, and can be written as\n\\begin{eqnarray}\n\\label{EqnDefnGaussMRF}\nf(x_1, \\ldots, x_\\ensuremath{p}; \\ensuremath{\\Theta^*}) & = & \\frac{1}{\\sqrt{(2 \\pi)^\\ensuremath{p}\n \\det((\\ensuremath{\\Theta^*})^{-1})}} \\; \\exp \\big\\{ -\\frac{1}{2} x^T \\ensuremath{\\Theta^*}\n x \\big \\}.\n\\end{eqnarray}\n\\begin{figure}[ht]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\psfrag{#1#}{$1$} \\psfrag{#2#}{$2$} \\psfrag{#3#}{$3$}\n\\psfrag{#4#}{$4$} \\psfrag{#5#}{$5$}\n\\raisebox{.2in}{\\widgraph{0.3\\textwidth}{simple_gauss.eps}} & &\n\\widgraph{.25\\textwidth}{fig_simple_gauss_invcov.eps} \\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{(a) Simple undirected graph. A Gauss Markov random field has\na Gaussian variable $X_i$ associated with each vertex $i \\in \\ensuremath{V}$.\nThis graph has $\\ensuremath{p} = 5$ vertices, maximum degree $d = 3$ and $s=6$\nedges. (b) Zero pattern of the inverse covariance $\\ensuremath{\\Theta^*}$\nassociated with the GMRF in (a). The set $\\ensuremath{E}(\\ensuremath{\\Theta^*})$\ncorresponds to the off-diagonal non-zeros (white blocks); the diagonal\nis also non-zero (grey squares), but these entries do not correspond\nto edges. The black squares correspond to non-edges, or zeros in\n$\\ensuremath{\\Theta^*}$.}\n\\label{FigMarkov}\n\\end{figure}\n\nWe can relate this Gaussian distribution of the random vector $X$ to a\ngraphical model as follows. 
Suppose we are given an undirected graph\n$\\ensuremath{G} = (\\ensuremath{V}, \\ensuremath{E})$ with vertex set $\\ensuremath{V} = \\{1, 2, \\ldots,\n\\ensuremath{p} \\}$ and edge\\footnote{As a remark on notation, we would like to\ncontrast the notation for the edge-set $\\ensuremath{E}$ from the notation for\nan expectation of a random variable, $\\mathbb{E}(\\cdot)$.} set\n$\\ensuremath{E}$, so that each variable $X_i$ is associated with a\ncorresponding vertex $i \\in \\ensuremath{V}$. The Gaussian Markov random field\n(GMRF) associated with the graph $\\ensuremath{G}$ over the random vector $X$\nis then the family of Gaussian distributions with concentration\nmatrices $\\ensuremath{\\Theta^*}$ that respect the edge structure of the graph, in\nthe sense that $\\ensuremath{\\Theta^*}_{ij} = 0$ if $(i,j) \\notin\n\\ensuremath{E}$. Figure~\\ref{FigMarkov} illustrates this correspondence between\nthe graph structure (panel (a)), and the sparsity pattern of the\nconcentration matrix $\\ensuremath{\\Theta^*}$ (panel (b)). The problem of\nestimating the entries of the concentration matrix $\\ensuremath{\\Theta^*}$\ncorresponds to estimating the Gaussian graphical model instance, while\nthe problem of estimating the off-diagonal zero-pattern of the\nconcentration matrix----that is, the set\n\\begin{eqnarray}\n\\label{EqnDefnEdgeSet}\n\\ensuremath{E}(\\ensuremath{\\Theta^*}) & \\ensuremath{: =} & \\{i, j \\in \\ensuremath{V} \\mid \\, i \\neq j,\n\\ensuremath{\\Theta^*}_{ij} \\neq 0 \\}\n\\end{eqnarray}\ncorresponds to the problem of Gaussian graphical \\emph{model\nselection}. \n\nWith a slight abuse of notation, we define the \\emph{sparsity index}\n$\\ensuremath{s} \\ensuremath{: =} |\\ensuremath{E}(\\ensuremath{\\Theta^*})|$ as the total number of non-zero\nelements in off-diagonal positions of $\\ensuremath{\\Theta^*}$; equivalently, this\ncorresponds to twice the number of edges in the case of a Gaussian\ngraphical model. We also define the \\emph{maximum degree or row\ncardinality}\n\\begin{eqnarray}\n\\label{EqnDefnDegmax}\n\\ensuremath{\\ensuremath{d}} & \\ensuremath{: =} & \\max_{i = 1, \\ldots, \\ensuremath{p} } \\biggr|\\big \\{ j \\in\n\\ensuremath{V} \\, \\mid \\, \\ensuremath{\\Theta^*}_{ij} \\neq 0 \\big\\} \\biggr|,\n\\end{eqnarray}\ncorresponding to the maximum number of non-zeros in any row of\n$\\ensuremath{\\Theta^*}$; this corresponds to the maximum degree in the graph of\nthe underlying Gaussian graphical model. Note that we have included\nthe diagonal entry $\\ensuremath{\\Theta^*}_{ii}$ in the degree count,\ncorresponding to a self-loop at each vertex.\n\nIt is convenient throughout the paper to use graphical terminology,\nsuch as degrees and edges, even though the distributional assumptions\nthat we impose, as described in Section~\\ref{SecDistAssum}, are milder\nand hence apply even to distributions that are not Gaussian MRFs.\n\n\n\n\\subsection{$\\ell_1$-penalized log-determinant divergence}\n\nAn important set in this paper is the cone\n\\begin{eqnarray}\n\\Symconepl{\\ensuremath{p}} & \\ensuremath{: =} & \\big \\{ A \\in \\ensuremath{{\\mathbb{R}}}^{\\ensuremath{p} \\times \\ensuremath{p}}\n\\mid A = A^T, \\; A \\succeq 0 \\big \\},\n\\end{eqnarray}\nformed by all symmetric positive semi-definite matrices in $\\ensuremath{p}$\ndimensions. 
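\nNumerically, membership in this cone and the strict positive definiteness assumed in what follows are usually checked through the smallest eigenvalue; a minimal sketch (purely illustrative, assuming \\texttt{numpy}):\n\\begin{verbatim}\nimport numpy as np\n\ndef in_psd_cone(A, tol=1e-10):\n    # symmetric with all eigenvalues nonnegative (up to a numerical tolerance)\n    return np.allclose(A, A.T) and np.linalg.eigvalsh(A).min() >= -tol\n\ndef strictly_positive_definite(A):\n    # interior of the cone: symmetric with all eigenvalues strictly positive\n    return np.allclose(A, A.T) and np.linalg.eigvalsh(A).min() > 0\n\nSigma = np.array([[1.0, 0.4], [0.4, 1.0]])\nprint(in_psd_cone(Sigma), strictly_positive_definite(Sigma))  # True True\n\\end{verbatim}\n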
We assume that the covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$ and\nconcentration matrix $\\ensuremath{\\Theta^*}$ of the random vector $X$ are\nstrictly positive definite, and so lie in the interior of this cone\n$\\Symconepl{\\ensuremath{p}}$. \n\nThe focus of this paper is a particular type of $M$-estimator for the\nconcentration matrix $\\ensuremath{\\Theta^*}$, based on minimizing a Bregman\ndivergence between symmetric matrices. A function is of Bregman type\nif it is strictly convex, continuously differentiable and has bounded\nlevel sets~\\cite{Bregman67a,Censor}. Any such function induces a\n\\emph{Bregman divergence} of the form $\\Breg{A}{B} = g(A) - g(B) -\n\\trs{\\nabla g(B)}{A-B}$. From the strict convexity of $g$, it follows\nthat $\\Breg{A}{B} \\geq 0$ for all $A$ and $B$, with equality if and\nonly if $A = B$.\n\n\nAs a candidate Bregman function, consider the log-determinant barrier\nfunction, defined for any matrix $A \\in \\Symconepl{\\ensuremath{p}}$ by\n\\begin{eqnarray}\n\\label{EqnDefnLogDet}\ng(A) & \\ensuremath{: =} & \\begin{cases} - \\log \\det(A) & \\mbox{if $A \\succ 0$} \\\\\n + \\infty & \\mbox{otherwise.}\n \\end{cases}\n\\end{eqnarray}\nAs is standard in convex analysis, we view this function as taking\nvalues in the extended reals $\\ensuremath{{\\mathbb{R}}}_* = \\ensuremath{{\\mathbb{R}}} \\cup \\{+\\infty \\}$.\nWith this definition, the function $g$ is strictly convex, and its\ndomain is the set of strictly positive definite matrices. Moreover,\nit is continuously differentiable over its domain, with $\\nabla g(A) =\n- A^{-1}$; see Boyd and Vandenberghe~\\cite{Boyd02} for further\ndiscussion. The Bregman divergence corresponding to this\nlog-determinant Bregman function $g$ is given by\n\\begin{eqnarray}\n\\label{EqnDefnBreg}\n\\Breg{A}{B} & \\ensuremath{: =} & - \\log \\det A + \\log \\det B +\n\\tracer{B^{-1}}{A-B},\n\\end{eqnarray}\nvalid for any $A, B \\in \\Symconepl{\\ensuremath{p}}$ that are strictly positive\ndefinite. This divergence suggests a natural way to estimate\nconcentration matrices---namely, by minimizing the divergence\n$\\Breg{\\ensuremath{\\Theta^*}}{\\Theta}$---or equivalently, by minimizing the\nfunction\n\\begin{equation}\n\\label{EqnPop}\n\\min_{\\Theta \\succ 0 } \\big \\{ \\tracer{\\Theta}{\\ensuremath{\\ensuremath{\\Sigma}^*}} - \\log\n\\det \\Theta \\big \\},\n\\end{equation}\nwhere we have discarded terms independent of $\\Theta$, and used the\nfact that the inverse of the concentration matrix is the covariance matrix\n(i.e., $(\\ensuremath{\\Theta^*})^{-1} = \\ensuremath{\\ensuremath{\\Sigma}^*}= \\ensuremath{{\\mathbb{E}}}[X X^T]$). 
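\nThat the population program is minimized at $\\Theta=(\\ensuremath{\\ensuremath{\\Sigma}^*})^{-1}$ can also be seen from the first-order condition: the gradient of the objective is $\\ensuremath{\\ensuremath{\\Sigma}^*}-\\Theta^{-1}$, which vanishes precisely at $\\Theta=\\ensuremath{\\Theta^*}$. The following small numerical check (purely illustrative, assuming \\texttt{numpy}) confirms this on a randomly generated covariance.\n\\begin{verbatim}\nimport numpy as np\n\ndef population_objective(Theta, Sigma_star):\n    # trace(Theta Sigma*) - log det(Theta); +infinity off the positive definite cone\n    sign, logdet = np.linalg.slogdet(Theta)\n    return np.trace(Theta @ Sigma_star) - logdet if sign > 0 else np.inf\n\nrng = np.random.default_rng(0)\nA = rng.standard_normal((4, 4))\nSigma_star = A @ A.T + 4.0 * np.eye(4)      # a strictly positive definite covariance\nTheta_star = np.linalg.inv(Sigma_star)\n\n# the gradient Sigma* - inv(Theta) vanishes at Theta = inv(Sigma*)\nprint(np.max(np.abs(Sigma_star - np.linalg.inv(Theta_star))))   # numerically ~ 0\n\n# moving away from Theta* increases the objective (strict convexity)\nB = rng.standard_normal((4, 4))\nE = 0.01 * (B @ B.T)                        # a nonzero symmetric perturbation\nprint(population_objective(Theta_star + E, Sigma_star)\n      > population_objective(Theta_star, Sigma_star))           # True\n\\end{verbatim}\n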
Of course,\nthe convex program~\\eqref{EqnPop} cannot be solved without knowledge\nof the true covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$, but one can take the\nstandard approach of replacing $\\ensuremath{\\ensuremath{\\Sigma}^*}$ with an empirical\nversion, with the possible addition of a regularization term.\n\n\nIn this paper, we analyze a particular instantiation of this strategy.\nGiven $\\ensuremath{n}$ samples, we define the \\emph{sample covariance matrix}\n\\begin{eqnarray}\n\\label{EqnDefnSamCov}\n\\ensuremath{\\estim{\\ensuremath{\\Sigma}}}^\\ensuremath{n} & \\ensuremath{: =} & \\frac{1}{\\ensuremath{n}} \\sum_{\\ensuremath{k}=1}^\\ensuremath{n}\n\\sp{X}{\\ensuremath{k}} (\\sp{X}{\\ensuremath{k}})^T.\n\\end{eqnarray}\nTo lighten notation, we occasionally drop the superscript $\\ensuremath{n}$,\nand simply write $\\ensuremath{\\widehat{\\Sigma}}$ for the sample covariance. We also define\nthe \\emph{off-diagonal $\\ell_1$ regularizer}\n\\begin{eqnarray}\n\\ellreg{\\Theta} & \\ensuremath{: =} & \\sum_{i \\neq j} |\\Theta_{ij}|,\n\\end{eqnarray}\nwhere the sum ranges over all $i, j = 1, \\ldots, \\ensuremath{p}$ with $i \\neq\nj$. Given some regularization constant $\\ensuremath{\\lambda_\\ensuremath{n}} > 0$, we consider\nestimating $\\ensuremath{\\Theta^*}$ by solving the following\n\\emph{$\\ell_1$-regularized log-determinant program}:\n\\begin{eqnarray}\n\\label{EqnGaussMLE}\n\\ensuremath{\\widehat{\\Theta}} & \\ensuremath{: =} & \\arg\\min_{\\Theta \\succ 0} \\big \\{\n\\tracer{\\Theta}{\\ensuremath{\\widehat{\\Sigma}}^\\ensuremath{n}} - \\log \\det(\\Theta) + \\ensuremath{\\lambda_\\ensuremath{n}}\n\\ellreg{\\Theta} \\big \\}.\n\\end{eqnarray}\nAs shown in Appendix~\\ref{AppLemMLECharac}, for any $\\ensuremath{\\lambda_\\ensuremath{n}} > 0$ and\nsample covariance matrix $\\ensuremath{\\widehat{\\Sigma}}^\\ensuremath{n}$ with strictly positive\ndiagonal, this convex optimization problem has a unique optimum, so\nthere is no ambiguity in equation~\\eqref{EqnGaussMLE}. When the data\nis actually drawn from a multivariate Gaussian distribution, then the\nproblem~\\eqref{EqnGaussMLE} is simply $\\ell_1$-regularized maximum\nlikelihood.\n\n\n\\defc{c}\n\\defc{c}\n\n\\subsection{Tail conditions}\n\\label{SecDistAssum}\n\nIn this section, we describe the tail conditions that underlie our\nanalysis. Since the estimator ~\\eqref{EqnGaussMLE} is based on using\nthe sample covariance $\\ensuremath{\\estim{\\ensuremath{\\Sigma}}}^\\ensuremath{n}$ as a surrogate for the\n(unknown) covariance $\\ensuremath{\\ensuremath{\\Sigma}^*}$, any type of consistency requires\nbounds on the difference $\\ensuremath{\\estim{\\ensuremath{\\Sigma}}}^\\ensuremath{n} - \\ensuremath{\\ensuremath{\\Sigma}^*}$. 
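\nTo get a feel for this quantity, the elementwise deviation can be simulated directly; the sketch below (purely illustrative, assuming \\texttt{numpy}) draws zero-mean Gaussian samples and tracks $\\|\\ensuremath{\\estim{\\ensuremath{\\Sigma}}}^\\ensuremath{n} - \\ensuremath{\\ensuremath{\\Sigma}^*}\\|_\\infty$ as the sample size grows, alongside the $\\sqrt{(\\log \\ensuremath{p})/\\ensuremath{n}}$ scale that appears in the sequel.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\np = 30\nA = rng.standard_normal((p, p))\nSigma_star = A @ A.T / p + np.eye(p)      # a fixed population covariance\n\nfor n in [50, 200, 800, 3200]:\n    X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)\n    Sigma_hat = X.T @ X / n               # sample covariance (zero-mean model)\n    max_dev = np.abs(Sigma_hat - Sigma_star).max()\n    print(n, round(max_dev, 3), round(np.sqrt(np.log(p) / n), 3))\n\\end{verbatim}\n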
In\nparticular, we define the following tail condition:\n\\begin{defns}[Tail conditions]\n\\label{DefnTail}\nThe random vector $X$ satisfies tail condition $\\ensuremath{\\mathcal{T}}(f,\nv_*)$ if there exists a constant $v_* \\in (0, \\infty]$ and\na function $f: \\mathbb{N} \\times (0,\\infty) \\rightarrow (0,\n\\infty)$ such that for any $(i,j) \\in \\ensuremath{V} \\times \\ensuremath{V}$:\n\\begin{eqnarray}\n\\label{EqnSamTail}\n\\ensuremath{\\mathbb{P}}[|\\ensuremath{\\widehat{\\ensuremath{\\Sigma}}}^\\ensuremath{n}_{ij} - \\ensuremath{\\ensuremath{\\Sigma}^*}_{ij}| \\geq \\delta] & \\leq\n& 1\/f(\\ensuremath{n},\\delta) \\qquad \\mbox{for all $\\delta \\in (0,\n1\/v_*]$.}\n\\end{eqnarray}\nWe adopt the convention $1\/0 \\ensuremath{: =} + \\infty$, so that the value\n$v_* = 0$ indicates the inequality holds for any $\\delta \\in\n(0,\\infty)$.\n\\end{defns}\n\n\n\\newcommand{\\ensuremath{a}}{\\ensuremath{a}}\n\nTwo important examples of the tail function $f$ are the\nfollowing:\n\\begin{enumerate} \n\\item[(a)] an \\emph{exponential-type tail function}, meaning that\n$f(\\ensuremath{n},\\delta) = \\exp(c \\, \\ensuremath{n}\\,\n\\delta^{\\ensuremath{a}})$, for some scalar $c > 0$, and exponent\n$\\ensuremath{a} > 0$; and\n\\item[(b)] a \\emph{polynomial-type tail function}, meaning that\n$f(\\ensuremath{n},\\delta) = c \\, \\ensuremath{n}^{m} \\,\n\\delta^{2m}$, for some positive integer $m \\in\n\\mathbb{N}$ and scalar $c > 0$.\n\\end{enumerate}\nAs might be expected, if $X$ is multivariate Gaussian, then the\ndeviations of sample covariance matrix have an exponential-type tail\nfunction with $\\ensuremath{a} = 2$. A bit more generally, in the following\nsubsections, we provide broader classes of distributions whose sample\ncovariance entries satisfy exponential and a polynomial tail bounds\n(see Lemmata~\\ref{LEM_SAM_COV_BOUND_SUBG}\nand~\\ref{LEM_SAM_COV_BOUND_MOMENT} respectively).\n\nGiven a larger number of samples $\\ensuremath{n}$, we expect the tail\nprobability bound $1\/f(\\ensuremath{n},\\delta)$ to be smaller, or\nequivalently, for the tail function $f(\\ensuremath{n},\\delta)$ to\nlarger. Accordingly, we require that $f$ is monotonically\nincreasing in $\\ensuremath{n}$, so that for each fixed $\\delta >0$, we can\ndefine the inverse function\n\\begin{eqnarray}\n\\label{EqnSamTailN}\n\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}(r; \\delta) & \\ensuremath{: =} & \\arg \\max \\big \\{ n \\; \\mid \\;\nf(\\ensuremath{n}, \\delta) \\leq r \\big \\}.\n\\end{eqnarray}\nSimilarly, we expect that $f$ is monotonically increasing in\n$\\delta$, so that for each fixed $\\ensuremath{n}$, we can define the\ninverse in the second argument\n\\begin{eqnarray}\n\\label{EqnSamTailT}\n\\ensuremath{\\widebar{\\delta}_f}(r; \\ensuremath{n}) & \\ensuremath{: =} & \\arg \\max \\big \\{ \\delta \\; \\mid\n\\; f(\\ensuremath{n}, \\delta) \\leq r \\big \\}.\n\\end{eqnarray}\nFor future reference, we note a simple consequence of the monotonicity\nof the tail function $f$---namely\n\\begin{eqnarray}\n\\label{EqnMonot}\n\\ensuremath{n} > \\ensuremath{{\\widebar{\\ensuremath{n}}_f}}( \\delta, r) \\quad \\mbox{for some $\\delta > 0$} &\n\\Longrightarrow & \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},r) \\leq \\delta.\n\\end{eqnarray}\nThe inverse functions $\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}$ and $\\ensuremath{\\widebar{\\delta}_f}$ play an\nimportant role in describing the behavior of our estimator. 
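\nBecause $f$ is monotone in each argument, both inverses can be computed by a simple bisection when no closed form is available; the sketch below (purely illustrative, assuming Python) inverts the exponential-type tail function from example (a) with $\\ensuremath{a}=2$ in its $\\delta$ argument and compares against the corresponding closed form.\n\\begin{verbatim}\nimport numpy as np\n\ndef delta_bar(f, r, n, hi=1.0, iters=80):\n    # largest delta in (0, hi] with f(n, delta) <= r, for f increasing in delta\n    lo = 0.0\n    for _ in range(iters):\n        mid = 0.5 * (lo + hi)\n        if f(n, mid) <= r:\n            lo = mid\n        else:\n            hi = mid\n    return lo\n\nc = 0.5\nf = lambda n, delta: np.exp(c * n * delta ** 2)   # exponential-type tail, a = 2\n\nn, r = 1000, 1.0e4\nprint(delta_bar(f, r, n))                # numerical inverse\nprint(np.sqrt(np.log(r) / (c * n)))      # closed form, agrees to bisection accuracy\n\\end{verbatim}\n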
We\nprovide concrete examples in the following two subsections.\n\n\n\\subsubsection{Sub-Gaussian distributions}\n\\label{SecSamCovBoundSubG}\nIn this subsection, we study the case of i.i.d. observations of\nsub-Gaussian random variables.\n\\begin{defns}\nA zero-mean random variable $Z$ is \\emph{sub-Gaussian} if there exists\na constant $\\ensuremath{\\sigma}\\in (0, \\infty)$ such that\n\\begin{eqnarray}\n\\label{EqnDefnSubgauss}\n\\ensuremath{\\mathbb{E}}[\\exp(t Z)] & \\leq & \\exp(\\ensuremath{\\sigma}^2 \\, t^2\/2) \\qquad \\mbox{for all\n$t \\in \\ensuremath{{\\mathbb{R}}}$.}\n\\end{eqnarray}\n\\end{defns}\nBy the Chernoff bound, this upper bound~\\eqref{EqnDefnSubgauss} on the\nmoment-generating function implies a two-sided tail bound of the form\n\\begin{eqnarray}\n\\label{EqnSubgaussChern}\n\\ensuremath{\\mathbb{P}}[|Z| > z] & \\leq & 2 \\exp \\big(- \\frac{z^2}{2 \\ensuremath{\\sigma}^2}\\big).\n\\end{eqnarray}\nNaturally, any zero-mean Gaussian variable with variance $\\sigma^2$\nsatisfies the bounds~\\eqref{EqnDefnSubgauss}\nand~\\eqref{EqnSubgaussChern}. In addition to the Gaussian case, the\nclass of sub-Gaussian variates includes any bounded random variable\n(e.g., Bernoulli, multinomial, uniform), any random variable with\nstrictly log-concave density~\\cite{BulKoz,Ledoux01}, and any finite\nmixture of sub-Gaussian variables.\n\nThe following lemma, proved in\nAppendix~\\ref{APP_LEM_SAM_COV_BOUND_SUBG}, shows that the entries of\nthe sample covariance based on i.i.d. samples of sub-Gaussian random\nvector satisfy an exponential-type tail bound with exponent $\\ensuremath{a}\n= 2$. The argument is along the lines of a result due to Bickel and\nLevina~\\cite{BickelLevina2007}, but with more explicit control of the\nconstants in the error exponent:\n\\begin{lems}\n\\label{LEM_SAM_COV_BOUND_SUBG}\nConsider a zero-mean random vector $(X_1, \\ldots, X_\\ensuremath{p})$ with\ncovariance $\\ensuremath{\\ensuremath{\\Sigma}^*}$ such that each $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$\nis sub-Gaussian with parameter $\\ensuremath{\\sigma}$. Given $\\ensuremath{n}$\ni.i.d. 
samples, the associated sample covariance $\\ensuremath{\\widehat{\\ensuremath{\\Sigma}}}^\\ensuremath{n}$\nsatisfies the tail bound\n\\begin{eqnarray*}\n\\ensuremath{\\mathbb{P}} \\big[ |\\ensuremath{\\widehat{\\ensuremath{\\Sigma}}}^\\ensuremath{n}_{ij }- \\ensuremath{\\ensuremath{\\Sigma}^*}_{ij}| > \\delta\n \\big] & \\leq & 4 \\exp \\big \\{- \\frac{\\ensuremath{n} \\delta^2}{ 128(1 + 4\\csubg^{2})^{2}\\max_{i} (\\CovMatStar_{ii})^2 }\n \\big \\},\n\\end{eqnarray*}\nfor all $\\delta \\in \\big(0, \\max_{i}(\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii})\\,8(1 + 4\n\\ensuremath{\\sigma}^2)\\big)$.\n\n\\end{lems}\nThus, the sample covariance entries the tail condition $\\ensuremath{\\mathcal{T}}(f,\nv_*)$ with $v_* = \\big[\\max_{i}(\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii})\\,8(1 + 4\n\\ensuremath{\\sigma}^2)\\big]^{-1}$, and an exponential-type tail function with\n$\\ensuremath{a} = 2$---namely\n\\begin{eqnarray}\n\\label{EqnSubgaussF}\n\\qquad f(\\ensuremath{n}, \\delta) = \\frac{1}{4} \\exp( c_{*}\n\\ensuremath{n} \\delta^2), & \\mbox{with} & c_{*} = \\big[ 128(1 + 4\\csubg^{2})^{2}\\max_{i} (\\CovMatStar_{ii})^2 \\big]^{-1}\n\\end{eqnarray}\nA little calculation shows that the associated inverse functions take\nthe form\n\\begin{equation}\n\\label{EqnExpInverse}\n\\ensuremath{\\widebar{\\delta}_f}(r; \\ensuremath{n} ) \\, = \\, \\sqrt{\\frac{\\log(4 \\,r)}{c_{*} \\,\n\\ensuremath{n}}}, \\quad \\mbox{and} \\quad \\ensuremath{{\\widebar{\\ensuremath{n}}_f}}(r; \\delta) \\, = \\,\n\\frac{\\log(4 \\, r)}{c_{*} \\delta^2}.\n\\end{equation}\n\n\n\n\\subsubsection{Tail bounds with moment bounds}\n\\label{SecSamCovBoundMoment}\n\nIn the following lemma, proved in\nAppendix~\\ref{APP_LEM_SAM_COV_BOUND_MOMENT}, we show that given\ni.i.d. observations from random variables with bounded moments, the\nsample covariance entries satisfy a polynomial-type tail bound. See\nthe papers~\\cite{Zhao06,Karoui2007} for related results on tail bounds\nfor variables with bounded moments.\n\\begin{lems}\n\\label{LEM_SAM_COV_BOUND_MOMENT}\nSuppose there exists a positive integer $m$ and scalar $K_{\\momentpow} \\in\n\\ensuremath{{\\mathbb{R}}}$ such that for $i = 1,\\hdots,\\ensuremath{p}$,\n\\begin{eqnarray}\n\\label{EqnBoundedMoments}\n\\ensuremath{{\\mathbb{E}}} \\biggr[ \\big(\\frac{X_i}{\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}} \\big)^{4 m}\n\\biggr] & \\leq & K_{\\momentpow}.\n\\end{eqnarray}\nFor i.i.d. 
samples $\\{\\Xsam{\\ensuremath{k}}_i \\}_{\\ensuremath{k}=1}^\\ensuremath{n}$, the\nsample covariance matrix $\\ensuremath{\\widehat{\\ensuremath{\\Sigma}}}^\\ensuremath{n}$ satisfies the bound\n\\begin{eqnarray}\n\\ensuremath{\\mathbb{P}} \\big [ \\Big| \\ensuremath{\\widehat{\\ensuremath{\\Sigma}}}^\\ensuremath{n}_{ij} - \\ensuremath{\\ensuremath{\\Sigma}^*}_{ij} \\Big)\\Big|\n > \\delta \\big] & \\leq & \\frac{\\big\\{m^{2m+1}\n 2^{2m} (\\max_i \\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}) ^{2m}\\, (K_{\\momentpow}\n + 1 ) \\big\\}}{\\ensuremath{n}^{m}\\, \\delta^{2m}}.\n\\end{eqnarray}\n\\end{lems}\nThus, in this case, the sample covariance satisfies the tail condition\n$\\ensuremath{\\mathcal{T}}(f, v_*)$ with $v_* = 0$, so that the bound\nholds for all $\\delta \\in (0,\\infty)$, and with the polynomial-type\ntail function\n\\begin{equation}\n\\label{EqnPolyF}\nf(\\ensuremath{n},\\delta) = c_{*} \\ensuremath{n}^{m}\n\\delta^{2m} \\quad \\mbox{where $c_{*} =\n1\/\\big\\{m^{2m+1} 2^{2m} (\\max_i\n\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}) ^{2m}\\, (K_{\\momentpow} + 1 ) \\big\\}$.}\n\\end{equation}\nFinally, a little calculation shows that in this case, the inverse\ntail functions take the form\n\\begin{equation}\n\\label{EqnPolyInverse}\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},r) \\, = \\,\n\\frac{(r\/c_{*})^{1\/2m}}{\\sqrt{\\ensuremath{n}}}, \\quad\n\\mbox{and} \\quad \\ensuremath{{\\widebar{\\ensuremath{n}}_f}}(\\delta,r) \\, = \\,\n\\frac{(r\/c_{*})^{1\/m}}{\\delta^{2}}.\n\\end{equation}\n\n\n\n\\section{Main results and some consequences}\n\\label{SecResult}\n\nIn this section, we state our main results, and discuss some of their\nconsequences. We begin in Section~\\ref{SecAssumptions} by stating\nsome conditions on the true concentration matrix $\\ensuremath{\\Theta^*}$ required in\nour analysis, including a particular type of incoherence or\nirrepresentability condition. In Section~\\ref{SecEllinf}, we state\nour first main result---namely, Theorem~\\ref{ThmMain} on consistency\nof the estimator $\\ensuremath{\\widehat{\\Theta}}$, and the rate of decay of its error in\nelementwise $\\ell_\\infty$ norm. Section~\\ref{SecModelCons} is devoted\nto Theorem~\\ref{ThmModel} on the model selection consistency of the\nestimator. Section~\\ref{SecInco} is devoted the relation between the\nlog-determinant estimator and the ordinary Lasso (neighborhood-based\napproach) as methods for graphical model selection; in addition, we\nillustrate our irrepresentability assumption for some simple graphs.\nFinally, in Section~\\ref{SecFrob}, we state and prove some corollaries\nof Theorem~\\ref{ThmMain}, regarding rates in Frobenius and operator\nnorms.\n\n\\subsection{Conditions on covariance and Hessian}\n\\label{SecAssumptions}\n\nOur results involve some quantities involving the Hessian of the\nlog-determinant barrier~\\eqref{EqnDefnLogDet}, evaluated at the true\nconcentration matrix $\\ensuremath{\\Theta^*}$. Using standard results on matrix\nderivatives~\\citep{Boyd02}, it can be shown that this Hessian takes\nthe form\n\\begin{eqnarray}\n\\label{EqnDefnHess}\n\\ensuremath{\\Gamma^*} & \\ensuremath{: =} & \\nabla^2_{\\Theta} g(\\Theta) \\Big |_{\\Theta =\n \\ensuremath{\\Theta^*}} \\; = \\; \\invn{\\ensuremath{\\Theta^*}} \\otimes \\invn{\\ensuremath{\\Theta^*}},\n\\end{eqnarray}\nwhere $\\otimes$ denotes the Kronecker matrix product. 
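\nIn code this Hessian is simply a Kronecker product of the covariance with itself; a brief sketch (purely illustrative, assuming \\texttt{numpy}) of the construction and of the vertex-pair indexing used below:\n\\begin{verbatim}\nimport numpy as np\n\np = 4\nrng = np.random.default_rng(2)\nA = rng.standard_normal((p, p))\nSigma_star = A @ A.T + p * np.eye(p)\nTheta_star = np.linalg.inv(Sigma_star)\n\nGamma_star = np.kron(np.linalg.inv(Theta_star), np.linalg.inv(Theta_star))\nprint(Gamma_star.shape)                  # (p**2, p**2)\n\n# the row (or column) of Gamma* indexed by the vertex pair (j, k) is j * p + k\nj, k, l, m = 0, 1, 2, 3\nprint(np.isclose(Gamma_star[j * p + k, l * p + m],\n                 Sigma_star[j, l] * Sigma_star[k, m]))   # True\n\\end{verbatim}\n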
By definition,\n$\\ensuremath{\\Gamma^*}$ is a $\\ensuremath{p}^{2} \\times \\ensuremath{p}^{2}$ matrix indexed by vertex\npairs, so that entry $\\ensuremath{\\Gamma^*}_{(j,k), (\\ell, m)}$ corresponds to the\nsecond partial derivative $ \\frac{\\partial^2 g}{\\partial \\Theta_{jk}\n\\partial \\Theta_{\\ell m}}$, evaluated at $\\Theta = \\ensuremath{\\Theta^*}$. When\n$X$ has multivariate Gaussian distribution, then $\\ensuremath{\\Gamma^*}$ is the\nFisher information of the model, and by standard results on cumulant\nfunctions in exponential families~\\cite{Brown86}, we have the more\nspecific expression $\\ensuremath{\\Gamma^*}_{(j,k), (\\ell, m)} =\n\\operatorname{cov}\\{X_j X_k, \\; X_\\ell X_m \\}$. For this reason,\n$\\ensuremath{\\Gamma^*}$ can be viewed as an edge-based counterpart to the usual\ncovariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$.\n\nWe define the set of non-zero off-diagonal entries in the model\nconcentration matrix $\\ensuremath{\\Theta^*}$:\n\\begin{eqnarray}\n\\ensuremath{E}(\\ensuremath{\\Theta^*}) & \\ensuremath{: =} & \\{ (i,j) \\in \\ensuremath{V} \\times \\ensuremath{V} \\,\n\t\\mid \\, i \\neq j, \\ensuremath{\\Theta^*}_{ij} \\neq 0 \\},\n\\end{eqnarray}\nand let $\\ensuremath{S}(\\ensuremath{\\Theta^*}) = \\{ \\ensuremath{E}(\\ensuremath{\\Theta^*}) \\cup \\{(1,1),\n\\ldots, (\\ensuremath{p}, \\ensuremath{p}) \\}$ be the augmented set including the\ndiagonal. We let $\\ensuremath{\\EsetPlus^c}(\\ensuremath{\\Theta^*})$ denote the complement of\n$\\ensuremath{S}(\\ensuremath{\\Theta^*})$ in the set $\\{1, \\ldots, \\ensuremath{p} \\} \\times \\{1,\n\\ldots, \\ensuremath{p}\\}$, corresponding to all pairs $(\\ell, m)$ for which\n$\\ensuremath{\\Theta^*}_{\\ell m} = 0$. When it is clear from context, we shorten\nour notation for these sets to $\\ensuremath{S}$ and $\\ensuremath{\\EsetPlus^c}$,\nrespectively. Finally, for any two subsets $T$ and $T'$ of $\\ensuremath{V}\n\\times \\ensuremath{V}$, we use $\\ensuremath{\\Gamma^*}_{T T'}$ to denote the $|T| \\times\n|T'|$ matrix with rows and columns of $\\ensuremath{\\Gamma^*}$ indexed by $T$ and\n$T'$ respectively.\n\n\nOur main results involve the $\\ell_\\infty\/\\ell_\\infty$ norm applied to\nthe covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$, and to the inverse of a sub-block\nof the Hessian $\\ensuremath{\\Gamma^*}$. 
In particular, we define\n\\begin{eqnarray}\n\\label{EqnCovConst}\n\\ensuremath{K_{\\ensuremath{\\Sigma}^*}} & \\ensuremath{: =} & \\matnorm{\\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty} \\; = \\; \\Big(\n \\max_{i=1, \\ldots ,\\ensuremath{p}} \\sum_{j=1}^\\ensuremath{p} |\\ensuremath{\\ensuremath{\\Sigma}^*}_{ij}| \\Big),\n\\end{eqnarray}\ncorresponding to the $\\ell_\\infty$-operator norm of the true\ncovariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$, and\n\\begin{eqnarray}\n\\ensuremath{K_{\\ensuremath{\\Gamma^*}}} & \\ensuremath{: =} & \\matnorm{(\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\n\\ensuremath{S}})^{-1}}{\\infty} \\; = \\; \\matnorm{([ {\\ensuremath{\\Theta^*}}^{-1}\n\\otimes {\\ensuremath{\\Theta^*}}^{-1}]_{\\ensuremath{S} \\ensuremath{S}})^{-1}}{\\infty}.\n\\end{eqnarray}\nOur analysis keeps explicit track of these quantities, so that they\ncan scale in a non-trivial manner with the problem dimension $\\ensuremath{p}$.\n\n\nWe assume the Hessian satisfies the following type of \\emph{mutual\nincoherence or irrepresentable condition}:\n\\begin{asss}\n\\label{AssInco}\nThere exists some $\\mutinco \\in (0,1]$ such that\n\\begin{eqnarray}\n\\label{EqnInco}\n\\matnorm{\\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c} \\ensuremath{S}} (\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\n\\ensuremath{S}})^{-1}}{\\infty} & \\leq & (1 - \\mutinco).\n\\end{eqnarray}\n\\end{asss}\n\nThe underlying intuition is that this assumption imposes control on\nthe influence that the non-edge terms, indexed by $\\ensuremath{\\EsetPlus^c}$, can\nhave on the edge-based terms, indexed by $\\ensuremath{S}$. It is worth\nnoting that a similar condition for the Lasso, with the covariance\nmatrix $\\Sigma^*$ taking the place of the matrix $\\ensuremath{\\Gamma^*}$ above, is\nnecessary and sufficient for support recovery using the ordinary\nLasso~\\cite{MeinsBuhl2006,Tropp2006,Wainwright2006_new,Zhao06}. See\nSection~\\ref{SecInco} for illustration of the form taken by\nAssumption~\\ref{AssInco} for specific graphical models.\n\nA remark on notation: although our analysis allows the quantities\n$\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}, \\ensuremath{K_{\\ensuremath{\\Gamma^*}}}$ as well as the model size $\\ensuremath{p}$ and maximum\nnode-degree $\\ensuremath{\\ensuremath{d}}$ to grow with the sample size $\\ensuremath{n}$, we\nsuppress this dependence on $\\ensuremath{n}$ in their notation.\n\n\n\\subsection{Rates in elementwise $\\ell_\\infty$-norm}\n\\label{SecEllinf}\n\nWe begin with a result that provides sufficient conditions on the\nsample size $\\ensuremath{n}$ for bounds in the elementwise\n$\\ell_\\infty$-norm. This result is stated in terms of the tail\nfunction $f$, and its inverses $\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}$ and $\\ensuremath{\\widebar{\\delta}_f}$ (equations~\\eqref{EqnSamTailN} and~\\eqref{EqnSamTailT}), and so covers a general range of possible\ntail behaviors. So as to make it more concrete, we follow the general\nstatement with corollaries for the special cases of exponential-type\nand polynomial-type tail functions, corresponding to sub-Gaussian and\nmoment-bounded variables respectively. \n\nIn the theorem statement, the choice of regularization constant\n$\\ensuremath{\\lambda_\\ensuremath{n}}$ is specified in terms of a user-defined parameter $\\tau >\n2$. 
Larger choices of $\\tau$ yield faster rates of convergence in\nthe probability with which the claims hold, but also lead to more\nstringent requirements on the sample size.\n\\begin{theos}\n\\label{ThmMain}\nConsider a distribution satisfying the incoherence\nassumption~\\eqref{EqnInco} with parameter \\mbox{$\\mutinco \\in (0,1]$,}\nand the tail condition~\\eqref{EqnSamTail} with parameters\n$\\ensuremath{\\mathcal{T}}(f, v_*)$. Let $\\ensuremath{\\widehat{\\Theta}}$ be the unique optimum of\nthe log-determinant program~\\eqref{EqnGaussMLE} with regularization\nparameter \\mbox{$\\ensuremath{\\lambda_\\ensuremath{n}} = (8\/\\mutinco) \\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$} for some $\\tau > 2$. Then,\nif the sample size is lower bounded as\n\\begin{eqnarray}\n\\label{EqnSampleBound}\n\\ensuremath{n} & > & \\ensuremath{{\\widebar{\\ensuremath{n}}_f}} \\Biggr( 1 \\Big\/\\max \\Big \\{ v_*,\\; 6\n \\big(1 + 8\\mutinco^{-1} \\big) \\: \\ensuremath{\\ensuremath{d}}\\, \\max\\{\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}\n \\ensuremath{K_{\\ensuremath{\\Gamma^*}}},\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{3}\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^{2} \\} \\Big \\} , \\; \\;\\ensuremath{p}^{\\tau}\n \\Biggr),\n\\end{eqnarray}\nthen with probability greater than $1-1\/\\ensuremath{p}^{\\tau - 2}\n\\rightarrow 1$, we have:\n\\begin{enumerate}\n\\item[(a)] The estimate $\\ensuremath{\\widehat{\\Theta}}$ satisfies the elementwise\n$\\ell_\\infty$-bound:\n\\begin{eqnarray}\n\\label{EqnEllinfBound}\n\\| \\estim{\\Theta} - \\ensuremath{\\Theta^*}\\|_\\infty & \\leq & \\big\\{2 \\big(1 + 8 \\mutinco^{-1}\\big)\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}\\big\\}\\;\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}).\n\\end{eqnarray}\n\\item[(b)] It specifies an edge set $\\ensuremath{E}(\\ensuremath{\\widehat{\\Theta}})$ that is a\nsubset of the true edge set $\\ensuremath{E}(\\ensuremath{\\Theta^*})$, and includes all\nedges $(i,j)$ with $|\\ensuremath{\\Theta^*}_{ij}| > \\big\\{2 \\big(1 + 8 \\mutinco^{-1}\\big)\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}\\big\\}\\; \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$.\n\\end{enumerate}\n\\end{theos}\n\nIf we assume that the various quantities $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}},\n\\mutinco$ remain constant as a function of $(\\ensuremath{n}, \\ensuremath{p},\n\\ensuremath{\\ensuremath{d}})$, we have the elementwise $\\ell_\\infty$ bound \\mbox{$\\|\n\\estim{\\Theta} - \\ensuremath{\\Theta^*}\\|_\\infty =\n{\\mathcal{O}}(\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}))$}, so that the inverse\ntail function $\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$ (see\nequation~\\eqref{EqnSamTailT}) specifies rate of convergence in the\nelement-wise $\\ell_\\infty$-norm. In the following section, we derive\nthe consequences of this $\\ell_\\infty$-bound for two specific tail\nfunctions, namely those of exponential-type with $\\ensuremath{a} = 2$, and\npolynomial-type tails (see Section~\\ref{SecDistAssum}). Turning to\nthe other factors involved in the theorem statement, the quantities\n$\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}$ and $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}$ measure the sizes of the entries in the\ncovariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$ and inverse Hessian $(\\ensuremath{\\Gamma^*})^{-1}$\nrespectively. 
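\nBoth quantities are straightforward to evaluate for a given sparse model; a minimal sketch (purely illustrative, assuming \\texttt{numpy}, with a hypothetical chain-graph $\\ensuremath{\\Theta^*}$):\n\\begin{verbatim}\nimport numpy as np\n\np = 6\nTheta_star = np.eye(p) + 0.3 * (np.eye(p, k=1) + np.eye(p, k=-1))  # chain graph\nSigma_star = np.linalg.inv(Theta_star)\nGamma_star = np.kron(Sigma_star, Sigma_star)\n\n# S: vertex pairs (i, j) with Theta*_{ij} != 0 (edges plus the diagonal)\nS = [i * p + j for i in range(p) for j in range(p) if Theta_star[i, j] != 0]\n\nK_Sigma = np.abs(Sigma_star).sum(axis=1).max()   # l_inf operator norm of Sigma*\nK_Gamma = np.abs(np.linalg.inv(Gamma_star[np.ix_(S, S)])).sum(axis=1).max()\nprint(K_Sigma, K_Gamma)\n\\end{verbatim}\n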
Finally, the factor $(1 + \\frac{8}{\\mutinco})$ depends\non the irrepresentability assumption~\\ref{AssInco}, growing in\nparticular as the incoherence parameter $\\mutinco$ approaches $0$.\n\n\\subsubsection{Exponential-type tails}\n\nWe now discuss the consequences of Theorem~\\ref{ThmMain} for\ndistributions in which the sample covariance satisfies an\nexponential-type tail bound with exponent $\\ensuremath{a} = 2$. In\nparticular, recall from Lemma~\\ref{LEM_SAM_COV_BOUND_SUBG} that\nsuch a tail bound holds when the variables are sub-Gaussian.\n\\defc_1{c_1}\n\\defc_2{c_2}\n\n\n\\begin{cors} \n\\label{CorEllinfSubg}\nUnder the same conditions as Theorem~\\ref{ThmMain}, suppose moreover\nthat the variables $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ are sub-Gaussian with\nparameter $\\ensuremath{\\sigma}$, and the samples are drawn independently. Then if\nthe sample size $\\ensuremath{n}$ satisfies the bound\n\\begin{eqnarray}\n\\ensuremath{n} & > & \\ensuremath{C_1} \\; \\ensuremath{\\ensuremath{d}}^2 \\, (1 + \\frac{8}{\\mutinco})^2\n\\;\\big (\\tau \\log \\ensuremath{p} + \\log 4 \\big)\n\\end{eqnarray}\nwhere $\\ensuremath{C_1} \\ensuremath{: =} \\big\\{48\\sqrt{2} \\,(1 + 4 \\csubg^2) \\,\\max_{i} (\\CovMatStar_{ii}) \\, \\max\\{\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}\n\\ensuremath{K_{\\ensuremath{\\Gamma^*}}},\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{3}\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^{2} \\} \\big\\}^{2}$, then with\nprobability greater than $1-1\/\\ensuremath{p}^{\\tau -2}$, the estimate\n$\\ensuremath{\\widehat{\\Theta}}$ satisfies the bound,\n\\begin{eqnarray*}\n\\| \\estim{\\Theta} - \\ensuremath{\\Theta^*}\\|_\\infty & \\leq & \n\\big\\{16\\sqrt{2} \\,(1 + 4 \\csubg^2) \\,\\max_{i} (\\CovMatStar_{ii}) \\, (1 + 8\\mutinco^{-1})\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}\\big\\}\\; \\sqrt{\\frac{\\tau \\log \\ensuremath{p} + \\log 4}{\\ensuremath{n}}}.\n\\end{eqnarray*}\n\\end{cors}\n\\begin{proof}\nFrom Lemma~\\ref{LEM_SAM_COV_BOUND_SUBG}, when the rescaled variables\n$X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ are sub-Gaussian with parameter\n$\\ensuremath{\\sigma}$, the sample covariance entries satisfies a tail bound\n$\\ensuremath{\\mathcal{T}}(f, v_*)$ with with $v_* = \\big[\\max_{i}(\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii})\\,8(1 + 4 \\ensuremath{\\sigma}^2)\\big]^{-1}$ \nand $f(\\ensuremath{n},\\delta) = (1\/4) \\exp(c_{*} \\ensuremath{n} \\delta^2)$, where \\mbox{$c_{*} =\n\\big[128(1 + 4\\csubg^{2})^{2}\\max_{i} (\\CovMatStar_{ii})^2\\big]^{-1}$.} As a consequence, for this particular model, the\ninverse functions $\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$ and\n$\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}(\\delta,\\ensuremath{p}^{\\tau})$ take the form\n\\begin{subequations}\n\\label{EqnExpTailInv}\n\\begin{eqnarray}\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) & = &\n\\sqrt{\\frac{\\log(4\\,\\ensuremath{p}^{\\tau})}{c_{*} \\, \\ensuremath{n}}} \\; = \\;\n\\sqrt{ 128(1 + 4\\csubg^{2})^{2}\\max_{i} (\\CovMatStar_{ii})^2} \\; \\sqrt{\\frac{\\tau \\log \\ensuremath{p} + \\log 4}{\\ensuremath{n}}},\\\\\n\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}(\\delta,\\ensuremath{p}^{\\tau}) &= &\n\\frac{\\log(4\\,\\ensuremath{p}^{\\tau})} {c_{*} \\delta^2} \\; = \\; \n128(1 + 4\\csubg^{2})^{2}\\max_{i} (\\CovMatStar_{ii})^2 \\;\n\\biggr(\\frac{\\tau \\log \\ensuremath{p} + \\log 4}{\\delta^{2}}\\biggr).\n\\end{eqnarray}\n\\end{subequations}\nSubstituting these forms into 
the claim of Theorem~\\ref{ThmMain} and\ndoing some simple algebra yields the stated corollary.\n\\end{proof}\n\nWhen $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}, \\mutinco$ remain constant as a function of\n$(\\ensuremath{n}, \\ensuremath{p}, \\ensuremath{\\ensuremath{d}})$, the corollary can be summarized\nsuccinctly as a sample size of \\mbox{$\\ensuremath{n} = \\Omega(\\ensuremath{\\ensuremath{d}}^2 \\log\n\\ensuremath{p})$} samples ensures that an elementwise\n$\\ell_\\infty$ bound \\mbox{$\\| \\estim{\\Theta} - \\ensuremath{\\Theta^*}\\|_\\infty =\n{\\mathcal{O}}\\big( \\sqrt{\\frac{\\log \\ensuremath{p}}{\\ensuremath{n}}}\\big)$} holds with high probability. \nIn practice, one frequently considers graphs with maximum node degrees\n$\\ensuremath{\\ensuremath{d}}$ that either remain bounded, or that grow sub-linearly with\nthe graph size (i.e., $\\ensuremath{\\ensuremath{d}} = o(\\ensuremath{p})$). In such cases, the sample\nsize allowed by the corollary can be substantially smaller than the\ngraph size, so that for sub-Gaussian random variables, the method can\nsucceed in the $\\ensuremath{p} \\gg \\ensuremath{n}$ regime.\n\n\\subsubsection{Polynomial-type tails}\n\nWe now state a corollary for the case of a polynomial-type tail\nfunction, such as those ensured by the case of random variables with\nappropriately bounded moments.\n\\begin{cors}\n\\label{CorEllinfPoly}\nUnder the assumptions of Theorem~\\ref{ThmMain}, suppose the rescaled\nvariables $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ have $4m^{th}$\nmoments upper bounded by $K_{\\momentpow}$, and the sampling is i.i.d.\nThen if the sample size $\\ensuremath{n}$ satisfies the bound\n\\begin{eqnarray}\n\\label{EqnPolyTailSampSize}\n\\ensuremath{n} & > & \\ensuremath{C_2} \\, \\ensuremath{\\ensuremath{d}}^{2}\\, \\big(1 + \\frac{8}{\\mutinco}\n\\big)^2\\, \\ensuremath{p}^{\\tau\/m},\n\\end{eqnarray}\nwhere $\\ensuremath{C_2} \\ensuremath{: =} \\big\\{12 m \\,[m (K_{\\momentpow} +\n 1)]^{\\frac{1}{2m}}\\, \\max_i(\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii})\\max\n\\{\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{2} \\ensuremath{K_{\\ensuremath{\\Gamma^*}}},\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{4} \\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^{2} \\} \\big\\}^{2}$,\nthen with probability greater than $1-1\/\\ensuremath{p}^{\\tau -2}$, the\nestimate $\\ensuremath{\\widehat{\\Theta}}$ satisfies the bound,\n\\begin{eqnarray*}\n\\| \\estim{\\Theta} - \\ensuremath{\\Theta^*}\\|_\\infty & \\leq & \\{4m\n [m (K_{\\momentpow} + 1)]^{\\frac{1}{2m}}\\, \\big(1 +\n \\frac{8}{\\mutinco} \\big) \\ensuremath{K_{\\ensuremath{\\Gamma^*}}}\\}\\;\n \\sqrt{\\frac{\\ensuremath{p}^{\\tau\/m}}{\\ensuremath{n}}}.\n\\end{eqnarray*}\n\\end{cors}\n\\begin{proof}\nRecall from Lemma~\\ref{LEM_SAM_COV_BOUND_MOMENT} that when the\nrescaled variables $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ have bounded\n$4m^{th}$ moments, then the sample covariance $\\ensuremath{\\widehat{\\ensuremath{\\Sigma}}}$\nsatisfies the tail condition $\\ensuremath{\\mathcal{T}}(f, v_*)$ with\n$v_* = 0$, and with $f(\\ensuremath{n},\\delta) = c_{*}\n\\ensuremath{n}^{m} \\delta^{2m}$ with $c_{*}$ defined as\n$c_{*} = 1\/\\big\\{m^{2m+1} 2^{2m} (\\max_i\n\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}) ^{2m}\\, (K_{\\momentpow} + 1 ) \\big\\}$. 
As a\nconsequence, for this particular model, the inverse functions take the\nform\n\\begin{subequations}\n\\label{EqnPolyTailInv}\n\\begin{eqnarray}\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) & = &\n\\frac{(\\ensuremath{p}^{\\tau}\/c_{*})^{1\/2m}}{\\sqrt{\\ensuremath{n}}}\n\\,=\\, \\{2m [m (K_{\\momentpow} +\n1)]^{\\frac{1}{2m}} \\max_i \\ensuremath{\\ensuremath{\\Sigma}^*}_{ii} \\}\\;\n\\sqrt{\\frac{\\ensuremath{p}^{\\tau\/m}}{\\ensuremath{n}}},\\\\\n\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}(\\delta,\\ensuremath{p}^{\\tau}) & = &\n\\frac{(\\ensuremath{p}^{\\tau}\/c_{*})^{1\/m}}{\\delta^{2}} \\,=\\,\n\\{2m [m (K_{\\momentpow} + 1)]^{\\frac{1}{2m}}\n\\max_i \\ensuremath{\\ensuremath{\\Sigma}^*}_{ii} \\}^{2}\\;\n\\big(\\frac{\\ensuremath{p}^{\\tau\/m}}{\\delta^{2}}\\big).\n\\end{eqnarray}\n\\end{subequations}\nThe claim then follows by substituting these expressions into Theorem~\\ref{ThmMain} and performing some algebra.\n\\end{proof}\n\nWhen the quantities $(\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}, \\mutinco)$ remain constant\nas a function of $(\\ensuremath{n}, \\ensuremath{p}, \\ensuremath{\\ensuremath{d}})$,\nCorollary~\\ref{CorEllinfPoly} can be summarized succinctly as\n\\mbox{$\\ensuremath{n} = \\Omega(\\ensuremath{\\ensuremath{d}}^2 \\, \\ensuremath{p}^{\\tau\/m})$}\nsamples are sufficient to achieve a convergence rate in elementwise\n$\\ell_\\infty$-norm of the order \\mbox{$\\| \\estim{\\Theta} -\n\\ensuremath{\\Theta^*}\\|_\\infty = {\\mathcal{O}}\\big(\n\\sqrt{\\frac{\\ensuremath{p}^{\\tau\/m}}{\\ensuremath{n}}}\\big)$,} with high\nprobability. Consequently, both the required sample size and the rate\nof convergence of the estimator are polynomial in the number of\nvariables $\\ensuremath{p}$. It is worth contrasting these rates with the case\nof sub-Gaussian random variables, where the rates have only\nlogarithmic dependence on the problem size $\\ensuremath{p}$.\n\n\n \n\n\n\n\\subsection{Model selection consistency}\n\\label{SecModelCons}\n\nPart (b) of Theorem~\\ref{ThmMain} asserts that the edge set\n$\\ensuremath{E}(\\ensuremath{\\widehat{\\Theta}})$ returned by the estimator is contained within the\ntrue edge set $\\ensuremath{E}(\\ensuremath{\\Theta^*})$---meaning that it correctly\n\\emph{excludes} all non-edges---and that it includes all edges that\nare ``large'', relative to the $\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$\ndecay of the error. The following result, essentially a minor\nrefinement of Theorem~\\ref{ThmMain}, provides sufficient conditions\nlinking the sample size $\\ensuremath{n}$ and the minimum value\n\\begin{eqnarray}\n\\label{EqnDefnThetaMin}\n\\ensuremath{\\theta_{\\operatorname{min}}} & \\ensuremath{: =} & \\min_{(i,j) \\in \\ensuremath{E}(\\ensuremath{\\Theta^*})} |\\ensuremath{\\Theta^*}_{ij}|\n\\end{eqnarray}\nfor model selection consistency. More precisely, define the event\n\\begin{eqnarray}\n\\ensuremath{\\mathcal{M}}(\\ensuremath{\\widehat{\\Theta}}; \\ensuremath{\\Theta^*}) & \\ensuremath{: =} & \\big \\{ \\textrm{sign}(\\ensuremath{\\widehat{\\Theta}}_{ij})\n= \\textrm{sign}(\\ensuremath{\\Theta^*}_{ij}) \\quad \\forall (i,j) \\in \\ensuremath{E}(\\ensuremath{\\Theta^*})\n\\big \\}\n\\end{eqnarray}\nthat the estimator $\\ensuremath{\\widehat{\\Theta}}$ has the same edge set as $\\ensuremath{\\Theta^*}$,\nand moreover recovers the correct signs on these edges. 
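\nOnce the program~\\eqref{EqnGaussMLE} has been solved, the event $\\ensuremath{\\mathcal{M}}(\\ensuremath{\\widehat{\\Theta}}; \\ensuremath{\\Theta^*})$ is easy to check empirically. The sketch below is purely illustrative: it assumes Python with \\texttt{numpy} and \\texttt{cvxpy} (together with an installed conic solver that supports the log-determinant atom), fits the estimator on synthetic Gaussian data from a hypothetical chain graph, and tests sign recovery on the true edge set.\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\nrng = np.random.default_rng(3)\np, n, lam = 8, 4000, 0.05\nTheta_star = np.eye(p) + 0.25 * (np.eye(p, k=1) + np.eye(p, k=-1))  # chain graph\nSigma_star = np.linalg.inv(Theta_star)\n\nX = rng.multivariate_normal(np.zeros(p), Sigma_star, size=n)\nS_hat = X.T @ X / n                                  # sample covariance\n\nTheta = cp.Variable((p, p), symmetric=True)\noff_diag = 1.0 - np.eye(p)\nobjective = (cp.trace(Theta @ S_hat) - cp.log_det(Theta)\n             + lam * cp.sum(cp.abs(cp.multiply(off_diag, Theta))))\ncp.Problem(cp.Minimize(objective)).solve()\nTheta_hat = Theta.value\n\noff_mask = ~np.eye(p, dtype=bool)\nedges = (Theta_star != 0) & off_mask                 # true edge set E(Theta*)\nsign_ok = np.all(np.sign(Theta_hat[edges]) == np.sign(Theta_star[edges]))\nno_false_edges = np.all(np.abs(Theta_hat[off_mask & ~edges]) < 1e-3)  # up to solver tolerance\nprint(sign_ok, no_false_edges)\n\\end{verbatim}\n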
With this\nnotation, we have:\n\\begin{theos}\n\\label{ThmModel}\n\nUnder the same conditions as Theorem~\\ref{ThmMain}, suppose that\nthe sample size satisfies the lower bound\n\\begin{eqnarray}\n\\label{EqnNumobsModel}\n\\ensuremath{n} & > & \\ensuremath{{\\widebar{\\ensuremath{n}}_f}} \\Biggr( 1 \\big\/\\max \\Big \\{ 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} (1 +\n8\\mutinco^{-1})\\, \\ensuremath{\\theta_{\\operatorname{min}}}^{-1}, \\; v_*, \\; 6 \\big (1 +\n8\\mutinco^{-1} \\big) \\: \\ensuremath{\\ensuremath{d}}\\, \\max\\{\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}\n\\ensuremath{K_{\\ensuremath{\\Gamma^*}}},\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{3}\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^{2} \\} \\Big\\} , \\; \\;\\ensuremath{p}^{\\tau}\n\\Biggr).\n\\end{eqnarray}\nThen the estimator is model selection consistent with high probability\nas $\\ensuremath{p} \\rightarrow \\infty$,\n\\begin{eqnarray}\n\\ensuremath{\\mathbb{P}} \\big[ \\ensuremath{\\mathcal{M}}(\\ensuremath{\\widehat{\\Theta}}; \\ensuremath{\\Theta^*}) \\big] & \\geq & 1 -\n1\/\\ensuremath{p}^{\\tau - 2} \\; \\rightarrow \\; 1.\n\\end{eqnarray}\n\\end{theos}\n\nIn comparison to Theorem~\\ref{ThmMain}, the sample size\nrequirement~\\eqref{EqnNumobsModel} differs only in the additional term\n$\\frac{2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} (1 + \\frac{8}{\\mutinco})}{\\ensuremath{\\theta_{\\operatorname{min}}}}$ involving the\nminimum value. This term can be viewed as constraining how quickly\nthe minimum can decay as a function of $(\\ensuremath{n}, \\ensuremath{p})$, as we\nillustrate with some concrete tail functions.\n\n\n\\subsubsection{Exponential-type tails} \n\nRecall the setting of Section~\\ref{SecSamCovBoundSubG}, where the\nrandom variables $\\{X^{(\\obsind)}_{i}\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}\\}$ are\nsub-Gaussian with parameter $\\ensuremath{\\sigma}$. Let us suppose that the\nparameters $(\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}, \\mutinco)$ are viewed as constants\n(not scaling with $(\\ensuremath{p}, \\ensuremath{\\ensuremath{d}})$. Then, using the\nexpression~\\eqref{EqnExpTailInv} for the inverse function\n$\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}$ in this setting, a corollary of Theorem~\\ref{ThmModel}\nis that a sample size \n\\begin{eqnarray}\n\\label{EqnModelSampSub}\n\\ensuremath{n} & = & \\Omega \\big( (\\ensuremath{\\ensuremath{d}}^2 + \\ensuremath{\\theta_{\\operatorname{min}}}^{-2}) \\, \\tau \\log\n\\ensuremath{p} \\big)\n\\end{eqnarray}\nis sufficient for model selection consistency with probability greater\nthan $1-1\/\\ensuremath{p}^{\\tau-2}$. Alternatively, we can state that $\\ensuremath{n}\n= \\Omega(\\tau \\ensuremath{\\ensuremath{d}}^2 \\log \\ensuremath{p})$ samples are sufficient, as\nalong as the minimum value scales as \\mbox{$\\ensuremath{\\theta_{\\operatorname{min}}} =\n\\Omega(\\sqrt{\\frac{\\log \\ensuremath{p}}{\\ensuremath{n}}})$.}\n\n\\subsubsection{Polynomial-type tails} \n\nRecall the setting of Section~\\ref{SecSamCovBoundMoment}, where the\nrescaled random variables $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ have bounded\n$4m^{th}$ moments. 
Using the expression~\\eqref{EqnPolyTailInv}\nfor the inverse function $\\ensuremath{{\\widebar{\\ensuremath{n}}_f}}$ in this setting, a corollary of\nTheorem~\\ref{ThmModel} is that a sample size\n\\begin{eqnarray}\n\\label{EqnModelSampPoly}\n\\ensuremath{n} & = & \\Omega\\big( (\\ensuremath{\\ensuremath{d}}^2 + \\ensuremath{\\theta_{\\operatorname{min}}}^{-2})\\,\n\\ensuremath{p}^{\\tau\/m} \\big)\n\\end{eqnarray}\nis sufficient for model selection consistency with probability greater\nthan $1-1\/\\ensuremath{p}^{\\tau-2}$. Alternatively, we can state that $\\ensuremath{n}\n= \\Omega(\\ensuremath{\\ensuremath{d}}^2 \\ensuremath{p}^{\\tau\/m})$ samples are\nsufficient, as long as the minimum value scales as \\mbox{$\\ensuremath{\\theta_{\\operatorname{min}}} =\n\\Omega(\\ensuremath{p}^{\\tau\/(2m)}\/{\\sqrt{\\ensuremath{n}}})$.}\n\n\n\n\\subsection{Comparison to neighbor-based graphical model selection}\n\\label{SecInco}\n\nSuppose that $X$ follows a multivariate Gaussian distribution, so that\nthe structure of the concentration matrix $\\ensuremath{\\Theta^*}$ specifies the\nstructure of a Gaussian graphical model. In this case, it is\ninteresting to compare our sufficient conditions for graphical model\nconsistency of the log-determinant approach, as specified in\nTheorem~\\ref{ThmModel}, to those of the neighborhood-based method,\nfirst proposed by \\citet{MeinsBuhl2006}. The latter method estimates\nthe full graph structure by performing an $\\ell_1$-regularized linear\nregression (Lasso)---of the form $X_i = \\sum_{j \\neq i} \\theta_{ij}\nX_j + W$---of each node on its neighbors and using the support of the\nestimated regression vector $\\theta$ to predict the neighborhood set.\nThese neighborhoods are then combined, by either an OR rule or an AND\nrule, to estimate the full graph. Various aspects of the\nhigh-dimensional model selection consistency of the Lasso are now\nunderstood~\\cite{MeinsBuhl2006,Wainwright2006_new,Zhao06}; for\ninstance, it is known that mutual incoherence or irrepresentability\nconditions are necessary and sufficient for its\nsuccess~\\cite{Tropp2006,Zhao06}. In terms of scaling,\nWainwright~\\cite{Wainwright2006_new} shows that the Lasso succeeds\nwith high probability if and only if the sample size scales as\n\\mbox{$\\ensuremath{n} \\asymp c (\\{\\ensuremath{\\ensuremath{d}} + \\theta_{\\operatorname{min}}^{-2}\n\\} \\log \\ensuremath{p})$,} where $c$ is a constant determined by the covariance\nmatrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$. By a union bound over the $\\ensuremath{p}$ nodes in the\ngraph, it then follows that the neighbor-based graph selection method\nin turn succeeds with high probability if $\\ensuremath{n} = \\Omega(\\{\\ensuremath{\\ensuremath{d}}\n+ \\ensuremath{\\theta_{\\operatorname{min}}}^{-2} \\} \\log \\ensuremath{p})$.\n\nFor comparison, consider the application of Theorem~\\ref{ThmModel} to\nthe case where the variables are sub-Gaussian (which includes the\nGaussian case). For this setting, we have seen that the scaling\nrequired by Theorem~\\ref{ThmModel} is $\\ensuremath{n} = \\Omega( \\{ \\ensuremath{\\ensuremath{d}}^2\n+ \\ensuremath{\\theta_{\\operatorname{min}}}^{-2} \\} \\log \\ensuremath{p})$, so that the dependence of the\nlog-determinant approach on $\\ensuremath{\\theta_{\\operatorname{min}}}$ is identical, but it depends\nquadratically on the maximum degree $\\ensuremath{\\ensuremath{d}}$.
We suspect that\nthe quadratic dependence $\\ensuremath{\\ensuremath{d}}^2$ might be an artifact of our\nanalysis, but have not yet been able to reduce it to $\\ensuremath{\\ensuremath{d}}$.\nOtherwise, the primary difference between the two methods is in the\nnature of the irrepresentability assumptions that are imposed: our\nmethod requires Assumption~\\ref{AssInco} on the Hessian $\\ensuremath{\\Gamma^*}$,\nwhereas the neighborhood-based method imposes this same type of\ncondition on a set of $\\ensuremath{p}$ covariance matrices, each of size\n$(\\ensuremath{p} -1) \\times (\\ensuremath{p}-1)$, one for each node of the graph. Below\nwe show two cases where the Lasso irrepresentability condition holds,\nwhile the log-determinant requirement fails. However, in general, we\ndo not know whether the log-determinant irrepresentability strictly\ndominates its analog for the Lasso.\n\n\n\\subsubsection{Illustration of irrepresentability: Diamond graph} \nConsider the following Gaussian graphical model example from\n\\citet{Meins2008}. Figure~\\ref{FigSimpGraph}(a) shows a\ndiamond-shaped graph $G = (V,E)$, with vertex set $V = \\{1,2,3,4\\}$\nand edge-set as the fully connected graph over $V$ with the edge\n$(1,4)$ removed.\n\\begin{figure}[htb]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\psfrag{#1#}{$1$} \\psfrag{#2#}{$2$} \\psfrag{#3#}{$3$}\n\\psfrag{#4#}{$4$} \\widgraph{.3\\textwidth}{meins.eps} & \\hspace*{.2in}\n& \\psfrag{#1#}{$1$} \\psfrag{#2#}{$2$} \\psfrag{#3#}{$3$}\n\\psfrag{#4#}{$4$} \\widgraph{.3\\textwidth}{small_star.eps} \\\\\n(a) & & (b)\n\\end{tabular}\n\\end{center}\n\\caption{(a) Graph of the example discussed by~\\citet{Meins2008}. (b)\nA simple $4$-node star graph.}\n\\label{FigSimpGraph}\n\\end{figure}\nThe covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$ is parameterized by the\ncorrelation parameter $\\rho \\in [0,1\/\\sqrt{2}]$: the diagonal entries\nare set to $\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii} = 1$, for all $i \\in \\ensuremath{V}$; the entries\ncorresponding to edges are set to $\\ensuremath{\\ensuremath{\\Sigma}^*}_{ij} = \\rho$ for $(i,j)\n\\in \\ensuremath{E} \\backslash \\{(2,3)\\}$, $\\ensuremath{\\ensuremath{\\Sigma}^*}_{23} = 0$; and finally\nthe entry corresponding to the non-edge is set as $\\ensuremath{\\ensuremath{\\Sigma}^*}_{14} =\n2 \\rho^2$. \\citet{Meins2008} showed that the $\\ell_1$-penalized\nlog-determinant estimator $\\ensuremath{\\widehat{\\Theta}}$ fails to recover the graph\nstructure, for any sample size, if $\\rho > -1 + (3\/2)^{1\/2} \\approx\n0.23$. It is instructive to compare this necessary condition to the\nsufficient condition provided in our analysis, namely the incoherence\nAssumption~\\ref{AssInco} as applied to the Hessian $\\ensuremath{\\Gamma^*}$. For\nthis particular example, a little calculation shows that\nAssumption~\\ref{AssInco} is equivalent to the constraint\n\\begin{eqnarray*}\n4 |\\rho| (|\\rho| + 1) & < & 1,\n\\end{eqnarray*}\nan inequality which holds for all $\\rho \\in (-0.2017, 0.2017)$. Note\nthat the upper value $0.2017$ is just below the necessary threshold\ndiscussed by \\citet{Meins2008}. On the other hand, the\nirrepresentability condition for the Lasso requires only that $2\n|\\rho| < 1$, i.e., $\\rho \\in (-0.5,0.5)$.
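\nBoth thresholds are straightforward to confirm numerically. The following sketch (a minimal illustration assuming NumPy; the helper functions and the test values of $\\rho$ are our own choices) evaluates the incoherence quantity of Assumption~\\ref{AssInco} directly, using the fact that $\\ensuremath{\\Gamma^*} = \\invn{\\ensuremath{\\Theta^*}} \\otimes \\invn{\\ensuremath{\\Theta^*}} = \\ensuremath{\\ensuremath{\\Sigma}^*} \\otimes \\ensuremath{\\ensuremath{\\Sigma}^*}$; it can be adapted to the star graph of the next example.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\n\ndef incoherence(Sigma, edges):\n    # max over rows e outside S of the l1 norm of\n    # Gamma_{e,S} (Gamma_{S,S})^{-1}, where Gamma = Sigma kron Sigma\n    # and S collects the edge pairs plus the diagonal.\n    p = Sigma.shape[0]\n    Gamma = np.kron(Sigma, Sigma)\n    S = set((i, i) for i in range(p))\n    for (i, j) in edges:\n        S.add((i, j))\n        S.add((j, i))\n    idx = lambda i, j: i * p + j\n    S_idx = sorted(idx(i, j) for (i, j) in S)\n    Sc_idx = [idx(i, j) for (i, j) in product(range(p), repeat=2)\n              if (i, j) not in S]\n    M = Gamma[np.ix_(Sc_idx, S_idx)].dot(\n        np.linalg.inv(Gamma[np.ix_(S_idx, S_idx)]))\n    return np.abs(M).sum(axis=1).max()\n\ndef diamond_sigma(rho):\n    # Diamond graph above: correlation rho on all edges except (2,3),\n    # whose covariance entry is zero; the non-edge entry is 2 rho^2.\n    Sig = np.eye(4)\n    for (i, j) in [(0, 1), (0, 2), (1, 3), (2, 3)]:\n        Sig[i, j] = Sig[j, i] = rho\n    Sig[0, 3] = Sig[3, 0] = 2 * rho ** 2\n    return Sig\n\nedges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]\nfor rho in [0.15, 0.25]:   # below and above the threshold 0.2017\n    print(rho, incoherence(diamond_sigma(rho), edges))\n\\end{verbatim}\nFor $\\rho = 0.15$ the computed value should fall below one, whereas for $\\rho = 0.25$ it should exceed one, in accordance with the constraint $4 |\\rho| (|\\rho| + 1) < 1$ derived above.\n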
Thus, in the regime $|\\rho|\n\\in [0.2017,0.5)$, the Lasso irrepresentability condition holds while\nthe log-determinant counterpart fails.\n\n\n\\subsubsection{Illustration of irrepresentability: Star graphs} \nA second interesting example is the star-shaped graphical model,\nillustrated in Figure~\\ref{FigSimpGraph}(b), which consists of a\nsingle hub node connected to the rest of the spoke nodes. We consider\na four-node graph, with vertex set $V = \\{1,2,3,4\\}$ and edge-set $E =\n\\{(1,s) \\mid s \\in \\{2,3,4\\}\\}$. The covariance matrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$\nis parameterized by the correlation parameter $\\rho \\in [-1,1]$: the\ndiagonal entries are set to $\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii} = 1$, for all $i \\in V$;\nthe entries corresponding to edges are set to $\\ensuremath{\\ensuremath{\\Sigma}^*}_{ij} =\n\\rho$ for $(i,j) \\in E$; while the non-edge entries are set as\n$\\ensuremath{\\ensuremath{\\Sigma}^*}_{ij} = \\rho^2$ for $(i,j) \\notin E$. Consequently, for\nthis particular example, Assumption~\\ref{AssInco} reduces to the\nconstraint $|\\rho| (|\\rho| + 2) < 1$, which holds for all $\\rho \\in\n(-0.414, 0.414)$. The irrepresentability condition for the Lasso on\nthe other hand allows the full range $\\rho \\in (-1,1)$. Thus there is\nagain a regime, $|\\rho| \\in [0.414,1)$, where the Lasso\nirrepresentability condition holds while the log-determinant\ncounterpart fails.\n\n\n\\subsection{Rates in Frobenius and spectral norm}\n\\label{SecFrob}\nWe now derive some corollaries of Theorem~\\ref{ThmMain} concerning\nestimation of $\\ensuremath{\\Theta^*}$ in Frobenius norm, as well as the spectral\nnorm. Recall that $\\ensuremath{s} = |\\ensuremath{E}(\\ensuremath{\\Theta^*})|$ denotes the total\nnumber of off-diagonal non-zeros in $\\ensuremath{\\Theta^*}$.\n\\begin{cors}\n\\label{CorOperatorNorm} \nUnder the same assumptions as Theorem~\\ref{ThmMain}, with probability\nat least $1 - 1\/\\ensuremath{p}^{\\tau - 2}$, the\nestimator $\\estim{\\Theta}$ satisfies\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{EqnPrecFrob}\n\\matnorm{\\ensuremath{\\widehat{\\Theta}} - \\ensuremath{\\Theta^*}}{F} & \\leq & \\big\\{2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big(1\n + \\frac{8}{\\mutinco}\\big)\\big\\} \\, \\sqrt{\\ensuremath{s} +\\ensuremath{p}} \\;\n \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}), \\qquad \\mbox{and} \\\\\n\\label{EqnPrecSpectral}\n\\matnorm{\\ensuremath{\\widehat{\\Theta}} - \\ensuremath{\\Theta^*}}{2} & \\leq & \\big\\{2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big(1\n + \\frac{8}{\\mutinco}\\big) \\big\\}\\, \\min \\{\\sqrt{\\ensuremath{s} +\\ensuremath{p}}, \\,\n \\ensuremath{\\ensuremath{d}} \\} \\; \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}).\n\\end{eqnarray}\n\\end{subequations}\n\\end{cors}\n\\begin{proof}\nWith the shorthand notation $\\ensuremath{\\nu} \\ensuremath{: =} 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} (1 + 8\/\\mutinco)\n\\; \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$, Theorem~\\ref{ThmMain}\nguarantees that, with probability at least $1 - 1\/\\ensuremath{p}^{\\tau -\n2}$, $\\|\\ensuremath{\\widehat{\\Theta}} - \\ensuremath{\\Theta^*}\\|_\\infty \\leq \\ensuremath{\\nu}$.
Since the edge\nset of $\\ensuremath{\\widehat{\\Theta}}$ is a subset of that of $\\ensuremath{\\Theta^*}$, and\n$\\ensuremath{\\Theta^*}$ has at most $\\ensuremath{p} + \\ensuremath{s}$ non-zeros (including the\ndiagonal), we conclude that\n\\begin{eqnarray*}\n\\matnorm{\\ensuremath{\\widehat{\\Theta}} - \\ensuremath{\\Theta^*}}{F} & = & \\big[ \\sum_{i=1}^\\ensuremath{p}\n(\\ensuremath{\\widehat{\\Theta}}_{ii} - \\ensuremath{\\Theta^*}_{ii})^2 + \\sum_{(i,j) \\in \\ensuremath{E}}\n(\\ensuremath{\\widehat{\\Theta}}_{ij} - \\ensuremath{\\Theta^*}_{ij})^2 \\big]^{1\/2} \\\\\n& \\leq & \\ensuremath{\\nu} \\; \\sqrt{\\ensuremath{s} + \\ensuremath{p}},\n\\end{eqnarray*}\nfrom which the bound~\\eqref{EqnPrecFrob} follows. On the other hand,\nfor a symmetric matrix, we have\n\\begin{eqnarray}\\label{EqnPrecInftyOp}\n\\matnorm{\\ensuremath{\\widehat{\\Theta}}- \\ensuremath{\\Theta^*}}{2} & \\leq & \\matnorm{\\ensuremath{\\widehat{\\Theta}} -\n\\ensuremath{\\Theta^*}}{\\infty} \\; \\leq \\; \\ensuremath{\\ensuremath{d}} \\ensuremath{\\nu},\n\\end{eqnarray}\nusing the definition of the $\\ell_\\infty$-operator norm, and the fact\nthat $\\ensuremath{\\widehat{\\Theta}}$ and $\\ensuremath{\\Theta^*}$ have at most $\\ensuremath{\\ensuremath{d}}$ non-zeros per\nrow. Since the Frobenius norm upper bounds the spectral norm, the\nbound~\\eqref{EqnPrecSpectral} follows.\n\n\n\\end{proof}\n\n\\subsubsection{Exponential-type tails}\nFor the exponential tail function case where the rescaled random\nvariables $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ are sub-Gaussian with\nparameter $\\ensuremath{\\sigma}$, we can use the expression~\\eqref{EqnExpTailInv}\nfor the inverse function $\\ensuremath{\\widebar{\\delta}_f}$ to derive rates in Frobenius\nand spectral norms. When the quantities $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}},\n\\mutinco$ remain constant, these bounds can be summarized succinctly\nas follows: a sample size \\mbox{$\\ensuremath{n} = \\Omega(\\ensuremath{\\ensuremath{d}}^2 \\log \\ensuremath{p})$} is\nsufficient to guarantee the bounds\n\\begin{subequations}\n\\begin{eqnarray}\n\\matnorm{\\estim{\\Theta} - \\ensuremath{\\Theta^*}}{F} & = &\n{\\mathcal{O}}\\biggl(\\sqrt{\\frac{(\\ensuremath{s} + \\ensuremath{p})\\,\\log \\ensuremath{p}}{\\ensuremath{n}}}\\,\\biggr),\n\\quad \\mbox{and} \\\\\n\\matnorm{\\estim{\\Theta} - \\ensuremath{\\Theta^*}}{2} & = &\n{\\mathcal{O}}\\biggl(\\sqrt{\\frac{\\min\\{\\ensuremath{s} + \\ensuremath{p},\\,\\ensuremath{\\ensuremath{d}}^{2}\\} \\, \\log\n\\ensuremath{p}}{\\ensuremath{n}}}\\,\\biggr),\n\\end{eqnarray}\n\\end{subequations}\nwith probability at least $1 - 1\/\\ensuremath{p}^{\\tau - 2}$.\n\n\\subsubsection{Polynomial-type tails}\n\nSimilarly, let us again consider the polynomial tail case, in which\nthe rescaled variates $X_i\/\\sqrt{\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii}}$ have bounded $4\nm^{th}$ moments and the samples are drawn i.i.d. Using the\nexpression~\\eqref{EqnPolyTailInv} for the inverse function, we can\nderive rates in the Frobenius and spectral norms.
When the quantities\n$\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}, \\mutinco$ are viewed as constant, a sample size \\mbox{$\\ensuremath{n} = \\Omega(\\ensuremath{\\ensuremath{d}}^2 \\,\n\\ensuremath{p}^{\\tau\/m})$} is sufficient to guarantee the bounds\n\\begin{subequations}\n\\begin{eqnarray}\n \\matnorm{\\estim{\\Theta} - \\ensuremath{\\Theta^*}}{F} & = &\n {\\mathcal{O}}\\biggl(\\sqrt{\\frac{(\\ensuremath{s} +\n \\ensuremath{p})\\,\\ensuremath{p}^{\\tau\/m}}{\\ensuremath{n}}}\\,\\biggr),\\textrm{ and } \\\\\n\\matnorm{\\estim{\\Theta} - \\ensuremath{\\Theta^*}}{2} & = &\n{\\mathcal{O}}\\biggl(\\sqrt{\\frac{\\min\\{\\ensuremath{s} + \\ensuremath{p},\\,\\ensuremath{\\ensuremath{d}}^{2}\\} \\,\n\\ensuremath{p}^{\\tau\/m}}{\\ensuremath{n}}}\\,\\biggr),\n\\end{eqnarray}\n\\end{subequations}\nwith probability at least $1 - 1\/\\ensuremath{p}^{\\tau - 2}$.\n\n\n\\subsection{Rates for the covariance matrix estimate}\nFinally, we describe some bounds on the estimation of the covariance\nmatrix $\\ensuremath{\\ensuremath{\\Sigma}^*}$. By Lemma~\\ref{LEM_MLE_CHARAC}, the estimated\nconcentration matrix $\\ensuremath{\\widehat{\\Theta}}$ is positive definite, and hence can\nbe inverted to obtain an estimate of the covariance matrix, which we\ndenote as $\\hat{\\CovHat} \\ensuremath{: =} (\\ensuremath{\\widehat{\\Theta}})^{-1}$.\n\n\\begin{cors}\n\\label{CorCovBound}\nUnder the same assumptions as Theorem~\\ref{ThmMain}, with probability\nat least $1 - 1\/\\ensuremath{p}^{\\tau - 2}$, the following bounds hold.\n\\begin{enumerate}\n\\item[(a)] The element-wise $\\ell_{\\infty}$ norm of the deviation\n $(\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*})$ satisfies the bound\n\\begin{eqnarray}\n\\label{EqnCovInftyBound}\n\\vecnorm{\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty} & \\leq & \\ensuremath{C_3} \\,\n\t[\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})] + \\ensuremath{C_4}\n\t\\ensuremath{\\ensuremath{d}}\\, [\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})]^{2},\n\\end{eqnarray}\nwhere $\\ensuremath{C_3} = 2 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{2} \\ensuremath{K_{\\ensuremath{\\Gamma^*}}}\\Big(1 +\n\\frac{8}{\\mutinco}\\Big)$ and $\\ensuremath{C_4} = 6 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{3}\n\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^{2}\\Big(1 + \\frac{8}{\\mutinco}\\Big)^{2}$.\n\\item[(b)] The $\\ell_2$ operator-norm of the deviation $(\\hat{\\CovHat} -\n\\ensuremath{\\ensuremath{\\Sigma}^*})$ satisfies the bound\n\\begin{eqnarray}\n\\label{EqnCovSpectralBound}\n\\matnorm{\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*}}{2} & \\leq & \\ensuremath{C_3} \\, \\ensuremath{\\ensuremath{d}} \\,\n\t[\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})] + \\ensuremath{C_4} \\ensuremath{\\ensuremath{d}}^{2}\n\t\\, [\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})]^{2}.\n\\end{eqnarray}\n\\end{enumerate}\n\\end{cors}\nThe proof involves certain lemmata and derivations that are part of\nthe proofs of Theorems~\\ref{ThmMain} and \\ref{ThmModel}, so that we\ndefer it to Section~\\ref{SecCorCovProof}.\n\n\t\n\n\\section{Proofs of main result}\n\\label{SecProof}\n\nIn this section, we work through the proofs of Theorems~\\ref{ThmMain}\nand~\\ref{ThmModel}.
We break down the proofs into a sequence of\nlemmas, with some of the more technical aspects deferred to\nappendices.\n\nOur proofs are based on a technique that we call a \\emph{primal-dual\nwitness method}, used previously in the analysis of the\nLasso~\\cite{Wainwright2006_new}. It involves following a specific\nsequence of steps to construct a pair $(\\ThetaWitness, \\ensuremath{\\widetilde{Z}})$ of\nsymmetric matrices that together satisfy the optimality conditions\nassociated with the convex program~\\eqref{EqnGaussMLE} \\emph{with high\nprobability}. Thus, when the constructive procedure succeeds,\n$\\ThetaWitness$ is \\emph{equal} to the unique solution\n$\\estim{\\Theta}$ of the convex program~\\eqref{EqnGaussMLE}, and\n$\\ensuremath{\\widetilde{Z}}$ is an optimal solution to its dual. In this way, the\nestimator $\\estim{\\Theta}$ inherits from $\\ThetaWitness$ various\noptimality properties in terms of its distance to the truth\n$\\ensuremath{\\Theta^*}$, and its recovery of the signed sparsity pattern. To be\nclear, our procedure for constructing $\\ThetaWitness$ is \\emph{not} a\npractical algorithm for solving the log-determinant\nproblem~\\eqref{EqnGaussMLE}, but rather is used as a proof technique\nfor certifying the behavior of the $M$-estimator~\\eqref{EqnGaussMLE}.\n\n\n\n\\subsection{Primal-dual witness approach}\n\\label{SecPrimalDualWitness}\nAs outlined above, at the core of the primal-dual witness method are the standard convex\noptimality conditions that characterize the optimum $\\ensuremath{\\widehat{\\Theta}}$ of the\nconvex program~\\eqref{EqnGaussMLE}. For future reference, we note\nthat the sub-differential of the norm $\\ellreg{\\cdot}$ evaluated at\nsome $\\Theta$ consists of all symmetric matrices $Z \\in\n\\ensuremath{{\\mathbb{R}}}^{\\ensuremath{p} \\times \\ensuremath{p}}$ such that\n\\begin{eqnarray}\\label{EqnSubGradDefn}\nZ_{ij} & = & \\begin{cases} 0 & \\mbox{if $i = j$} \\\\ \\textrm{sign}(\\Theta_{ij})\n& \\mbox{if $i \\neq j$ and $\\Theta_{ij} \\neq 0$} \\\\ \\in [-1, +1] &\n\\mbox{if $i \\neq j$ and $\\Theta_{ij} = 0$.}\n \\end{cases}\n\\end{eqnarray}\nThe following result is proved in Appendix~\\ref{AppLemMLECharac}:\n\\begin{lems}\n\\label{LEM_MLE_CHARAC}\nFor any $\\ensuremath{\\lambda_\\ensuremath{n}} > 0$ and sample covariance $\\ensuremath{\\widehat{\\Sigma}}$ with strictly\npositive diagonal, the $\\ell_1$-regularized log-determinant\nproblem~\\eqref{EqnGaussMLE} has a unique solution $\\ensuremath{\\widehat{\\Theta}} \\succ 0$\ncharacterized by\n\\begin{eqnarray}\n\\label{EqnZeroSubgrad}\n\\ensuremath{\\widehat{\\Sigma}} - \\ensuremath{\\widehat{\\Theta}}^{-1} + \\ensuremath{\\lambda_\\ensuremath{n}} \\ensuremath{\\hat Z} & = & 0,\n\\end{eqnarray}\nwhere $\\ensuremath{\\hat Z}$ is an element of the subdifferential $\\partial\n\\ellreg{\\ensuremath{\\widehat{\\Theta}}}$.
\n\\end{lems}\n\nBased on this lemma, we construct the primal-dual witness solution\n$(\\ThetaWitness, \\ensuremath{\\widetilde{Z}})$ as follows:\n\\begin{enumerate}\n\\item[(a)] We determine the\nmatrix $\\ThetaWitness$ by solving the restricted log-determinant\nproblem\n\\begin{eqnarray}\n\\label{EqnRestricted}\n\\ThetaWitness & \\ensuremath{: =} & \\arg \\min_{\\Theta \\succ 0, \\;\n\\Theta_{\\ensuremath{\\EsetPlus^c}} = 0} \\big \\{\\tracer{\\Theta}{\\ensuremath{\\widehat{\\Sigma}}} - \\log\n\\det(\\Theta) + \\ensuremath{\\lambda_\\ensuremath{n}} \\ellreg{\\Theta} \\big \\}.\n\\end{eqnarray}\nNote that by construction, we have $\\ThetaWitness \\succ 0$, and\nmoreover $\\ThetaWitness_{\\ensuremath{\\EsetPlus^c}} = 0$.\n\\item[(b)] We choose $\\ensuremath{\\widetilde{Z}}_{\\ensuremath{S}}$ as a member of the\nsub-differential of the regularizer $\\ellreg{\\cdot}$, evaluated\nat $\\ensuremath{\\ThetaWitness}$.\n\\item[(c)] We set $\\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}}$ as\n\\begin{eqnarray}\n \\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}} &=& \\frac{1}{\\ensuremath{\\lambda_\\ensuremath{n}}}\\big\\{-\n \\ensuremath{\\widehat{\\Sigma}}_{\\ensuremath{\\EsetPlus^c}} + [\\invn{\\ensuremath{\\ThetaWitness}}]_{\\ensuremath{\\EsetPlus^c}}\\big\\},\n\\end{eqnarray}\nwhich ensures that the constructed matrices $(\\ensuremath{\\ThetaWitness},\\ensuremath{\\widetilde{Z}})$ satisfy\nthe optimality condition~\\eqref{EqnZeroSubgrad}.\n\\item[(d)] We verify the \\emph{strict dual feasibility} condition\n\\begin{eqnarray*}\n|\\ensuremath{\\widetilde{Z}}_{ij}| & < & 1 \\quad \\mbox{for all $(i,j) \\in \\ensuremath{\\EsetPlus^c}$}.\n\\end{eqnarray*}\n\n\\end{enumerate}\nTo clarify the nature of the construction, steps (a) through (c)\nsuffice to obtain a pair $(\\ensuremath{\\ThetaWitness},\\ensuremath{\\widetilde{Z}})$ that satisfies the\noptimality conditions~\\eqref{EqnZeroSubgrad}, but do \\emph{not}\nguarantee that $\\ensuremath{\\widetilde{Z}}$ is an element of the sub-differential $\\partial\n\\ellreg{\\ensuremath{\\ThetaWitness}}$. By construction, step (b) ensures that the\nentries of $\\ensuremath{\\widetilde{Z}}$ in $\\ensuremath{S}$ satisfy\nthe sub-differential conditions, since $\\ensuremath{\\widetilde{Z}}_{\\ensuremath{S}}$ is a member\nof the sub-differential $\\partial \\ellreg{\\ensuremath{\\ThetaWitness}_{\\ensuremath{S}}}$.\nThe purpose of step (d), then, is to verify that the remaining\nelements of $\\ensuremath{\\widetilde{Z}}$ satisfy the necessary conditions to belong\nto the sub-differential.\n\n\nIf the primal-dual witness construction succeeds, then it acts as a\n\\emph{witness} to the fact that the solution $\\ensuremath{\\ThetaWitness}$ to the\nrestricted problem~\\eqref{EqnRestricted} is equivalent to the solution\n$\\ensuremath{\\widehat{\\Theta}}$ to the original (unrestricted)\nproblem~\\eqref{EqnGaussMLE}.
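\nAlthough the construction is purely a proof device, it can be carried out numerically on small examples. The sketch below (assuming NumPy and the convex-modeling package cvxpy; the function name and the boolean mask encoding $\\ensuremath{S}$ are our own choices) solves the restricted problem of step (a) with a generic solver, forms $\\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}}$ as in step (c), and checks the strict dual feasibility condition of step (d).\n\\begin{verbatim}\nimport numpy as np\nimport cvxpy as cp\n\ndef witness_check(Sigma_hat, support, lam):\n    # support: boolean p-by-p mask for S (true edges plus diagonal).\n    p = Sigma_hat.shape[0]\n    Theta = cp.Variable((p, p), symmetric=True)\n    off_diag_l1 = cp.sum(cp.abs(Theta)) - cp.sum(cp.abs(cp.diag(Theta)))\n    objective = (cp.trace(Theta @ Sigma_hat) - cp.log_det(Theta)\n                 + lam * off_diag_l1)\n    # step (a): restricted problem, entries outside S forced to zero\n    constraints = [Theta[i, j] == 0\n                   for i in range(p) for j in range(p)\n                   if not support[i, j]]\n    cp.Problem(cp.Minimize(objective), constraints).solve()\n    Theta_w = Theta.value\n    # step (c): candidate dual variable, evaluated on the complement of S\n    Z = (np.linalg.inv(Theta_w) - Sigma_hat) \/ lam\n    # step (d): strict dual feasibility\n    return np.all(np.abs(Z[~support]) < 1.0), Theta_w\n\\end{verbatim}\nA returned flag of true certifies, for that particular instance, that the restricted solution coincides with the unrestricted estimate $\\ensuremath{\\widehat{\\Theta}}$.\n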
We exploit this fact in our proofs of\nTheorems~\\ref{ThmMain} and \\ref{ThmModel}: we first show\nthat the primal-dual witness technique succeeds with high probability,\nfrom which we can conclude that the support of the optimal solution\n$\\ensuremath{\\widehat{\\Theta}}$ is contained within the support of the true $\\ensuremath{\\Theta^*}$.\nIn addition, we exploit the characterization of $\\ensuremath{\\widehat{\\Theta}}$ provided\nby the primal-dual witness construction to establish the elementwise\n$\\ell_\\infty$ bounds claimed in Theorem~\\ref{ThmMain}.\nTheorem~\\ref{ThmModel} requires checking, in addition, that certain\nsign consistency conditions hold, for which we require lower bounds on\nthe minimum value $\\ensuremath{\\theta_{\\operatorname{min}}}$.\n\nIn the analysis to follow, some additional notation is useful. We let\n$\\ensuremath{W}$ denote the ``effective noise'' in the sample covariance\nmatrix $\\ensuremath{\\widehat{\\Sigma}}$, namely\n\\begin{eqnarray}\n\\label{EqnWdefn}\n\\ensuremath{W} & \\ensuremath{: =} & \\ensuremath{\\widehat{\\Sigma}} - (\\ensuremath{\\Theta^*})^{-1}.\n\\end{eqnarray}\nSecond, we use $\\Delta = \\ThetaWitness - \\ensuremath{\\Theta^*}$ to measure the\ndiscrepancy between the primal witness matrix $\\ensuremath{\\ThetaWitness}$ and the\ntruth $\\ensuremath{\\Theta^*}$. Finally, recall the log-determinant barrier $g$\nfrom equation~\\eqref{EqnDefnLogDet}. We let $\\ensuremath{R}(\\Delta)$ denote the\ndifference of the gradient $\\nabla g(\\ensuremath{\\ThetaWitness}) =\n\\invn{\\ThetaWitness}$ from its first-order Taylor expansion around\n$\\ensuremath{\\Theta^*}$. Using known results on the first and second derivatives\nof the log-determinant function (see p. 641 in Boyd and\nVandenberghe~\\cite{Boyd02}), this remainder takes the form\n\\begin{eqnarray}\n\\label{EqnRdefn}\n\\ensuremath{R}(\\Delta) & = & \\invn{\\ThetaWitness} - \\invn{\\ensuremath{\\Theta^*}} +\n{\\ensuremath{\\Theta^*}}^{-1} \\Delta {\\ensuremath{\\Theta^*}}^{-1}.\n\\end{eqnarray}\n\n\\subsection{Auxiliary results}\n\nWe begin with some auxiliary lemmata, required in the proofs of our\nmain theorems. In Section~\\ref{SecStrictDual}, we provide sufficient\nconditions on the quantities $\\ensuremath{W}$ and $\\ensuremath{R}$ for the strict dual\nfeasibility condition to hold. In Section~\\ref{SecRemainder}, we\ncontrol the remainder term $\\ensuremath{R}(\\Delta)$ in terms of $\\Delta$, while\nin Section~\\ref{SecEllinfBound}, we control $\\Delta$ itself, providing\nelementwise $\\ell_\\infty$ bounds on $\\Delta$. In\nSection~\\ref{SecSignConsis}, we show that under appropriate conditions\non the minimum value $\\ensuremath{\\theta_{\\operatorname{min}}}$, the bounds in the earlier lemmas\nguarantee that the sign consistency condition holds. All of the\nanalysis in these sections is \\emph{deterministic} in nature. In\nSection~\\ref{SecNoise}, we turn to the probabilistic component of the\nanalysis, providing control of the noise $\\ensuremath{W}$ in the sample\ncovariance matrix.
Finally, the proofs of Theorems~\\ref{ThmMain}\nand~\\ref{ThmModel} follow by using this probabilistic control of\n$\\ensuremath{W}$ and the stated conditions on the sample size to show that\nthe deterministic conditions hold with high probability.\n\n\n\\subsubsection{Sufficient conditions for strict dual feasibility}\n\\label{SecStrictDual}\n\nWe begin by stating and proving a lemma that provides sufficient\n(deterministic) conditions for strict dual feasibility to hold, so\nthat $\\|\\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}}\\|_\\infty < 1$.\n\\begin{lems}[Strict dual feasibility]\n\\label{LemStrictDual}\nSuppose that\n\\begin{eqnarray}\n\\label{EqnDetSuff}\n\\max \\big \\{ \\|\\ensuremath{W}\\|_\\infty, \\; \\|\\ensuremath{R}(\\Delta)\\|_\\infty \\big \\} &\n\\leq & \\frac{\\mutinco \\, \\ensuremath{\\lambda_\\ensuremath{n}}}{8}.\n\\end{eqnarray}\nThen the matrix $\\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}}$ constructed in step (c)\nsatisfies $\\|\\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}}\\|_\\infty < 1$, and therefore\n$\\ThetaWitness = \\ensuremath{\\widehat{\\Theta}}$.\n\\end{lems}\n\\begin{proof}\nUsing the definitions~\\eqref{EqnWdefn} and~\\eqref{EqnRdefn}, we can\nre-write the stationary condition~\\eqref{EqnZeroSubgrad} in an\nalternative but equivalent form\n\\begin{eqnarray}\n\\label{EqnLME}\n\\invn{\\ensuremath{\\Theta^*}} \\Delta \\invn{\\ensuremath{\\Theta^*}} + \\ensuremath{W} - \\ensuremath{R}(\\Delta) +\n\\ensuremath{\\lambda_\\ensuremath{n}} \\ensuremath{\\widetilde{Z}} & = & 0.\n\\end{eqnarray}\nThis is a linear-matrix equality, which can be re-written as an\nordinary linear equation by ``vectorizing'' the matrices. We use the\nnotation $\\ensuremath{\\operatorname{vec}}(A)$, or equivalently $\\myvec{A}$ for the\n$\\ensuremath{p}^2$-vector version of the matrix $A \\in \\ensuremath{{\\mathbb{R}}}^{\\ensuremath{p} \\times\n\\ensuremath{p}}$, obtained by stacking up the rows into a single column vector.\nIn vectorized form, we have\n\\begin{equation*}\n\\ensuremath{\\operatorname{vec}} \\big(\\invn{\\ensuremath{\\Theta^*}} \\Delta \\invn{\\ensuremath{\\Theta^*}} \\big) = \\big\n(\\invn{\\ensuremath{\\Theta^*}} \\otimes \\invn{\\ensuremath{\\Theta^*}} \\big) \\myvec{\\Delta} \\; =\n\\; \\ensuremath{\\Gamma^*} \\myvec{\\Delta}.\n\\end{equation*}\nIn terms of the disjoint decomposition $\\ensuremath{S}$ and\n$\\ensuremath{\\EsetPlus^c}$, equation~\\eqref{EqnLME} can be re-written as two\nblocks of linear equations as follows:\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{EqnStatBlockS} \\ensuremath{\\Gamma^*}_{\\ensuremath{S}\n\\ensuremath{S}} \\myvec{\\Delta}_{\\ensuremath{S}} + \\myvec{\\ensuremath{W}}_{\\ensuremath{S}} -\n\\myvec{\\ensuremath{R}}_{\\ensuremath{S}} + \\ensuremath{\\lambda_\\ensuremath{n}} \\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{S}} & = & 0\n\\\\\n\\label{EqnStatBlockScomp}\n\\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c} \\ensuremath{S}} \\myvec{\\Delta}_{\\ensuremath{S}} +\n\\myvec{\\ensuremath{W}}_{\\ensuremath{\\EsetPlus^c}} - \\myvec{\\ensuremath{R}}_{\\ensuremath{\\EsetPlus^c}} +\n\\ensuremath{\\lambda_\\ensuremath{n}} \\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{\\EsetPlus^c}} & = & 0.\n\\end{eqnarray}\n\\end{subequations}\nHere we have used the fact that $\\Delta_{\\ensuremath{\\EsetPlus^c}} = 0$ by\nconstruction.\n\nSince $\\ensuremath{\\Gamma^*}_{\\ensuremath{S} \\ensuremath{S}}$ is invertible, we can solve for\n$\\myvec{\\Delta}_{\\ensuremath{S}}$ from
equation~\\eqref{EqnStatBlockS} as\nfollows:\n\\begin{eqnarray*}\n\\myvec{\\Delta}_{\\ensuremath{S}} & = & \\inv{\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\\EsetPlus}}\n\\big[-\\myvec{\\ensuremath{W}}_{\\ensuremath{S}} + \\myvec{\\ensuremath{R}}_{\\ensuremath{S}} - \\ensuremath{\\lambda_\\ensuremath{n}}\n\\myvec{\\ensuremath{\\widetilde{Z}}_\\ensuremath{S}} \\big].\n\\end{eqnarray*}\nSubstituting this expression into equation~\\eqref{EqnStatBlockScomp},\nwe can solve for $\\ensuremath{\\widetilde{Z}}_{\\ensuremath{\\EsetPlus^c}}$ as follows:\n\\begin{eqnarray}\n\\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{\\EsetPlus^c}} &= & -\\frac{1}{\\ensuremath{\\lambda_\\ensuremath{n}}}\n\\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c}\\ensuremath{S}}\\myvec{\\Delta}_{\\ensuremath{S}} +\n\\frac{1}{\\ensuremath{\\lambda_\\ensuremath{n}}} \\myvec{\\ensuremath{R}}_{\\ensuremath{\\EsetPlus^c}} - \\frac{1}{\\ensuremath{\\lambda_\\ensuremath{n}}}\n\\myvec{\\ensuremath{W}}_{\\ensuremath{\\EsetPlus^c}} \\nonumber\\\\\n& = & -\\frac{1}{\\ensuremath{\\lambda_\\ensuremath{n}}} \\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c}\\ensuremath{S}}\n\\inv{\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\\EsetPlus}}(\\myvec{\\ensuremath{W}}_{\\ensuremath{S}} -\n\\myvec{\\ensuremath{R}}_{\\ensuremath{S}}) + \\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c}\\ensuremath{S}}\n\\inv{\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\\EsetPlus}} \\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{S}} -\n\\frac{1}{\\ensuremath{\\lambda_\\ensuremath{n}}} (\\myvec{\\ensuremath{W}}_{\\ensuremath{\\EsetPlus^c}} -\n\\myvec{\\ensuremath{R}}_{\\ensuremath{\\EsetPlus^c}}).\n\\end{eqnarray}\nTaking the $\\ell_\\infty$ norm of both sides yields\n\\begin{multline*}\n\\|\\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{\\EsetPlus^c}}\\|_{\\infty} \\leq \\frac{1}{\\lambda_n}\n\\matnorm{\\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c}\\ensuremath{S}}\n\\inv{\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\\EsetPlus}}}{\\infty}\n(\\|\\myvec{\\ensuremath{W}}_{\\ensuremath{S}}\\|_{\\infty} +\n\\|\\myvec{\\ensuremath{R}}_{\\ensuremath{S}}\\|_{\\infty}) \\\\\n+ \\matnorm{\\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c}\\ensuremath{S}}\n\\inv{\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\\EsetPlus}}}{\\infty}\n\\|\\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{S}}\\|_{\\infty} + \\frac{1}{\\lambda_n}\n(\\|\\myvec{\\ensuremath{W}}_{\\ensuremath{S}}\\|_{\\infty} +\n\\|\\myvec{\\ensuremath{R}}_{\\ensuremath{S}}\\|_{\\infty}).\n\\end{multline*}\nRecalling Assumption~\\ref{AssInco}---namely, that\n$\\matnorm{\\ensuremath{\\Gamma^*}_{\\ensuremath{\\EsetPlus^c}\\ensuremath{S}}\n\\inv{\\ensuremath{\\Gamma^*}_{\\ensuremath{S}\\EsetPlus}}}{\\infty} \\le (1 - \\mutinco)$---we\nhave\n\\begin{eqnarray*}\n\\|\\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{\\EsetPlus^c}}\\|_{\\infty} & \\leq &\n\\frac{2-\\mutinco}{\\ensuremath{\\lambda_\\ensuremath{n}}} \\,\n(\\|\\myvec{\\ensuremath{W}}_{\\ensuremath{S}}\\|_{\\infty} +\n\\|\\myvec{\\ensuremath{R}}_{\\ensuremath{S}}\\|_{\\infty}) + (1 - \\mutinco),\n\\end{eqnarray*}\nwhere we have used the fact that $\\|\\myvec{\\ensuremath{\\widetilde{Z}}}_\\ensuremath{S}\\|_{\\infty}\n\\leq 1$, since $\\ensuremath{\\widetilde{Z}}$ belongs to the sub-differential of the norm\n$\\ellreg{\\cdot}$ by construction. 
Finally, applying\nassumption~\\eqref{EqnDetSuff} from the lemma statement, we have\n\\begin{eqnarray*}\n\\|\\myvec{\\ensuremath{\\widetilde{Z}}}_{\\ensuremath{\\EsetPlus^c}}\\|_{\\infty} & \\leq &\n\\frac{(2-\\mutinco)}{\\ensuremath{\\lambda_\\ensuremath{n}}} \\, \\big( \\frac{\\mutinco \\ensuremath{\\lambda_\\ensuremath{n}}}{4} \\big) +\n(1-\\mutinco) \\\\\n& \\leq & \\frac{\\mutinco}{2} + (1-\\mutinco) \\: < \\; 1,\n\\end{eqnarray*}\nas claimed.\n\n\n\\end{proof}\n\n\n\\subsubsection{Control of remainder term}\n\\label{SecRemainder}\n\nOur next step is to relate the behavior of the remainder\nterm~\\eqref{EqnRdefn} to the deviation $\\Delta = \\ensuremath{\\ThetaWitness} -\n\\ensuremath{\\Theta^*}$.\n\\begin{lems}[Control of remainder]\n\\label{LEM_R_CONV}\nSuppose that the elementwise $\\ell_\\infty$ bound $\\|\\Delta\\|_\\infty\n\\leq \\frac{1}{3 \\, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}} \\ensuremath{\\ensuremath{d}}}$ holds. Then:\n\\begin{eqnarray}\n\\label{EqnRemExpand}\n\\ensuremath{R}(\\Delta) & = & \\invn{\\opt{\\Theta}} \\Delta \\invn{\\opt{\\Theta}} \\Delta\nJ \\invn{\\opt{\\Theta}},\n\\end{eqnarray}\nwhere $J \\ensuremath{: =} \\sum_{k=0}^{\\infty} (-1)^{k} \\big(\\invn{\\opt{\\Theta}}\n\\Delta\\big)^{k}$ has norm $\\matnorm{J^T}{\\infty} \\leq 3\/2$. Moreover,\nin terms of the elementwise $\\ell_\\infty$-norm, we have\n\\begin{eqnarray}\n\\label{EqnRemBound}\n\\| \\ensuremath{R}(\\Delta)\\|_\\infty & \\leq & \\frac{3}{2} \\ensuremath{\\ensuremath{d}}\n\\|\\Delta\\|_\\infty^2 \\; \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3.\n\\end{eqnarray}\n\\end{lems}\nWe provide the proof of this lemma in Appendix~\\ref{APP_LEM_R_CONV}. It\nis straightforward, based on standard matrix expansion techniques.\n\n\\subsubsection{Sufficient conditions for $\\ell_\\infty$ bounds}\n\\label{SecEllinfBound}\n\nOur next lemma provides control on the deviation \\mbox{$\\Delta =\n\\ThetaWitness - \\ensuremath{\\Theta^*}$,} measured in elementwise $\\ell_\\infty$\nnorm.\n\\begin{lems}[Control of $\\Delta$]\n\\label{LEM_D_CONV}\nSuppose that\n\\begin{eqnarray}\n\\label{EqnDconvAss}\nr \\ensuremath{: =} 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big( \\|\\ensuremath{W}\\|_\\infty + \\ensuremath{\\lambda_\\ensuremath{n}} \\big) & \\leq & \\min\n\\big \\{ \\frac{1}{3 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}} \\ensuremath{\\ensuremath{d}}}, \\; \\frac{1}{3 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3\n\\;\\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\ensuremath{\\ensuremath{d}}} \\big \\}.\n\\end{eqnarray}\nThen we have the elementwise $\\ell_\\infty$ bound\n\\begin{eqnarray}\n\\label{EqnDconvBound}\n\\|\\Delta \\|_\\infty = \\| \\ensuremath{\\ThetaWitness} - \\ensuremath{\\Theta^*} \\|_\\infty & \\leq & r.\n\\end{eqnarray}\n\\end{lems}\n\nWe prove the lemma in Appendix~\\ref{APP_LEM_D_CONV}; at a high level,\nthe main steps involved are the following. We begin by noting that\n$\\ThetaWitness_{\\ensuremath{\\EsetPlus^c}} = \\ensuremath{\\Theta^*}_{\\ensuremath{\\EsetPlus^c}} = 0$, so\nthat $\\vecnorm{\\Delta}{\\infty} =\n\\vecnorm{\\Delta_{\\ensuremath{S}}}{\\infty}$. Next, we characterize\n$\\ensuremath{\\ThetaWitness}_{\\ensuremath{S}}$ in terms of the zero-gradient condition\nassociated with the restricted problem~\\eqref{EqnRestricted}.
We then\ndefine a continuous map $F: \\Delta_{\\ensuremath{S}} \\mapsto\nF(\\Delta_{\\ensuremath{S}})$ such that its fixed points are equivalent to\nzeros of this gradient expression in terms of $\\Delta_\\ensuremath{S} =\n\\ensuremath{\\ThetaWitness}_\\ensuremath{S} - \\ensuremath{\\Theta^*}_{\\ensuremath{S}}$. We then show that the\nfunction $F$ maps the $\\ell_\\infty$-ball\n\\begin{eqnarray}\n\\label{EqnDefnRad}\n\\ensuremath{\\mathbb{B}}(\\ensuremath{r}) & \\ensuremath{: =} & \\{ \\Theta_{\\ensuremath{S}} \\mid \\|\\Theta_\\ensuremath{S}\n\\|_\\infty \\leq \\ensuremath{r} \\}, \\qquad \\mbox{with $\\ensuremath{r} \\ensuremath{: =} 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big\n(\\|\\ensuremath{W}\\|_\\infty + \\ensuremath{\\lambda_\\ensuremath{n}} \\big)$},\n\\end{eqnarray}\nonto itself. Finally, with these results in place, we can apply\nBrouwer's fixed point theorem (e.g., p. 161; Ortega and\nRheinboldt~\\cite{OrtegaR70}) to conclude that $F$ does indeed have a\nfixed point inside $\\ensuremath{\\mathbb{B}}(\\ensuremath{r})$. \n\n\\subsubsection{Sufficient conditions for sign consistency}\n\\label{SecSignConsis}\n\n\nWe now show how a lower bound on the minimum value $\\ensuremath{\\theta_{\\operatorname{min}}}$, when\ncombined with Lemma~\\ref{LEM_D_CONV}, allows us to guarantee\n\\emph{sign consistency} of the primal witness matrix\n$\\ensuremath{\\ThetaWitness}_{\\ensuremath{S}}$.\n\\begin{lems}[Sign Consistency]\n\\label{LemSignConsis}\nSuppose the minimum absolute value $\\ensuremath{\\theta_{\\operatorname{min}}}$ of non-zero entries\nin the true concentration matrix $\\ensuremath{\\Theta^*}$ is lower bounded as\n\\begin{eqnarray}\n\\label{EqnThetaminLow}\n\\ensuremath{\\theta_{\\operatorname{min}}} & \\geq & 4 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big (\\|\\ensuremath{W}\\|_\\infty + \\ensuremath{\\lambda_\\ensuremath{n}}\n\\big),\n\\end{eqnarray}\nthen \\mbox{$\\textrm{sign}(\\ensuremath{\\ThetaWitness}_\\ensuremath{S}) = \\textrm{sign}(\\ensuremath{\\Theta^*}_\\ensuremath{S})$}\nholds. \n\\end{lems}\nThis claim follows from the bound~\\eqref{EqnThetaminLow} combined with\nthe bound~\\eqref{EqnDconvBound}, which together imply that for all\n$(i,j) \\in \\ensuremath{S}$, the estimate $\\ensuremath{\\ThetaWitness}_{ij}$ cannot differ\nenough from $\\ensuremath{\\Theta^*}_{ij}$ to change sign.\n\n\n\n\n\n\n\n\\subsubsection{Control of noise term}\n\\label{SecNoise}\n\nThe final ingredient required for the proofs of Theorems~\\ref{ThmMain}\nand \\ref{ThmModel} is control on the sampling noise $\\ensuremath{W} = \\ensuremath{\\estim{\\ensuremath{\\Sigma}}}\n- \\ensuremath{\\ensuremath{\\Sigma}^*}$.
This control is specified in terms of the decay\nfunction $f$ from equation~\\eqref{EqnSamTail}.\n\\begin{lems}[Control of Sampling Noise]\n\\label{LemWconv}\nFor any $\\tau > 2$ and sample size $\\ensuremath{n}$ such that\n$\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^\\tau) \\le 1\/v_*$, we have\n\\begin{eqnarray}\n\\ensuremath{\\mathbb{P}}\\biggl[ \\|\\ensuremath{W}\\|_\\infty \\geq\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) \\biggr] & \\leq &\n\\frac{1}{\\ensuremath{p}^{\\tau - 2}} \\; \\rightarrow \\; 0.\n\\end{eqnarray}\n\n\\end{lems}\n\\begin{proof}\nUsing the definition~\\eqref{EqnSamTail} of the decay function\n$f$, and applying the union bound over all $\\ensuremath{p}^2$ entries of\nthe noise matrix, we obtain that for all $\\delta \\leq 1\/v_*$,\n\\begin{eqnarray*}\n\\ensuremath{\\mathbb{P}} \\big[\\max_{i, j} |\\ensuremath{W}_{ij}| \\geq \\delta \\big] & \\leq &\n\\ensuremath{p}^2\/f(\\ensuremath{n},\\delta).\n\\end{eqnarray*}\nSetting $\\delta = \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$ yields that\n\\begin{eqnarray*}\n\\ensuremath{\\mathbb{P}} \\big[\\max_{i, j} |\\ensuremath{W}_{ij}| \\geq\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) \\big] & \\leq & \\ensuremath{p}^2\/\n\\big[f(\\ensuremath{n},\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})) \\big] \\; =\n\\; 1\/\\ensuremath{p}^{\\tau - 2},\n\\end{eqnarray*}\nas claimed. Here the last equality follows since\n$f(\\ensuremath{n},\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})) =\n\\ensuremath{p}^\\tau$, using the definition~\\eqref{EqnSamTailT} of the\ninverse function $\\ensuremath{\\widebar{\\delta}_f}$.\n\\end{proof}\n\n\\def\\numobsNew{\\ensuremath{n}'} \n\n\n\n\\subsection{Proof of Theorem~\\ref{ThmMain}}\n\\label{SecThmMain}\n\nWe now have the necessary ingredients to prove\nTheorem~\\ref{ThmMain}. We first show that with high probability the\nwitness matrix $\\ensuremath{\\ThetaWitness}$ is equal to the solution $\\ensuremath{\\widehat{\\Theta}}$ to the\noriginal log-determinant problem~\\eqref{EqnGaussMLE}, in particular by\nshowing that the primal-dual witness construction (described in\nSection~\\ref{SecPrimalDualWitness}) succeeds with high probability.\nLet $\\mathcal{A}$ denote the event that $\\|\\ensuremath{W}\\|_\\infty \\leq\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$. Using the monotonicity of the\ninverse tail function~\\eqref{EqnMonot}, the lower\nbound~\\eqref{EqnSampleBound} on the sample size $\\ensuremath{n}$ implies that\n$\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) \\le 1\/v_*$. Consequently,\nLemma~\\ref{LemWconv} implies that $\\ensuremath{\\mathbb{P}}(\\mathcal{A}) \\ge 1 -\n\\frac{1}{\\ensuremath{p}^{\\tau - 2}}$. Accordingly, we condition on the\nevent $\\mathcal{A}$ in the analysis to follow.\n\nWe proceed by verifying that assumption~\\eqref{EqnDetSuff} of\nLemma~\\ref{LemStrictDual} holds. Recalling the choice of\nregularization penalty \\mbox{$\\ensuremath{\\lambda_\\ensuremath{n}} = (8\/\\mutinco)\\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$,} we have $\\|\\ensuremath{W}\\|_\\infty\n\\leq (\\mutinco\/8) \\ensuremath{\\lambda_\\ensuremath{n}}$.
In order to establish\ncondition~\\eqref{EqnDetSuff} it remains to establish the bound\n$\\|\\ensuremath{R}(\\Delta)\\|_\\infty \\leq \\frac{\\mutinco \\,\\ensuremath{\\lambda_\\ensuremath{n}}}{8}$. We do so\nin two steps, by using Lemmas~\\ref{LEM_D_CONV} and~\\ref{LEM_R_CONV}\nconsecutively. First, we show that the\nprecondition~\\eqref{EqnDconvAss} required for Lemma~\\ref{LEM_D_CONV}\nto hold is satisfied under the specified conditions on $\\ensuremath{n}$ and\n$\\ensuremath{\\lambda_\\ensuremath{n}}$. From Lemma~\\ref{LemWconv} and our choice of regularization\nconstant \\mbox{$\\ensuremath{\\lambda_\\ensuremath{n}} = (8\/\\mutinco)\\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})$,}\n\\begin{eqnarray*}\n2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big( \\,\\|\\ensuremath{W}\\|_\\infty + \\ensuremath{\\lambda_\\ensuremath{n}} \\big) & \\leq & 2\n\\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\Big(1 + \\frac{8}{\\mutinco}\\Big) \\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}),\n\\end{eqnarray*}\nprovided $\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) \\le 1\/v_*$. From\nthe lower bound~\\eqref{EqnSampleBound} and the\nmonotonicity~\\eqref{EqnMonot} of the tail inverse functions, we have\n\\begin{eqnarray}\n\\label{EqnDelPreqBound}\n2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\Big(1 + \\frac{8}{\\mutinco}\\Big) \\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) & \\leq & \\min \\big\\{ \\frac{1}{3\n\\ensuremath{K_{\\ensuremath{\\Sigma}^*}} \\ensuremath{\\ensuremath{d}}}, \\; \\frac{1}{3 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3 \\;\\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\ensuremath{\\ensuremath{d}}}\n\\big\\},\n\\end{eqnarray}\nshowing that the assumptions of Lemma~\\ref{LEM_D_CONV} are satisfied.\nApplying this lemma, we conclude that\n\\begin{eqnarray}\n\\label{EqnDelBound}\n\\| \\Delta \\|_\\infty & \\leq & 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\big( \\, \\|\\ensuremath{W}\\|_\\infty +\n\\ensuremath{\\lambda_\\ensuremath{n}} \\big) \\; \\leq \\; 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\Big(1 + \\frac{8}{\\mutinco}\\Big)\n\\, \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}).\n\\end{eqnarray}\n\nTurning next to Lemma~\\ref{LEM_R_CONV}, we see that its assumption\n$\\|\\Delta\\|_{\\infty} \\leq \\frac{1}{3 \\, \\ensuremath{K_{\\ensuremath{\\Sigma}^*}} \\ensuremath{\\ensuremath{d}}}$ holds, by\napplying equations~\\eqref{EqnDelPreqBound} and \\eqref{EqnDelBound}.\nConsequently, we have\n\\begin{eqnarray*}\n\\|\\ensuremath{R}(\\Delta)\\|_\\infty & \\leq & \\frac{3}{2} \\,\\ensuremath{\\ensuremath{d}}\\;\n\\|\\Delta\\|_\\infty^2 \\; \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3 \\\\\n& \\leq & 6 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^2 \\, \\ensuremath{\\ensuremath{d}} \\, \\Big(1 +\n\\frac{8}{\\mutinco}\\Big)^2 [\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau})]^{2}\n\\\\\n& = & \\Biggr \\{ 6 \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}}^2 \\, \\ensuremath{\\ensuremath{d}} \\, \\Big(1 +\n\\frac{8}{\\mutinco}\\Big)^2 \\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) \\Biggr\n\\} \\frac{\\mutinco \\ensuremath{\\lambda_\\ensuremath{n}}}{8} \\\\\n& \\leq & \\frac{\\mutinco \\ensuremath{\\lambda_\\ensuremath{n}}}{8},\n\\end{eqnarray*}\nas required, where the final inequality follows from 
our\ncondition~\\eqref{EqnSampleBound} on the sample size, and the\nmonotonicity property~\\eqref{EqnMonot}. \n\nOverall, we have shown that the assumption~\\eqref{EqnDetSuff} of\nLemma~\\ref{LemStrictDual} holds, allowing us to conclude that\n$\\ensuremath{\\ThetaWitness} = \\ensuremath{\\widehat{\\Theta}}$. The estimator $\\ensuremath{\\widehat{\\Theta}}$ then satisfies the\n$\\ell_\\infty$-bound~\\eqref{EqnDelBound} of $\\ensuremath{\\ThetaWitness}$, as claimed in\nTheorem~\\ref{ThmMain}(a), and moreover, we have\n$\\ensuremath{\\widehat{\\Theta}}_{\\ensuremath{\\EsetPlus^c}} = \\ensuremath{\\ThetaWitness}_{\\ensuremath{\\EsetPlus^c}} = 0$, as\nclaimed in Theorem~\\ref{ThmMain}(b). Since the above was conditioned\non the event $\\mathcal{A}$, these statements hold with probability\n$\\ensuremath{\\mathbb{P}}(\\mathcal{A}) \\ge 1 - \\frac{1}{\\ensuremath{p}^{\\tau - 2}}$.\n\n\\subsection{Proof of Theorem~\\ref{ThmModel}}\n\\label{SecThmModel}\n\nWe now turn to the proof of Theorem~\\ref{ThmModel}. A little\ncalculation shows that the assumed lower bound~\\eqref{EqnNumobsModel}\non the sample size $\\ensuremath{n}$ and the monotonicity\nproperty~\\eqref{EqnMonot} together guarantee that\n\\begin{eqnarray*}\n\\ensuremath{\\theta_{\\operatorname{min}}} & > & 4 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\Big(1 + \\frac{8}{\\mutinco}\\Big) \\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}).\n\\end{eqnarray*}\nProceeding as in the proof of Theorem~\\ref{ThmMain}, with probability\nat least $1 - 1\/\\ensuremath{p}^{\\tau - 2}$, we have the equality\n\\mbox{$\\ensuremath{\\ThetaWitness} = \\ensuremath{\\widehat{\\Theta}}$,} and also that $\\|\\ensuremath{\\ThetaWitness} -\n\\ensuremath{\\Theta^*}\\|_\\infty \\leq \\ensuremath{\\theta_{\\operatorname{min}}}\/2$. Consequently,\nLemma~\\ref{LemSignConsis} can be applied, guaranteeing that\n$\\textrm{sign}(\\ensuremath{\\Theta^*}_{ij}) = \\textrm{sign}(\\ensuremath{\\ThetaWitness}_{ij})$ for all $(i,j) \\in\n\\ensuremath{E}$.
Overall, we conclude that with probability at least $1 -\n1\/\\ensuremath{p}^{\\tau - 2}$, the sign consistency condition\n$\\textrm{sign}(\\ensuremath{\\Theta^*}_{ij}) = \\textrm{sign}(\\ensuremath{\\widehat{\\Theta}}_{ij})$ holds for all $(i,j)\n\\in \\ensuremath{E}$, as claimed.\n\n\n\n\\subsection{Proof of Corollary~\\ref{CorCovBound}}\n\\label{SecCorCovProof}\nWith the shorthand $\\hat{\\Delta} = \\ensuremath{\\widehat{\\Theta}} - \\ensuremath{\\Theta^*}$, we have\n\\begin{equation*}\n\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*} = (\\ensuremath{\\Theta^*} + \\hat{\\Delta})^{-1} -\n\\inv{\\ensuremath{\\Theta^*}}.\n\\end{equation*}\nFrom the definition~\\eqref{EqnRdefn} of the residual $R(\\cdot)$, this\ndifference can be written as\n\\begin{eqnarray}\n\\label{EqnCovDev}\n\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*} & = & - \\invn{\\ensuremath{\\Theta^*}} \\hat{\\Delta}\n\\invn{\\ensuremath{\\Theta^*}} + R(\\hat{\\Delta}).\n\\end{eqnarray}\n\nProceeding as in the proof of Theorem~\\ref{ThmMain}, we condition on\nthe event $\\mathcal{A} = \\{ \\|\\ensuremath{W}\\|_\\infty \\leq\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}) \\}$, which holds with\nprobability $\\ensuremath{\\mathbb{P}}(\\mathcal{A}) \\ge 1 - \\frac{1}{\\ensuremath{p}^{\\tau - 2}}$.\nAs in the proof of that theorem, we are guaranteed that the\nassumptions of Lemma~\\ref{LEM_R_CONV} are satisfied, allowing us to\nconclude\n\\begin{align}\\label{EqnRequiv}\n\\ensuremath{R}(\\hat{\\Delta}) = \\invn{\\opt{\\Theta}} \\hat{\\Delta} \\invn{\\opt{\\Theta}}\n\\hat{\\Delta} J \\invn{\\opt{\\Theta}},\n\\end{align}\nwhere $J \\ensuremath{: =} \\sum_{k=0}^{\\infty} (-1)^{k} \\big(\\invn{\\opt{\\Theta}}\n\\hat{\\Delta}\\big)^{k}$ has norm $\\matnorm{J^T}{\\infty} \\leq 3\/2$.\n\nWe begin by proving the bound~\\eqref{EqnCovInftyBound}. From\nequation~\\eqref{EqnCovDev}, we have $\\vecnorm{\\hat{\\CovHat} -\n\\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty} \\leq \\vecnorm{L(\\hat{\\Delta})}{\\infty} +\n\\vecnorm{R(\\hat{\\Delta})}{\\infty}$, where $L(\\hat{\\Delta}) \\ensuremath{: =}\n\\invn{\\ensuremath{\\Theta^*}} \\hat{\\Delta} \\invn{\\ensuremath{\\Theta^*}}$ denotes the linear term\nin the expansion~\\eqref{EqnCovDev}. From Lemma~\\ref{LEM_R_CONV}, we have\nthe elementwise $\\ell_\\infty$-norm bound\n\\begin{eqnarray*}\n\\| \\ensuremath{R}(\\hat{\\Delta})\\|_\\infty & \\leq & \\frac{3}{2} \\ensuremath{\\ensuremath{d}}\n\\|\\hat{\\Delta}\\|_\\infty^2 \\; \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3.\n\\end{eqnarray*}\nThe quantity $L(\\hat{\\Delta})$ in turn can be bounded as follows,\n\\begin{eqnarray*}\n\\vecnorm{L(\\hat{\\Delta})}{\\infty} & = &\n\\max_{ij}\\big|e_{i}^{T}\\invn{\\ensuremath{\\Theta^*}} \\hat{\\Delta} \\invn{\\ensuremath{\\Theta^*}}\ne_{j}\\big|\\\\ &\\le& \\max_{i} \\|e_{i}^{T}\\invn{\\ensuremath{\\Theta^*}}\\|_{1}\n\\max_{j} \\|\\hat{\\Delta} \\invn{\\ensuremath{\\Theta^*}} e_j\\|_{\\infty}\\\\ &\\le& \\max_{i}\n\\|e_{i}^{T}\\invn{\\ensuremath{\\Theta^*}}\\|_{1} \\|\\hat{\\Delta}\\|_{\\infty} \\max_{j}\n\\|\\invn{\\ensuremath{\\Theta^*}} e_j\\|_{1},\n\\end{eqnarray*}\nwhere we used the inequality that $\\|\\hat{\\Delta} u \\|_{\\infty} \\le\n\\|\\hat{\\Delta}\\|_{\\infty} \\|u\\|_{1}$.
Simplifying further, we obtain\n\\begin{eqnarray*}\n\\vecnorm{L(\\hat{\\Delta})}{\\infty} & \\leq &\n\\matnorm{\\invn{\\ensuremath{\\Theta^*}}}{\\infty} \\|\\hat{\\Delta}\\|_{\\infty}\n\\matnorm{\\invn{\\ensuremath{\\Theta^*}}}{1}\\\\ \n& \\leq & \\matnorm{\\invn{\\ensuremath{\\Theta^*}}}{\\infty}^{2}\n\\|\\hat{\\Delta}\\|_{\\infty} \\\\ \n& \\leq & \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{2} \\|\\hat{\\Delta}\\|_{\\infty},\n\\end{eqnarray*}\nwhere we have used the fact that $\\matnorm{\\invn{\\ensuremath{\\Theta^*}}}{1} =\n\\matnorm{[\\invn{\\ensuremath{\\Theta^*}}]^{T}}{\\infty} =\n\\matnorm{\\invn{\\ensuremath{\\Theta^*}}}{\\infty}$, which follows from the symmetry\nof $\\invn{\\ensuremath{\\Theta^*}}$. Combining the pieces, we obtain\n\\begin{eqnarray}\n\\label{EqnCovDevInftya}\n\\vecnorm{\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty} &\\le &\n\\vecnorm{L(\\hat{\\Delta})}{\\infty} + \\vecnorm{R(\\hat{\\Delta})}{\\infty}\\\\\n& \\leq & \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{2} \\|\\hat{\\Delta}\\|_{\\infty} + \\frac{3}{2} \\ensuremath{\\ensuremath{d}}\n\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3 \\|\\hat{\\Delta}\\|_\\infty^2. \\nonumber\n\\end{eqnarray}\nThe claim then follows from the elementwise $\\ell_\\infty$-norm\nbound~\\eqref{EqnEllinfBound} from Theorem~\\ref{ThmMain}.\n\n\nNext, we establish the bound~\\eqref{EqnCovSpectralBound} in spectral\nnorm. Taking the $\\ell_{\\infty}$ operator norm of both sides of\nequation~\\eqref{EqnCovDev} yields the inequality $\\matnorm{\\hat{\\CovHat} -\n\\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty} \\leq \\matnorm{L(\\hat{\\Delta})}{\\infty} +\n\\matnorm{R(\\hat{\\Delta})}{\\infty}$. Using the\nexpansion~\\eqref{EqnRequiv}, and the sub-multiplicativity of the\n$\\ell_{\\infty}$ operator norm, we obtain\n\\begin{eqnarray*}\n\\matnorm{\\ensuremath{R}(\\hat{\\Delta})}{\\infty} & \\leq &\n\\matnorm{\\invn{\\opt{\\Theta}}}{\\infty} \\matnorm{\\hat{\\Delta}}{\\infty}\n\\matnorm{\\invn{\\opt{\\Theta}}}{\\infty} \\matnorm{\\hat{\\Delta}}{\\infty}\n\\matnorm{J}{\\infty} \\matnorm{\\invn{\\opt{\\Theta}}}{\\infty}\\\\ &\\le&\n\\matnorm{\\invn{\\opt{\\Theta}}}{\\infty}^{3} \\matnorm{J}{\\infty}\n\\matnorm{\\hat{\\Delta}}{\\infty}^{2}\\\\ \n& \\leq & \\frac{3}{2} \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{3} \\matnorm{\\hat{\\Delta}}{\\infty}^{2},\n\\end{eqnarray*}\nwhere the last inequality uses the bound $\\matnorm{J}{\\infty} \\leq\n3\/2$. (Proceeding as in the proof of Lemma~\\ref{LEM_R_CONV}, this\nbound holds conditioned on $\\mathcal{A}$, and for the sample size\nspecified in the theorem statement.) In turn, the term $L(\\hat{\\Delta})$\ncan be bounded as\n\\begin{eqnarray*}\n\\matnorm{L(\\hat{\\Delta})}{\\infty} & \\leq & \\matnorm{\\invn{\\ensuremath{\\Theta^*}}\n\\hat{\\Delta} \\invn{\\ensuremath{\\Theta^*}}}{\\infty}\\\\ &\\le&\n\\matnorm{\\invn{\\ensuremath{\\Theta^*}}}{\\infty}^{2} \\matnorm{\\hat{\\Delta}}{\\infty}\\\\\n& \\leq & \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{2} \\matnorm{\\hat{\\Delta}}{\\infty},\n\\end{eqnarray*}\nwhere the second inequality uses the sub-multiplicativity of the\n$\\ell_\\infty$-operator norm.
Combining the pieces yields\n\\begin{eqnarray}\n\\label{EqnCovDevInftyOp}\n\\matnorm{\\hat{\\CovHat} - \\ensuremath{\\ensuremath{\\Sigma}^*}}{\\infty} & \\leq &\n\\matnorm{L(\\hat{\\Delta})}{\\infty} + \\matnorm{R(\\hat{\\Delta})}{\\infty} \\;\n\\leq \\; \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^{2} \\matnorm{\\hat{\\Delta}}{\\infty} + \\frac{3}{2} \\ensuremath{K_{\\ensuremath{\\Sigma}^*}}^3\n\\|\\hat{\\Delta}\\|_\\infty^2.\n\\end{eqnarray}\nConditioned on the event $\\mathcal{A}$, we have the\nbound~\\eqref{EqnPrecInftyOp} on the $\\ell_\\infty$-operator norm\n\\begin{eqnarray*}\n\\matnorm{\\hat{\\Delta}}{\\infty} & \\leq & 2 \\ensuremath{K_{\\ensuremath{\\Gamma^*}}} \\Big(1 +\n\\frac{8}{\\mutinco}\\Big) \\, \\ensuremath{\\ensuremath{d}} \\,\n\\ensuremath{\\widebar{\\delta}_f}(\\ensuremath{n},\\ensuremath{p}^{\\tau}).\n\\end{eqnarray*}\nSubstituting this bound, as well as the elementwise $\\ell_\\infty$-norm\nbound~\\eqref{EqnEllinfBound} from Theorem~\\ref{ThmMain}, into the\nbound~\\eqref{EqnCovDevInftyOp} yields the stated claim.\n\n\n\n\\section{Experiments}\n\\label{SecExperiments}\n\nIn this section, we illustrate our results with various experimental\nsimulations, reporting results in terms of the probability of correct\nmodel selection (Theorem~\\ref{ThmModel}) or the $\\ell_\\infty$-error\n(Theorem~\\ref{ThmMain}). For these illustrations, we study the case\nof Gaussian graphical models, and report results for three different\nclasses of graphs, namely chains, grids, and star-shaped graphs. We also\nconsider various scalings of the quantities which affect the\nperformance of the estimator: in addition to the triple $(\\ensuremath{n}, \\ensuremath{p},\n\\ensuremath{\\ensuremath{d}})$, we report some results concerning the role of the\nparameters $\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}$, $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}$ and $\\ensuremath{\\theta_{\\operatorname{min}}}$ that we have identified in the\nmain theorems. For all results\nreported here, we solved the resulting $\\ell_1$-penalized\nlog-determinant program~\\eqref{EqnGaussMLE} using the \\texttt{glasso}\nprogram of \\citet{FriedHasTib2007}, which builds on the block\nco-ordinate descent algorithm of~\\citet{AspreBanG2008}.\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{ccccc}\n\\raisebox{.6in}{\\widgraph{0.25\\textwidth}{chain.eps}}& &\n\\raisebox{.0in}{\\widgraph{0.28\\textwidth}{grid4.eps}}& &\n\\raisebox{.1in}{\\widgraph{0.25\\textwidth}{stargraph.eps}}\\\\\n(a) && (b) && (c)\n\\end{tabular}\n\\caption{Illustrations of different graph classes used in simulations.\n(a) Chain ($\\ensuremath{\\ensuremath{d}} = 2$). (b) Four-nearest neighbor grid ($\\ensuremath{\\ensuremath{d}} =\n4$), and (c) Star-shaped graph ($\\ensuremath{\\ensuremath{d}} \\in \\{1,\\hdots,\\ensuremath{p} - 1\\}$).}\n\\label{FigGraphs}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{FigGraphs} illustrates the three types of graphs used in\nour simulations: chain graphs (panel (a)), four-nearest neighbor\nlattices or grids (panel (b)), and star-shaped graphs (panel (c)).\nFor the chain and grid graphs, the maximal node degree $\\ensuremath{\\ensuremath{d}}$ is\nfixed by definition, to $\\ensuremath{\\ensuremath{d}} =2$ for chains, and $\\ensuremath{\\ensuremath{d}} =4$ for\nthe grids.
Consequently, these graphs can capture the dependence of\nthe required sample size $\\ensuremath{n}$ only as a function of the graph\nsize $\\ensuremath{p}$, and the parameters $(\\ensuremath{K_{\\ensuremath{\\Sigma}^*}}$, $\\ensuremath{K_{\\ensuremath{\\Gamma^*}}}$,\n$\\ensuremath{\\theta_{\\operatorname{min}}}$). The star graph allows us to vary both $\\ensuremath{\\ensuremath{d}}$ and\n$\\ensuremath{p}$, since the degree of the central hub can be varied between $1$\nand $\\ensuremath{p} -1$. For each graph type, we varied the size of the graph\n$\\ensuremath{p}$ in different ranges, from $\\ensuremath{p} =64$ upwards to $\\ensuremath{p} =\n375$.\n\n\nFor the chain and star graphs, we define a covariance matrix\n$\\ensuremath{\\ensuremath{\\Sigma}^*}$ with entries $\\ensuremath{\\ensuremath{\\Sigma}^*}_{ii} = 1$ for all $i =1,\n\\ldots, \\ensuremath{p}$, and $\\ensuremath{\\ensuremath{\\Sigma}^*}_{ij} = \\rho$ for all $(i,j) \\in\n\\ensuremath{E}$ for specific values of $\\rho$ specified below. Note that these\ncovariance matrices are sufficient to specify the full model. For the\nfour-nearest neighbor grid graph, we set the entries of the\nconcentration matrix $\\ensuremath{\\Theta^*}_{ij} = \\omega$ for $(i,j) \\in \\ensuremath{E}$,\nwith the value $\\omega$ specified below. In all cases, we set the\nregularization parameter $\\ensuremath{\\lambda_\\ensuremath{n}}$ proportional to\n$\\sqrt{\\log(\\ensuremath{p})\/\\ensuremath{n}}$, as suggested by Theorems~\\ref{ThmMain}\nand~\\ref{ThmModel}, which is reasonable since the main purpose of\nthese simulations is to illustrate our theoretical results. However,\nfor general data sets, the relevant theoretical parameters cannot be\ncomputed (since the true model is unknown), so that a data-driven\napproach such as cross-validation might be required for selecting the\nregularization parameter $\\ensuremath{\\lambda_\\ensuremath{n}}$.\n\n\n\nGiven a Gaussian graphical model instance, and the number of samples\n$\\ensuremath{n}$, we drew $N = 100$ batches of $\\ensuremath{n}$ independent samples\nfrom the associated multivariate Gaussian distribution. 
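Concretely, each trial draws $\ensuremath{n}$ samples, solves the penalized program, and compares the estimated signed edge set with the true one. A minimal sketch of one such experiment is given below; it is not the code used for our simulations, and it makes several simplifying assumptions: scikit-learn's \texttt{GraphicalLasso} serves as a stand-in for the \texttt{glasso} package, the chain model is parameterized through its concentration matrix (as the text above does for the grid graphs), and the constant $c$ in $\ensuremath{\lambda_\ensuremath{n}} = c\sqrt{\log \ensuremath{p}/\ensuremath{n}}$ is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np
from sklearn.covariance import GraphicalLasso

def chain_precision(p, omega=0.2):
    # Concentration matrix of a chain graph: unit diagonal, omega on the
    # (i, i+1) edges; diagonally dominant for |omega| < 0.5, hence positive definite.
    theta = np.eye(p)
    for i in range(p - 1):
        theta[i, i + 1] = theta[i + 1, i] = omega
    return theta

def signed_edges(theta, tol=1e-6):
    # Signed support of the off-diagonal entries.
    signs = np.sign(theta) * (np.abs(theta) > tol)
    np.fill_diagonal(signs, 0.0)
    return signs

def recovery_probability(p=64, n=1000, n_trials=20, c=1.0, seed=0):
    # Fraction of trials in which the signed edge set is recovered exactly,
    # with the regularization parameter proportional to sqrt(log p / n).
    rng = np.random.default_rng(seed)
    theta_star = chain_precision(p)
    sigma_star = np.linalg.inv(theta_star)
    truth = signed_edges(theta_star)
    lam = c * np.sqrt(np.log(p) / n)
    hits = 0
    for _ in range(n_trials):
        x = rng.multivariate_normal(np.zeros(p), sigma_star, size=n)
        fit = GraphicalLasso(alpha=lam, max_iter=200).fit(x)
        hits += int(np.array_equal(signed_edges(fit.precision_), truth))
    return hits / n_trials
\end{verbatim}
Sweeping $\ensuremath{n}$ in such a loop, and plotting the resulting recovery frequencies against $\ensuremath{n}$ and against $\ensuremath{n}/\log \ensuremath{p}$, mimics the experiments reported below.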
We estimated
the probability of correct model selection as the fraction of the $N=
100$ trials in which the estimator recovers the signed-edge set
exactly.

Note that any multivariate Gaussian random vector is sub-Gaussian; in
particular, the rescaled variates $X_i/\sqrt{\ensuremath{\ensuremath{\Sigma}^*}_{ii}}$ are
sub-Gaussian with parameter $\ensuremath{\sigma} = 1$, so that the elementwise
$\ell_\infty$-bound from Corollary~\ref{CorEllinfSubg} applies.
We collect the relevant parameters, such as $\ensuremath{\theta_{\operatorname{min}}}$ and the
covariance and Hessian-related terms $\ensuremath{K_{\ensuremath{\Sigma}^*}}$, $\ensuremath{K_{\ensuremath{\Gamma^*}}}$ and
$\mutinco$, into a single ``model-complexity'' term $K$ defined
as
\begin{eqnarray}\label{EqnKdefn}
K & \ensuremath{: =} & \left[(1 + 8 \mutinco^{-1}) (\max_{i}\ensuremath{\ensuremath{\Sigma}^*}_{ii})
\max\{\ensuremath{K_{\ensuremath{\Sigma}^*}} \ensuremath{K_{\ensuremath{\Gamma^*}}}, \ensuremath{K_{\ensuremath{\Sigma}^*}}^{3} \ensuremath{K_{\ensuremath{\Gamma^*}}}^{2},\frac{\ensuremath{K_{\ensuremath{\Gamma^*}}}}{
\ensuremath{\ensuremath{d}}\, \ensuremath{\theta_{\operatorname{min}}}}\}\right].
\end{eqnarray}
Then, as a corollary of Theorem~\ref{ThmModel}, a sample size of order
\begin{eqnarray}
\label{EqnCrudeBound}
\ensuremath{n} & = & \Omega\left( K^{2} \; \ensuremath{\ensuremath{d}}^2 \, \tau \log \ensuremath{p}
\right),
\end{eqnarray}
is sufficient for model selection consistency with probability greater
than $1-1/\ensuremath{p}^{\tau-2}$. In the subsections to follow, we
investigate how the empirical sample size $\ensuremath{n}$ required for model
selection consistency scales in terms of graph size $\ensuremath{p}$, maximum
degree $\ensuremath{\ensuremath{d}}$, as well as the ``model-complexity'' term $K$ defined
above.

\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\widgraph{.48\textwidth}{figures/chainprobvsn.eps} & \hspace*{.1in} &
\widgraph{.48\textwidth}{figures/chainprobvsnlogp.eps} \\
(a) & & (b)
\end{tabular}
\end{center}
\caption{Simulations for chain graphs with varying number of nodes
$\ensuremath{p}$, edge covariances $\ensuremath{\ensuremath{\Sigma}^*}_{ij} = 0.10$. Plots of the
probability of correct signed edge-set recovery versus the
ordinary sample size $\ensuremath{n}$ in panel (a), and versus the rescaled
sample size $\ensuremath{n}/\log \ensuremath{p}$ in panel (b). Each point corresponds
to the average over $100$ trials. }
\label{FigChainProbvsNP}
\end{figure}



\subsection{Dependence on graph size}

Panel (a) of Figure~\ref{FigChainProbvsNP} plots the probability of
correct signed edge-set recovery against the sample size $\ensuremath{n}$ for
chain-structured graphs of three different sizes. For these chain
graphs, regardless of the number of nodes $\ensuremath{p}$, the maximum node
degree is constant $\ensuremath{\ensuremath{d}} = 2$, while the edge covariances are set
as $\ensuremath{\Sigma}_{ij} = 0.2$ for all $(i,j) \in \ensuremath{E}$, so that the
quantities $(\ensuremath{K_{\ensuremath{\Sigma}^*}}, \ensuremath{K_{\ensuremath{\Gamma^*}}}, \mutinco)$ remain constant.
Each curve in panel (a) corresponds to a different graph size $\ensuremath{p}$.
For each curve, the probability of success starts at zero (for small
sample sizes $\ensuremath{n}$), but then transitions to one as the sample
size is increased. As would be expected, it is more difficult to
perform model selection for larger graph sizes, so that (for instance)
the curve for $\ensuremath{p} = 375$ is shifted to the right relative to the
curve for $\ensuremath{p} = 64$. Panel (b) of Figure~\ref{FigChainProbvsNP}
replots the same data, with the horizontal axis rescaled by $(1/\log
\ensuremath{p})$. This scaling was chosen because for sub-Gaussian tails, our
theory predicts that the sample size should scale logarithmically with
$\ensuremath{p}$ (see equation~\eqref{EqnCrudeBound}). Consistent with this
prediction, when plotted against the rescaled sample size
$\ensuremath{n}/\log \ensuremath{p}$, the curves in panel (b) all stack up.
Consequently, the ratio $(\ensuremath{n}/\log\ensuremath{p})$ acts as an effective
sample size in controlling the success of model selection, consistent
with the predictions of Theorem~\ref{ThmModel} for sub-Gaussian
variables.

Figure~\ref{FigStarProbvsNP} shows the same types of plots for a
star-shaped graph with fixed maximum node degree $\ensuremath{\ensuremath{d}} = 40$, and
Figure~\ref{FigGridProbvsNP} shows the analogous plots for a grid
graph with fixed degree $\ensuremath{\ensuremath{d}} = 4$. As in the chain case, these
plots show the same type of stacking effect in terms of the scaled
sample size $\ensuremath{n}/\log \ensuremath{p}$, when the degree $\ensuremath{\ensuremath{d}}$ and the other
parameters $(\mutinco, \ensuremath{K_{\ensuremath{\Gamma^*}}}, \ensuremath{K_{\ensuremath{\Sigma}^*}})$ are held fixed.

\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\widgraph{.48\textwidth}{figures/starprobvsn.eps} & \hspace*{.1in} &
\widgraph{.48\textwidth}{figures/starprobvsnlogp.eps} \\
(a) & & (b)
\end{tabular}
\end{center}
\caption{Simulations for a star graph with varying number of nodes
$\ensuremath{p}$, fixed maximal degree $\ensuremath{\ensuremath{d}} = 40$, and edge covariances
$\ensuremath{\ensuremath{\Sigma}^*}_{ij} = 1/16$ for all edges. Plots of probability of
correct signed edge-set recovery versus the sample size $\ensuremath{n}$ in
panel (a), and versus the rescaled sample size $\ensuremath{n}/\log \ensuremath{p}$ in
panel (b). Each point corresponds to the average over $N = 100$
trials.}
\label{FigStarProbvsNP}
\end{figure}

\begin{figure}
\begin{center}
\begin{tabular}{cc}
\widgraph{.48\textwidth}{figures/gridprobvsn.eps} &
\widgraph{.48\textwidth}{figures/gridprobvsnlogp.eps} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{Simulations for $2$-dimensional lattice with
$4$-nearest-neighbor interaction, edge strength interactions
$\ensuremath{\Theta^*}_{ij} = 0.1$, and a varying number of nodes $\ensuremath{p}$. Plots
of probability of correct signed edge-set recovery versus the sample
size $\ensuremath{n}$ in panel (a), and versus the rescaled sample size
$\ensuremath{n}/\log \ensuremath{p}$ in panel (b).
Each point corresponds to the
average over $N = 100$ trials.}
\label{FigGridProbvsNP}
\end{figure}


\begin{figure}
\begin{center}
\begin{tabular}{cc}
\widgraph{.48\textwidth}{figures/dvarstarprobvsn.eps} &
\widgraph{.48\textwidth}{figures/dvarstarprobvsnd.eps} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{Simulations for star graphs with fixed number of nodes
$\ensuremath{p} = 200$, varying maximal (hub) degree $\ensuremath{\ensuremath{d}}$, and edge
covariances $\ensuremath{\ensuremath{\Sigma}^*}_{ij} = 2.5/\ensuremath{\ensuremath{d}}$. Plots of probability of
correct signed edge-set recovery versus the sample size $\ensuremath{n}$ in
panel (a), and versus the rescaled sample size $\ensuremath{n}/\ensuremath{\ensuremath{d}}$ in
panel (b). }
\label{FigStarProbvsND}
\end{figure}

\subsection{Dependence on the maximum node degree}

Panel (a) of Figure~\ref{FigStarProbvsND} plots the probability of
correct signed edge-set recovery against the sample size $\ensuremath{n}$ for
star-shaped graphs; each curve corresponds to a different choice of
maximum node degree $\ensuremath{\ensuremath{d}}$, allowing us to investigate the
dependence of the sample size on this parameter. So as to control
these comparisons, the models are chosen such that quantities other
than the maximum node-degree $\ensuremath{\ensuremath{d}}$ are fixed: in particular, we
fix the number of nodes $\ensuremath{p} = 200$, and the edge covariance entries
are set as $\ensuremath{\ensuremath{\Sigma}^*}_{ij} = 2.5/\ensuremath{\ensuremath{d}}$ for $(i,j) \in \ensuremath{E}$ so
that the quantities $(\ensuremath{K_{\ensuremath{\Sigma}^*}}, \ensuremath{K_{\ensuremath{\Gamma^*}}}, \mutinco)$ remain constant.
The minimum value $\ensuremath{\theta_{\operatorname{min}}}$ in turn scales as $1/\ensuremath{\ensuremath{d}}$. Observe
how the plots in panel (a) shift to the right as the maximum node
degree $\ensuremath{\ensuremath{d}}$ is increased, showing that star-shaped graphs with
higher degrees are more difficult. In panel (b) of
Figure~\ref{FigStarProbvsND}, we plot the same data versus the
rescaled sample size $\ensuremath{n}/\ensuremath{\ensuremath{d}}$. Recall that if all the curves
were to stack up under this rescaling, it would mean that the required
sample size $\ensuremath{n}$ scales linearly with $\ensuremath{\ensuremath{d}}$. These plots are
closer to aligning than the unrescaled plots, but the agreement is not
perfect.
In particular, observe that the curve for the largest degree $\ensuremath{\ensuremath{d}}$ (right-most
in panel (a)) remains a bit to the right in panel (b), which suggests
that a somewhat more aggressive rescaling---perhaps
$\ensuremath{n}/\ensuremath{\ensuremath{d}}^\gamma$ for some $\gamma \in (1,2)$---is appropriate.

Note that for $\ensuremath{\theta_{\operatorname{min}}}$ scaling as $1/\ensuremath{\ensuremath{d}}$, the sufficient
condition from Theorem~\ref{ThmModel}, as summarized in
equation~\eqref{EqnCrudeBound}, is $\ensuremath{n} = \Omega(\ensuremath{\ensuremath{d}}^2 \log
\ensuremath{p})$, which appears to be overly conservative based on these data.
Thus, it might be possible to tighten our theory under certain
regimes.



\subsection{Dependence on covariance and Hessian terms}

Next, we study the dependence of the sample size required for model
selection consistency on the model complexity term $K$ defined
in \eqref{EqnKdefn}, which is a collection of the quantities
$\ensuremath{K_{\ensuremath{\Sigma}^*}}$, $\ensuremath{K_{\ensuremath{\Gamma^*}}}$ and $\mutinco$ defined by the covariance
matrix and Hessian, as well as the minimum value
$\ensuremath{\theta_{\operatorname{min}}}$. Figure~\ref{FigChainNstarvsGamma} plots the probability
of correct signed edge-set recovery versus the sample size $\ensuremath{n}$
for chain graphs. Here each curve corresponds to a different setting
of the model complexity factor $K$, but with a fixed
number of nodes $\ensuremath{p} = 120$, and maximum node-degree $\ensuremath{\ensuremath{d}} = 2$.
We varied the factor $K$ by varying the value $\rho$ of the
edge covariances $\ensuremath{\Sigma}_{ij} = \rho,\, (i,j) \in \ensuremath{E}$. Notice how
the curves, each of which corresponds to a different model complexity
factor, shift rightwards as $K$ is increased, so that
models with larger values of $K$ require a greater number of
samples $\ensuremath{n}$ to achieve the same probability of correct model
selection. These rightward shifts are in qualitative agreement with
the prediction of Theorem~\ref{ThmMain}, but we suspect that our
analysis is not sharp enough to make accurate quantitative predictions
regarding this scaling.

\begin{figure}
\begin{center}
\widgraph{.5\textwidth}{figures/gammavarchainprobvsn.eps} \\
\end{center}
\caption{Simulations for chain graph with fixed number of nodes $\ensuremath{p}
= 120$, and varying model complexity $K$. Plot of
probability of correct signed edge-set recovery versus the sample size
$\ensuremath{n}$.}
\label{FigChainNstarvsGamma}
\end{figure}

\subsection{Convergence rates in elementwise $\ell_\infty$-norm}

Finally, we report some simulation results on the convergence rate in
elementwise $\ell_\infty$-norm. According to
Corollary~\ref{CorEllinfSubg}, in the case of sub-Gaussian tails,
the elementwise $\ell_\infty$-norm should decay at rate
${\mathcal{O}}(\sqrt{\frac{\log \ensuremath{p}}{\ensuremath{n}}})$, once the sample size
$\ensuremath{n}$ is sufficiently large. Figure~\ref{FigStarNormvsBaserate}
shows the behavior of the elementwise $\ell_\infty$-norm for
star-shaped graphs of varying sizes $\ensuremath{p}$.
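An error curve of this type, for a single graph size, can be generated along the following lines. As before, this is only a sketch: it uses \texttt{GraphicalLasso} in place of \texttt{glasso}, parameterizes the star graph through a diagonally dominant concentration matrix rather than through the edge covariances used in our simulations, and the constant in the regularization parameter is an illustrative choice.
\begin{verbatim}
import numpy as np
from sklearn.covariance import GraphicalLasso

def star_precision(p, d, omega):
    # Star-shaped concentration matrix: hub node 0 joined to d leaves;
    # positive definite whenever d * |omega| < 1 (diagonal dominance).
    theta = np.eye(p)
    for j in range(1, d + 1):
        theta[0, j] = theta[j, 0] = omega
    return theta

def ell_inf_error_curve(p=64, ns=(200, 400, 800, 1600), c=1.0, seed=0):
    # Elementwise error ||Theta_hat - Theta*||_inf versus n / log p.
    rng = np.random.default_rng(seed)
    d = int(np.ceil(0.1 * p))                 # maximum degree d = ceil(0.1 p)
    theta_star = star_precision(p, d, omega=1.0 / (2 * d))
    sigma_star = np.linalg.inv(theta_star)
    curve = []
    for n in ns:
        x = rng.multivariate_normal(np.zeros(p), sigma_star, size=n)
        lam = c * np.sqrt(np.log(p) / n)
        fit = GraphicalLasso(alpha=lam, max_iter=200).fit(x)
        err = np.max(np.abs(fit.precision_ - theta_star))
        curve.append((n / np.log(p), err))
    return curve
\end{verbatim}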
The results reported
here correspond to the maximum degree $\ensuremath{\ensuremath{d}} = \lceil 0.1 \ensuremath{p}
\rceil$; we also performed analogous experiments for $\ensuremath{\ensuremath{d}} =
{\mathcal{O}}(\log \ensuremath{p})$ and $\ensuremath{\ensuremath{d}} = {\mathcal{O}}(1)$, and observed
qualitatively similar behavior. The edge correlations were set as
$\ensuremath{\ensuremath{\Sigma}^*}_{ij} = 2.5/\ensuremath{\ensuremath{d}}$ for all $(i,j) \in \ensuremath{E}$ so that the
quantities $(\ensuremath{K_{\ensuremath{\Sigma}^*}}, \ensuremath{K_{\ensuremath{\Gamma^*}}}, \mutinco)$ remain constant. With
these settings, each curve in Figure~\ref{FigStarNormvsBaserate}
corresponds to a different problem size, and plots the elementwise
$\ell_\infty$-error versus the rescaled sample size $\ensuremath{n}/\log
\ensuremath{p}$, so that we expect to see curves of the form $f(t) =
1/\sqrt{t}$. The curves show that when the rescaled sample size
$(\ensuremath{n}/\log \ensuremath{p})$ is larger than some threshold (roughly $40$ in
the plots shown), the elementwise $\ell_\infty$-norm decays at the
rate $\sqrt{\frac{\log \ensuremath{p}}{\ensuremath{n}}}$, which is consistent with
Corollary~\ref{CorEllinfSubg}.

\begin{figure}
\begin{center}
\widgraph{.5\textwidth}{figures/starcvgrate.eps} \\
\end{center}
\caption{Simulations for a star graph with varying number of nodes
$\ensuremath{p}$, maximum node degree $\ensuremath{\ensuremath{d}} = \lceil 0.1 \ensuremath{p} \rceil$, and edge
covariances $\ensuremath{\ensuremath{\Sigma}^*}_{ij}= 2.5/\ensuremath{\ensuremath{d}}$. Plot of the elementwise
$\ell_\infty$-norm of the concentration matrix estimation error
$\vecnorm{\ensuremath{\widehat{\Theta}} - \ensuremath{\Theta^*}}{\infty}$ versus the rescaled sample
size $\ensuremath{n}/\log (\ensuremath{p})$.}
\label{FigStarNormvsBaserate}
\end{figure}

\section{Discussion}

The focus of this paper is the analysis of the high-dimensional
scaling of the $\ell_1$-regularized log-determinant
problem~\eqref{EqnGaussMLE} as an estimator of the concentration
matrix of a random vector. Our main contributions were to derive
sufficient conditions for its model selection consistency, as well as
convergence rates in elementwise $\ell_\infty$-norm and in
Frobenius and spectral norms. Our results cover a range of tail
behavior, from the exponential-type decay provided by Gaussian
random vectors (and sub-Gaussian variables more generally), to the
polynomial-type decay guaranteed by moment conditions. In the Gaussian
case, our results have natural interpretations in terms of Gaussian
Markov random fields.

Our main results relate the i.i.d. sample size $\ensuremath{n}$ required to
achieve consistency to various parameters of the problem. In
addition to the dependence on matrix size $\ensuremath{p}$, number of edges
$\ensuremath{s}$ and graph degree $\ensuremath{\ensuremath{d}}$, our analysis also illustrates
the role of other quantities, related to the structure of the
covariance matrix $\ensuremath{\ensuremath{\Sigma}^*}$ and the Hessian of the objective
function, that have an influence on consistency rates. Our main
assumption is an irrepresentability or mutual incoherence condition,
similar to that required for model selection consistency of the Lasso,
but involving the Hessian of the log-determinant objective
function~\eqref{EqnGaussMLE}, evaluated at the true model.
When the
distribution of $X$ is multivariate Gaussian, this Hessian is the
Fisher information matrix of the model, and thus can be viewed as an
edge-based counterpart to the usual node-based covariance matrix. We
report some examples where the irrepresentability condition for the
Lasso holds but the log-determinant condition fails; however, we do not
know in general whether one requirement dominates the other. In
addition to these theoretical results, we provided a number of
simulation studies showing how the sample size required for consistency
scales with problem size, node degrees, and the other complexity
parameters identified in our analysis.

There are various interesting questions and possible extensions to
this paper. First, in the current paper, we have only derived
sufficient conditions for model selection consistency. As in past work
on the Lasso~\cite{Wainwright2006_new}, it would also be interesting
to derive a \emph{converse result}---namely, to prove that if the
sample size $\ensuremath{n}$ is smaller than some function of $(\ensuremath{p},
\ensuremath{\ensuremath{d}}, \ensuremath{s})$ and other complexity parameters, then regardless
of the choice of regularization constant, the log-determinant method
fails to recover the correct graph structure. Second, while this
paper studies the problem of estimating a fixed graph or concentration
matrix, a natural extension would allow the graph to vary over time, a
problem setting which includes the case where the observations are
dependent. For instance, \citet{ZhoLafWas08} study the estimation of
the covariance matrix of a Gaussian distribution in a time-varying
setting, and it would be interesting to extend the results of this
paper to that more general setting.


\subsection*{Acknowledgements} We thank Shuheng Zhou for helpful comments
on an earlier draft of this work. This work was partially supported by NSF
grant DMS-0605165.
Yu also acknowledges support from ARO
W911NF-05-1-0104, NSFC-60628102, and a grant from MSRA.
:}}\n\n\\newcommand{\\ensuremath{d}}{\\ensuremath{d}}\n\\newcommand{\\ensuremath{S}}{\\ensuremath{S}}\n\n\\newcommand{\\ensuremath{d_H}}{\\ensuremath{d_H}}\n\\newcommand{\\ensuremath{\\operatorname{APP}}}{\\ensuremath{\\operatorname{APP}}}\n\n\\newcommand{x}{\\ensuremath{{{\\mathbf{x}}}}}\n\\newcommand{\\ensuremath{{{\\mathbf{X}}}}}{\\ensuremath{{{\\mathbf{X}}}}}\n\\newcommand{\\ensuremath{{{\\mathbf{\\estim{x}}}}}}{\\ensuremath{{{\\mathbf{\\estim{x}}}}}}\n\\newcommand{\\ensuremath{{\\mathbf{X}}}}{\\ensuremath{{\\mathbf{X}}}}\n\\newcommand{\\ensuremath{{\\mathbf{Y}}}}{\\ensuremath{{\\mathbf{Y}}}}\n\n\\newcommand{\\empav}[1]{\\bar{#1}}\n\n\\newcommand{\\ensuremath{x^*}}{\\ensuremath{x^*}}\n\n\\newcommand{\\ensuremath{x}}{\\ensuremath{{\\bf{x}}}}\n\n\\newcommand{{\\bf{u}}}{{\\bf{u}}}\n\\newcommand{\\ensuremath{Z}}{\\ensuremath{Z}}\n\\newcommand{{\\bf{y}}}{{\\bf{y}}}\n\\newcommand{{\\bf{u}}}{{\\bf{u}}}\n\\newcommand{{\\bf{z}}}{{\\bf{z}}}\n\\newcommand{{\\bf{w}}}{{\\bf{w}}}\n\\newcommand{{\\bf{v}}}{{\\bf{v}}}\n\n\\newcommand{{\\bf{Y}}}{{\\bf{Y}}}\n\\newcommand{{\\bf{U}}}{{\\bf{U}}}\n\\newcommand{{\\bf{Z}}}{{\\bf{Z}}}\n\\newcommand{{\\bf{W}}}{{\\bf{W}}}\n\\newcommand{{\\bf{V}}}{{\\bf{V}}}\n\n\n\\newcommand{{{x}}}{{{x}}}\n\\newcommand{{{y}}}{{{y}}}\n\\newcommand{{{z}}}{{{z}}}\n\\newcommand{{{w}}}{{{w}}}\n\\newcommand{{{u}}}{{{u}}}\n\\newcommand{{{v}}}{{{v}}}\n\n\\newcommand{{{X}}}{{{X}}}\n\\newcommand{{{Y}}}{{{Y}}}\n\\newcommand{{{Z}}}{{{Z}}}\n\\newcommand{{{W}}}{{{W}}}\n\\newcommand{{{U}}}{{{U}}}\n\\newcommand{{{V}}}{{{V}}}\n\n\\newcommand{\\vect}[1]{{\\bf{#1}}}\n\n\n\\newcommand{\\ensuremath{J}}{\\ensuremath{J}}\n\n\\newcommand{\\ensuremath{u}}{\\ensuremath{u}}\n\\newcommand{\\ensuremath{z}}{\\ensuremath{z}}\n\\newcommand{\\ensuremath{z}}{\\ensuremath{z}}\n\n\n\\newcommand{\\ensuremath{\\left\\{2^m}{\\ensuremath{\\left\\{2^m}\n\\psi^{m}_n(2^m t - n)\\right\\}}\n\\newcommand{\\ensuremath{x^{m}_n}}{\\ensuremath{x^{m}_n}}\n\\newcommand{\\ensuremath{\\psi(2^m t - n)}}{\\ensuremath{\\psi(2^m t - n)}}\n\n\\newcommand{\\updmn}[1]{\\ensuremath{{#1}^{m}_{n}}}\n\\newcommand{\\updn}[3]{\\ensuremath{{#1}^{#2}_{#3}}}\n\n\\newcommand{\\ensuremath{w}}{\\ensuremath{w}}\n\\newcommand{\\ensuremath{W}}{\\ensuremath{W}}\n\n\\newcommand{\\ensuremath{\\zeta}}{\\ensuremath{\\zeta}}\n\\newcommand{\\ensuremath{v}}{\\ensuremath{v}}\n\n\\newcommand{\\ensuremath{A}}{\\ensuremath{A}}\n\\newcommand{\\ensuremath{B}}{\\ensuremath{B}}\n\\newcommand{\\ensuremath{C}}{\\ensuremath{C}}\n\\newcommand{\\ensuremath{D}}{\\ensuremath{D}}\n\n\\newcommand{\\wavdet}[2]{\\ensuremath{x^{#1}_{#2}}}\n\n\n\\newcommand{\\begin{center}}{\\begin{center}}\n\\newcommand{\\end{center}}{\\end{center}}\n\n\\newcommand{\\begin{itemize}}{\\begin{itemize}}\n\\newcommand{\\end{itemize}}{\\end{itemize}}\n\n\\newcommand{\\begin{enumerate}}{\\begin{enumerate}}\n\\newcommand{\\end{enumerate}}{\\end{enumerate}}\n\n\n\\newcommand{\\begin{slide}}{\\begin{slide}}\n\\newcommand{\\begin{slide*}}{\\begin{slide*}}\n\n\\newcommand{\\comsld}[2]{\\begin{slide}[#1,#2]}\n\\newcommand{\\comspord}[2]{\\begin{slide*}[#1,#2]}\n\n\\newcommand{\\end{slide}}{\\end{slide}}\n\\newcommand{\\end{slide*}}{\\end{slide*}}\n\n\n\\newcommand{\\ensuremath{\\msca \\gvec}}{\\ensuremath{\\ensuremath{z} {\\bf{u}}}}\n\\newcommand{\\ensuremath{\\msca \\gsca}}{\\ensuremath{\\ensuremath{z} \\ensuremath{u}}}\n\n\\newcommand{\\ensuremath{\\sqrt{\\ssca} \\gvec}}{\\ensuremath{\\sqrt{\\ensuremath{z}} {\\bf{u}}}}\n\\newcommand{\\ensuremath{\\sqrt{\\ssca} \\gsca}}{\\ensuremath{\\sqrt{\\ensuremath{z}} 
\\ensuremath{u}}}\n\n\\newcommand{\\mbox{$\\xvec \\, \\edist \\, \\atgvec$}}{\\mbox{$x \\, \\ensuremath{\\overset{d}{=}} \\, \\ensuremath{\\msca \\gvec}$}}\n\\newcommand{\\mbox{$\\xsca \\, \\edist \\, \\atgsca$}}{\\mbox{${{x}} \\, \\ensuremath{\\overset{d}{=}} \\, \\ensuremath{\\msca \\gsca}$}}\n\n\\newcommand{\\mbox{$\\xvec \\, \\edist \\, \\satgvec$}}{\\mbox{$x \\, \\ensuremath{\\overset{d}{=}} \\, \\ensuremath{\\sqrt{\\ssca} \\gvec}$}}\n\\newcommand{\\mbox{$\\xsca \\, \\edist \\, \\satgsca$}}{\\mbox{${{x}} \\, \\ensuremath{\\overset{d}{=}} \\, \\ensuremath{\\sqrt{\\ssca} \\gsca}$}}\n\n\n\\newcommand{\\normal}[1]{\\ensuremath{\\mathcal{N}(0,#1)}}\n\\newcommand{\\normall}[2]{\\ensuremath{\\mathcal{N}(#1,#2)}}\n\n\\newcommand{\\opt}[1]{\\ensuremath{{#1}^{*}}}\n\\newcommand{\\estim}[1]{\\ensuremath{\\widehat{#1}}}\n\\newcommand{\\bls}[1]{\\ensuremath{\\widehat{#1}_{BLS}}}\n\\newcommand{\\map}[1]{\\ensuremath{\\widehat{#1}_{MAP}}}\n\\newcommand{\\wtil}[1]{\\ensuremath{\\widetilde{#1}}}\n\n\\newcommand{\\inv}[1]{\\ensuremath{\\big(#1\\big)^{-1}}}\n\\renewcommand{\\t}[1]{\\ensuremath{[#1]^{{\\scriptscriptstyle\\top}}}}\n\n\\newcommand{\\ensuremath{f}}{\\ensuremath{f}}\n\n\\newcommand{r}{r}\n\\newcommand{\\ensuremath{{\\mathbb{R}}}}{\\ensuremath{{\\mathbb{R}}}}\n\\newcommand{\\ensuremath{{\\mathbb{R^{+}}}}}{\\ensuremath{{\\mathbb{R^{+}}}}}\n\\newcommand{\\ensuremath{{\\mathbb{N}}}}{\\ensuremath{{\\mathbb{N}}}}\n\\newcommand{\\ensuremath{{\\mathbb{Z}}}}{\\ensuremath{{\\mathbb{Z}}}}\n\\newcommand{{\\mathbf Z}}{{\\mathbf Z}}\n\n\\newcommand{\\ensuremath{\\int_0^{\\infty}}}{\\ensuremath{\\int_0^{\\infty}}}\n\\newcommand{\\ensuremath{\\int_{-\\infty}^{\\infty}}}{\\ensuremath{\\int_{-\\infty}^{\\infty}}}\n\\newcommand{\\intar}[2]{\\ensuremath{\\int_{#1}^{#2}}}\n\n\\newcommand{\\ensuremath{{\\mathcal{L}}}}{\\ensuremath{{\\mathcal{L}}}}\n\n\\newcommand{\\student}[3]{\\ensuremath{ (1+ \\frac{#1^2}{2 #2^2})^{-#3}}}\n\\newcommand{\\gamd}[1]{\\ensuremath{Z({#1})}}\n\\newcommand{\\symgam}[3]{\\ensuremath{\\left({#3}\/({#3} + {#1}^2)\\right)^{#2}}}\n\\newcommand{\\gausss}[2]{\\ensuremath{\\frac{1}{\\sqrt{2\\pi} #2} \\exp{(-\\frac{#1^2}{2 #2^2})}}}\n\\newcommand{\\gaussm}[3]{\\ensuremath{\\frac{1}{\\sqrt{2\\pi}\n|#2|^{\\frac{#3}{2}}} \\exp{\\big(-\\frac{1}{2} #1'#2^{-1} #1 \\big)}}}\n\n\\newcommand{\\dif}[2]{\\ensuremath{\\frac{\\partial #1}{\\partial #2}}}\n\\newcommand{\\diff}[2]{\\ensuremath{\\frac{\\partial^2 #1}{\\partial #2^2}}}\n\\newcommand{\\dift}[3]{\\ensuremath{\\frac{\\partial^2 #1}{\\partial #2 \\partial #3}}}\n\n\\newcommand{\\odif}[2]{\\ensuremath{\\frac{d #1}{d #2}}}\n\\newcommand{\\odiff}[2]{\\ensuremath{\\frac{d^2 #1}{d #2^2}}}\n\\newcommand{\\odift}[3]{\\ensuremath{\\frac{d #1}{d #2 d #3}}}\n\n\\newcommand{H-ss}{H-ss}\n\\newcommand{\\beta}{\\beta}\n\\newcommand{\\pcon}[2]{p(#1 | #2)}\n\\newcommand{\\scaleform}[3]{\\frac{1}{#2} {#3}(\\frac{#1}{#2})}\n\n\n\\newcommand{\\ensuremath{\\lambda}}{\\ensuremath{\\lambda}}\n\\newcommand{\\ensuremath{\\alpha}}{\\ensuremath{\\alpha}}\n\n\\newcommand{\\charfunc}[2]{\\ensuremath{\\phi_{#1}{(#2)}}}\n\\newcommand{\\charf}[1]{\\ensuremath{\\phi_{#1}}}\n\n\\newcommand{\\laptran}[2]{\\ensuremath{\\psi_{#1}{(#2)}}}\n\n\\newcommand{\\gaussd}[2]{\\ensuremath{\\frac{1}{\\sqrt{2 \\pi #1}}\n\\exp{(-\\frac{#2^2}{2 
#1})}}}\n\n\n\n\n\n\\newcommand{\\ensuremath{^{\\frac{1}{2}}}}{\\ensuremath{^{\\frac{1}{2}}}}\n\\newcommand{\\ensuremath{w}}{\\ensuremath{w}}\n\\newcommand{\\ensuremath{x}}{\\ensuremath{x}}\n\\newcommand{\\ensuremath{u}}{\\ensuremath{u}}\n\n\\newcommand{\\ensuremath{\\; \\:}}{\\ensuremath{\\; \\:}}\n\n\\newcommand{\\iden}[1]{\\ensuremath{I_{#1}}}\n\\newcommand{\\zeros}[1]{\\ensuremath{0_{#1}}}\n\n\n\\newcommand{{I}t\\^{o}\\hspace*{3pt}}{{I}t\\^{o}\\hspace*{3pt}}\n\\newcommand{{C}sisz\\'{a}r\\hspace*{3pt}}{{C}sisz\\'{a}r\\hspace*{3pt}}\n\\newcommand{{C}sisz\\'{a}r}{{C}sisz\\'{a}r}\n\n\n\n\n\\DeclareMathOperator{\\modd}{mod}\n\\DeclareMathOperator{\\probab}{Pr}\n\\DeclareMathOperator{\\diag}{diag}\n\\DeclareMathOperator{\\var}{var}\n\\DeclareMathOperator{\\stable}{stable}\n\\DeclareMathOperator{\\Sigma}{cov}\n\\DeclareMathOperator{\\argmax}{argmax}\n\\DeclareMathOperator{\\trace}{trace}\n\\DeclareMathOperator{\\abs}{abs}\n\\DeclareMathOperator{\\floor}{floor}\n\\DeclareMathOperator{\\spanner}{span}\n\\DeclareMathOperator{\\vol}{vol}\n\\DeclareMathOperator{\\child}{child}\n\\DeclareMathOperator{\\parent}{parent}\n\\DeclareMathOperator{\\ensuremath{\\scr{T}}}{tree}\n\\DeclareMathOperator{\\loopy}{loop}\n\\DeclareMathOperator{\\textrm{sign}}{sign}\n\\DeclareMathOperator{\\for}{for}\n\\DeclareMathOperator{\\all}{all}\n\\DeclareMathOperator{\\some}{some}\n\\DeclareMathOperator{\\cum}{cum}\n\\DeclareMathOperator{\\toeplitz}{toeplitz}\n\n\\renewcommand{\\Re}{\\mathbb{R}}\n\\newcommand{\\tr}[2]{\\left<#1,#2\\right>}\n\\def\\textrm{logdet}{\\textrm{logdet}}\n\\def\\textrm{sign}{\\textrm{sign}}\n\\def\\textrm{Cmax}{\\textrm{Cmax}}\n\n\\def\\tilde{\\Theta}^{*}{\\tilde{\\Theta}^{*}}\n\\def{\\tilde{\\Theta}^{*}}^{-1}{{\\tilde{\\Theta}^{*}}^{-1}}\n\\def\\tilde{\\Delta}{\\tilde{\\Delta}}\n\\def\\widetilde{W}{\\widetilde{W}}\n\\def\\widetilde{Z}{\\widetilde{Z}}\n\\def\\widetilde{R}{\\widetilde{R}}\n\\newcommand{\\matnorm}[2]{| \\! | \\! | #1 | \\! | \\! |_{{#2}}}\n\\newcommand{\\vecnorm}[2]{\\| #1 \\|_{{#2}}}\n\\renewcommand{\\sp}[2]{{#1}^{(#2)}}\n\\def\\Theta^{*}{\\Theta^{*}}\n\\def\\Theta^{*}{\\Theta^{*}}\n\\def\\Theta^{*^{-1}}{\\Theta^{*^{-1}}}\n\\newcommand\\mvect[1]{\\overline{#1}}\n\\newcommand{\\ensuremath{E}}{\\ensuremath{E}}\n\\newcommand{{\\ensuremath{{\\Eset^{c}}}}}{{\\ensuremath{{\\ensuremath{E}^{c}}}}}\n\n\\def\\Theta{\\Theta}\n\\def\\Sigma{\\Sigma}\n\n\\def\\theta^{*}_{\\min}{\\theta^{*}_{\\min}}\n\\def\\frac{1}{\\lambda_n}{\\frac{1}{\\lambda_n}}\n\\newcommand\\Op[1]{\\mathcal{O}_{P}\\left(#1\\right)}\n\\newcommand\\Oa[1]{\\mathcal{O}\\left(#1\\right)}\n\\newcommand\\Cl[1]{\\textrm{Cl}(#1)}\n\\renewcommand\\Vec{\\textrm{Vec}}\n\\newcommand\\ThetaWitness{\\widetilde{\\Theta}}\n\n\n\n\\defC_1{C_1}\n\\defC_2{C_2}\n\\defC{C}\n\\defC_1{C_1}\n\\defC_2{C_2}\n\\defC_3{C_3}\n\\defC_4{C_4}\n\\defL{L}\n\\defK{K}\n\\defc_{*}{c_{*}}\n\n\\defK{K}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbbqx b/data_all_eng_slimpj/shuffled/split2/finalzzbbqx new file mode 100644 index 0000000000000000000000000000000000000000..e67bbe986a9e7ee0cbda05a22d9e76ba03296947 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbbqx @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nThe authors of Ref. \\cite{zhao} investigated the effects of the variation of the mass parameter $a$ on the thick branes. They used a real scalar field, which has a potential of the $\\phi^{6}$ model, as the background field of the thick branes. 
It was found that the number of the bound states (in the case without gravity) or the resonant states (in the case with gravity) increases with the parameter $a$. That work considered the simplest Yukawa coupling $\\eta\\bar{\\Psi}\\phi\\Psi$, where $\\eta$ is the coupling constant. The authors stated that as the value of $a$ is increasing, the maximum of the matter energy density splits into two new maxima, and the distance of the new maxima increases and the brane gets thicker. The authors also stated that the brane with a big value of $a$ would trap fermions more efficiently.\n\nIn this paper, we reinvestigated the effect of the variation of the mass parameter $a$ on the thick branes, because the above investigation does not analyze the zero mode in details and contains some misconceptions. We only focus attention in the case with gravity. We find that the variation of $a$ on the thick brane is associated to the phenomenon of brane splitting. From the static equation of motion, we analyze the asymptotic behavior of $A(y)$ and find that the zero mode for left-handed fermions can be localized on the brane depending on the value for the coupling constant $\\eta$ and the mass parameter $a$. We also show that as the value of $a$ is increasing the simplest Yukawa coupling does not support the localization of fermions on the brane, as incompletely argued in Ref. \\cite{zhao}.\n\n\n\n\\section{Thick brane with gravity}\n\nThe action for our system is described by \\cite{cam}\n\\begin{equation}\\label{ac1}\n S=\\int d^4xdy\\sqrt{ -g}\\left[\\frac{1}{4}\\,R-\\Lambda-\\frac{1}{2}g^{MN}\\partial_{M}\\phi\\partial^{N}\\phi-V(\\phi) \\right],\n\\end{equation}\n\\noindent where $M,N=0,1,2,3,4$, $\\Lambda$ is the 5D bulk cosmological constant and the scalar potential $V(\\phi)$ is given by \\cite{zhao}\n\\begin{equation}\\label{pot1}\n V(\\phi)=a\\phi^{2}-b\\phi^{4}+c\\phi^{6}\\,,\n\\end{equation}\n\n\\noindent where $a,b,c>0$. There are three minima for $V(\\phi)$, one is at $\\phi^{(1)}=0$ (local minima) corresponding to a disordered bulk phase and the other two are at $\\phi^{(2)}=-\\phi^{(3)}=v$ (global minima) with\n\\begin{equation}\\label{v}\n v=\\sqrt{\\frac{\\sqrt{b^{2}-3ac}}{3c}+\\frac{b}{3c}}\\,.\n\\end{equation}\n\n\\noindent They are degenerated and correspond to ordered bulk phases. As $a=a_{c}$ ($a_{c}=b^{2}\/4c$), $V(\\phi^{(1)})=V(\\phi^{(2)})=V(\\phi^{(3)})$, $V(\\phi)$ has three degenerated global minima. For the case with gravity, the critical value of $a$ is not $a_{c}$ but a smaller effective critical value $a_{*}$. In this case, $a_{c}=a_{*}=0.837$ \\cite{zhao}. The line element in this model is considered as\n\\begin{equation}\\label{metric}\n ds^{2}=g_{ab}dx^{a}dx^{b}=\\mathrm{e}^{2A(y)}\\eta_{\\mu\\nu}dx^{\\mu}dx^{\\nu}+dy^{2},\n\\end{equation}\n\n\\noindent where $\\mu,\\nu=0,1,2,3$, $\\eta_{\\mu\\nu}=\\mathrm{diag}(-1,1,1,1)$, and $\\mathrm{e}^{2A}$ is the so-called warp factor. 
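\n\nA brief numerical orientation may be useful here (an illustrative evaluation of (\\ref{pot1}) and (\\ref{v}) for the parameter values $b=2$, $c=1$ used in the figures below; these numbers are not quoted from Ref. \\cite{zhao}). For these values the flat-space critical value is $a_{c}=b^{2}\/4c=1$, while the effective critical value in the presence of gravity is the smaller $a_{*}=0.837$, so the choices $a=0$, $a=0.8$ and $a=0.836$ considered below approach $a_{*}$ from below. For $a=0.8$ the global minima sit at\n\\begin{equation*}\n v^{2}=\\frac{\\sqrt{b^{2}-3ac}+b}{3c}=\\frac{\\sqrt{1.6}+2}{3}\\simeq 1.088\\,,\\qquad v\\simeq 1.04\\,.\n\\end{equation*}\n\\noindent These reference values are used again when the normalizability bound for the fermionic zero mode is evaluated below.\n\n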
We suppose that $A=A(y)$ and $\\phi=\\phi(y)$.\n\n\n\\subsection{Effects of the variation of the mass parameter $a$ on the thick brane}\n\nFor this model, the equations of motion are\n\\begin{equation}\\label{em1b}\n \\phi^{\\prime\\prime}=-4A^{\\prime}\\phi^{\\prime}+\\frac{dV(\\phi)}{d\\phi},\n\\end{equation}\n\\begin{equation}\\label{em2b}\n A^{\\prime\\prime}+2A^{\\prime}\\,^{2}=-\\frac{1}{3}\\,\\phi^{\\prime}\\,^{2}\n \\frac{2}{3}\\,(V+\\Lambda)\\,,\n\\end{equation}\n\\begin{equation}\\label{em3b}\n A^{\\prime}\\,^{2}=\\frac{1}{6}\\,\\phi^{\\prime}\\,^{2}-\\frac{1}{3}\\,(V+\\Lambda)\\,.\n\\end{equation}\n\n\\noindent It is possible to rewrite (\\ref{em2b}) and (\\ref{em3b}) as\n\\begin{equation}\\label{em3c}\nA^{\\prime\\prime}=-\\frac{2}{3}\\,\\phi^{\\prime}\\,^{2}\\,.\n\\end{equation}\n\n\\noindent The boundary conditions can be read as follows\n\\begin{equation}\\label{bc1}\n A(0)=A^{\\prime}(0)=\\phi(0)=0\\,,\n\\end{equation}\n\\begin{equation}\\label{bc2}\n \\phi(+\\infty)=-\\phi(-\\infty)=v.\n\\end{equation}\n\n\nThe matter energy density has the form\n\\begin{equation}\\label{de}\n T_{00}=\\rho(y)=\\mathrm{e}^{2A(y)}\\left[ \\frac{1}{2}\\,\\left( \\frac{d\\phi}{dy\n \\right)^{2}+V\\left(\\phi\\right) \\right].\n\\end{equation}\n\n\\noindent At this point, it is also instructive to analyze the matter energy of the toy model\n\\begin{equation}\\label{ephi}\n E_{\\phi}=\\int^{\\infty}_{-\\infty}dy\\,T_{00}\\,,\n\\end{equation}\n\n\\noindent substituting (\\ref{de}) in (\\ref{ephi}), we get\n\\begin{equation}\\label{ephi2}\n E_{\\phi}=\\int^{\\infty}_{-\\infty}dy\\,\\mathrm{e}^{2A(y)}\\left[ \\frac{1}{2}\\,\\left( \\frac{d\\phi}{dy\n \\right)^{2}+V\\left(\\phi\\right) \\right]\\,,\n\\end{equation}\n\n\\noindent using (\\ref{em3b}) and (\\ref{em3c}), we obtain the value of the matter energy given by\n\\begin{equation}\\label{ephi3}\n E_{\\phi}=\\frac{3}{2}\\,\\left[\\mathrm{e}^{2A(-\\infty)}A^{\\prime}(-\\infty)\n \\mathrm{e}^{2A(\\infty)}A^{\\prime}(\\infty) \\right]-\\Lambda\\int^{\\infty}_{-\\infty}d\n \\mathrm{e}^{2A(y)}.\n\\end{equation}\n\n\\noindent As $\\Lambda=0$, the value of the matter energy depends on the asymptotic behavior of the warp factor. If $y\\rightarrow\\pm\\infty$ then $\\phi^{\\prime}(\\pm\\infty)=0$ and by the analysis to Eq. (\\ref{em2b}), we can see that $A(\\pm\\infty)\\propto -|\\,y|$. Therefore, $\\mathrm{e}^{2A(\\pm\\infty)}\\rightarrow0$ and the value of the matter energy is zero. This fact is the same to the case of branes with generalized dynamics \\cite{arro}.\n\n\\noindent The scalar curvature (or Ricci scalar) is given by\n\\begin{equation}\\label{ricci}\n R=-4(5A^{\\prime}\\,^{2}+2A^{\\prime\\prime}).\n\\end{equation}\n\n\\noindent The profiles of the matter energy density is shown in Fig. (\\ref{fde}) for some values of $a$. Figure (\\ref{fde}) clearly shows that for $a=0$ the matter energy density has not a single-peak around $y=0$. The core of the brane is localized at $y=0$ for $a=0$, because this region has a positive matter energy density. On the other hand, as the value of $a$ is increasing, we can see that the single brane splits into two sub-branes and as $a\\rightarrow a_{*}$ each sub-brane is a thick brane. This phenomenon is so-called of brane splitting \\cite{angel}. From the peak of the matter energy density is evident know where the core of the branes are located. Therefore, the brane does not get thicker with the increases of the value of the mass parameter $a$, as argued in Ref. \\cite{zhao}. 
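\n\n\\noindent As a short consistency check, derived here from (\\ref{em3b}), (\\ref{em3c}) and (\\ref{ricci}) with $\\Lambda=0$ as in the discussion of the matter energy above, the asymptotic value of the scalar curvature can be written down explicitly. Far from the brane $\\phi^{\\prime}\\rightarrow 0$, hence $A^{\\prime\\prime}\\rightarrow 0$ and $A^{\\prime}\\,^{2}\\rightarrow -V(\\pm v)\/3$, which is positive because $V(\\pm v)<0$ for $a<a_{c}$. Therefore\n\\begin{equation*}\n R\\rightarrow -20\\,A^{\\prime}\\,^{2}=-\\frac{20}{3}\\,|V(\\pm v)|<0\\,,\n\\end{equation*}\n\\noindent a negative constant, which is the $AdS_{5}$ behavior of the bulk referred to below.\n\n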
The profiles of the matter energy density and the Ricci scalar are shown in Fig. (\\ref{desc}) for $a=0.8$. Note that the presence of regions with positive Ricci scalar is connected to the capability to trap matter near to the core of the brane \\cite{alm} and it reinforces the conclusion of the analyzes from the matter energy density. Also note that far from the brane, $R$ tends to a negative constant, characterizing the $AdS_{5}$ limit from the bulk.\n\n\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=7cm, angle=0]{de.eps}\n\\end{center}\n\\caption{The profiles of the energy density for $b=2$, $c=1$, $a=0$ (thin line), $a=0.8$ (dashed line) and $a=0.836$ (dotted line).} \\label{fde}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=7cm, angle=0]{deera08.eps}\n\\end{center}\n\\caption{The profiles of the matter energy density (thin line) and Ricci scalar (thick line) for $b=2$, $c=1$ and $a=0.8$.} \\label{desc}\n\\end{figure}\n\n\\subsection{Fermion localization}\n\nThe action for a Dirac spinor field coupled with the scalar fields by a general Yukawa coupling is\n\\begin{equation}\\label{ad}\n S=\\int d^{5}x\\sqrt{|\\,g|}\\left[ i\\bar{\\Psi}\\Gamma^{M}\\nabla_{M}\\Psi-\\eta\\bar{\\Psi}F(\\phi)\\Psi \\right]\\,,\n\\end{equation}\n\\noindent where $\\eta$ is the positive coupling constant between fermions and the scalar field. Moreover,\nwe are considering the covariant derivative $\\nabla_{M}=\\partial_{M}+\\frac{1}{4}\\,\\omega^{\\bar{A}\\bar{B}}_{M}\\Gamma_{\\bar{A}}\\Gamma_{\\bar{B}}$\\,,\nwhere $\\bar{A}$ and $\\bar{B}$, denote the local Lorentz indices and $\\omega^{\\bar{A}\\bar{B}}_{M}$ is\nthe spin connection. Here we consider the field $\\phi$ as a background field. The equation of motion is\nobtained as\n\\begin{equation}\\label{dkp}\ni\\,\\Gamma ^{M }\\nabla_{M}\\Psi-\\eta F(\\phi)\\Psi =0.\n\\end{equation\n\nAt this stage, it is useful to consider the fermionic current. The conservation law for $J^{M}$ follows from the standard procedure and it becomes\n\\begin{equation}\\label{corr}\n\\nabla_{M}J^{M}=\\bar{\\Psi}\\left(\\nabla_{M}\\Gamma^{M}\\right)\\Psi\\,,\n\\end{equation}\n\\noindent where $J^{M}=\\bar{\\Psi}\\Gamma^{M}\\Psi$. Thus, if\n\\begin{equation}\\label{cj0}\n \\nabla_{M}\\Gamma^{M}=0\\,,\n\\end{equation}\n\\noindent then four-current will be conserved. The condition (\\ref{cj0}) is the purely geometrical assertion that the curved-space gamma matrices are covariantly constant.\n\n\\noindent Using the same line element (\\ref{metric}) and the representation for\ngamma matrices $\\Gamma^{M}=\\left( \\mathrm{e}^{-A}\\gamma^{\\mu},-i\\gamma^{5}\\right)$,\nthe condition (\\ref{cj0}) is trivially satisfied and therefore the current is conserved.\n\nThe equation of motion (\\ref{dkp}) becomes\n\\begin{equation}\\label{em}\n\\left[ i\\gamma^{\\mu}\\partial_{\\mu}+\\gamma^{5}\\mathrm{e}^{A}(\\partial_{y}+2\\partial_{y}A\n-\\eta\\,\\mathrm{e}^{A}F(\\phi) \\right]\\Psi=0.\n\\end{equation}\n\\noindent Now, we use the general chiral decomposition\n\\begin{equation}\\label{dchiral}\n \\Psi(x,y)=\\sum_{n}\\psi_{L_{n}}(x)\\alpha_{L_{n}}(y)+\\sum_{n}\\psi_{R_{n}}(x)\\alpha_{R_{n}}(y),\n\\end{equation}\n\\noindent with $\\psi_{L_{n}}(x)=-\\gamma^{5}\\psi_{L_{n}}(x)$ and $\\psi_{R_{n}}(x)=\\gamma^{5}\\psi_{R_{n}}(x)$.\nWith this decomposition $\\psi_{L_{n}}(x)$ and $\\psi_{R_{n}}(x)$ are the left-handed and\nright-handed components of the four-dimensional spinor field, respectively. 
After applying\n(\\ref{dchiral}) in (\\ref{em}), and demanding that $i\\gamma^{\\mu}\\partial_{\\mu}\\psi_{L_{n}}=m_{n}\\psi_{R_{n}}$\nand $i\\gamma^{\\mu}\\partial_{\\mu}\\psi_{R_{n}}=m_{n}\\psi_{L_{n}}$, we obtain two equations\nfor $\\alpha_{L_{n}}$ and $\\alpha_{R_{n}}$\n\\begin{equation}\\label{ea1}\n \\left[ \\partial_{y}+2\\partial_{y}A+\\eta F(\\phi) \\right]\\alpha_{L_{n}}=m_{n}\\mathrm{e}^{-A}\\alpha_{R_{n}}\\,,\n\\end{equation}\n\\begin{equation}\\label{ea2}\n \\left[ \\partial_{y}+2\\partial_{y}A-\\eta F(\\phi) \\right]\\alpha_{R_{n}}=-m_{n}\\mathrm{e}^{-A}\\alpha_{L_{n}}\\,.\n\\end{equation}\n\\noindent Inserting the general chiral decomposition (\\ref{dchiral}) into the action (\\ref{ad}), using (\\ref{ea1}) and (\\ref{ea2}) and also requiring that the result take the form of the standard four-dimensional action for the massive chiral fermions\n\\begin{equation}\\label{ad2}\n S=\\sum_{n}\\int d^{4}x\\, \\bar{\\psi}_{n}\\left( \\gamma^{\\mu}\\partial_{\\mu}-m_{n} \\right)\\psi_{n},\n\\end{equation}\n\\noindent where $\\psi_{n}=\\psi_{L_{n}}+\\psi_{R_{n}}$ and $m_{n}\\ge0$, the functions $\\alpha_{L_{n}}$ and $\\alpha_{R_{n}}$ must obey the following orthonormality conditions\n\\begin{equation}\\label{orto}\n \\int_{-\\infty}^{\\infty}dy\\,\\mathrm{e}^{3A}\\alpha_{Lm}\\alpha_{Rn}=\\delta_{LR}\\delta_{mn}.\n\\end{equation}\n\n\\noindent Implementing the change of variables\n\\begin{equation}\\label{cv}\n z=\\int^{y}_{0}\\mathrm{e}^{-A(y^{\\,\\prime})}dy^{\\,\\prime},\n\\end{equation}\n\n\\noindent $\\alpha_{L_{n}}=\\mathrm{e}^{-2A}L_{n}$ and $\\alpha_{R_{n}}=\\mathrm{e}^{-2A}R_{n}$, we get\n\\begin{equation}\\label{sleft}\n -L_{n}^{\\prime\\prime}(z)+V_{L}(z)L_{n}=m_{n}^{2}L_{n}\\,,\n\\end{equation}\n\\begin{equation}\\label{sright}\n -R_{n}^{\\prime\\prime}(z)+V_{L}(z)R_{n}=m_{n}^{2}R_{n}\\,,\n\\end{equation}\n\\noindent where\n\\begin{eqnarray}\n V_{L}(z) &=& \\eta^{2}\\mathrm{e}^{2A}F^{2}(\\phi)-\\eta\\partial_{z}\\left( \\mathrm{e}^{A}F(\\phi) \\right),\\label{vefa} \\\\\n V_{R}(z) &=& \\eta^{2}\\mathrm{e}^{2A}F^{2}(\\phi)+\\eta\\partial_{z}\\left( \\mathrm{e}^{A}F(\\phi) \\right)\\label{vefb}.\n\\end{eqnarray}\n\\noindent Using the expressions $\\partial_{z}A=\\mathrm{e}^{A(y)}\\partial_{y}A$ and $\\partial_{z}F=\\mathrm{e}^{A(y)}\\partial_{y}F$, we can recast the potentials (\\ref{vefa}) and (\\ref{vefb}) as a function of $y$ \\cite{yu}-\\cite{yo}\n\\begin{eqnarray}\n V_{L}(z(y)) &=& \\eta\\mathrm{e}^{2A}\\left[ \\eta F^{2}-\\partial_{y}F-F\\partial_{y}A(y) \\right],\\label{vya} \\\\\n V_{R}(z(y)) &=& V_{L}(z(y))|_{\\eta\\rightarrow-\\eta}\\,.\\label{vyb}\n\\end{eqnarray}\n\n\\noindent It is worthwhile to note that we can construct the Schr\\\"{o}dinger potentials\n$V_{L}$ and $V_{R}$ from eqs. (\\ref{vya}) and (\\ref{vyb}).\n\nAt this stage, it is instructive to state that with the change of variable (\\ref{cv})\nwe get a geometry to be conformally flat\n\\begin{equation}\\label{cm}\n ds^{2}=\\mathrm{e}^{2A(z)}\\left( \\eta_{\\mu\\nu}dx^{\\mu}dx^{\\nu} +dz^{2}\\right).\n\\end{equation}\n\\noindent Now we focus attention on the condition (\\ref{cj0}) for the line element (\\ref{cm}). In this case we obtain\n\\begin{equation}\\label{mc}\n \\nabla_{M}\\Gamma^{M}= i(\\partial_{z}A(z))\\mathrm{e}^{-A(z)}\\gamma^{5}.\n\\end{equation}\n\n\\noindent Therefore, the current is no longer conserved for the line element (\\ref{cm}) \\cite{arro}. It is known that, in general, the reformulation of the theory in a new conformal frame leads to a different, physically inequivalent theory. 
This issue has already a precedent in cosmological models \\cite{val}.\n\n\\noindent Under this arguments, we only use the change of variable (\\ref{cv}) to have a qualitative analysis of the potential profiles (\\ref{vya}) and (\\ref{vyb}), which is a fundamental ingredient for the fermion localization on the brane.\n\nNow we focus attention on the calculation of the zero mode. Substituting $m_{n}=0$ in (\\ref{ea1}) and (\\ref{ea2}) and using $\\alpha_{L_{n}}=\\mathrm{e}^{-2A}L_{n}$ and $\\alpha_{R_{n}}=\\mathrm{e}^{-2A}R_{n}$, respectively, we get\n\\begin{equation}\\label{mzL}\n L_{0}\\propto \\exp \\left[-\\eta\\int_{0}^{y}dy^{\\prime}F(\\phi) \\right],\n\\end{equation}\n\\begin{equation}\\label{mzR}\n R_{0}\\propto \\exp \\left[\\eta\\int_{0}^{y}dy^{\\prime}F(\\phi) \\right].\n\\end{equation}\n\\noindent This fact is the same to the case of two-dimensional Dirac equation \\cite{luis}. At this point is worthwhile to mention that the normalization of the zero mode and the existence of a minimum for the effective potential at the localization on the brane are essential conditions for the problem of fermion localization on the brane. This fact was already reported in \\cite{yo}.\n\nIn order to guarantee the normalization condition (\\ref{orto}) for the left-handed fermion zero mode (\\ref{mzL}), the integral must be convergent, \\textit{i.e}\n\\begin{equation}\\label{cono}\n \\int^{\\infty}_{-\\infty}dy\\exp\\left[ -A(y)-2\\eta\\int^{y}_{0}dy\\,^{\\prime}F(\\phi(y\\,^{\\prime})) \\right]<\\infty.\n\\end{equation}\n\\noindent This result clearly shows that the normalization of the zero mode is decided by the asymptotic behavior of $F(\\phi(y))$. Furthermore, from (\\ref{vya}) and (\\ref{vyb}), it can be observed that the effective potential profile depends on the $F(\\phi(y))$ choice. This fact implies that the existence of a minimum for the effective potential $V_{L}(z(y))$ or $V_{R}(z(y))$ at the localization on the brane is decided by $F(\\phi(y))$. This point will be more clear when it is considered a specific Yukawa coupling. Therefore, the behavior of $F(\\phi(y))$ plays a leading role for the fermion localization on the brane \\cite{yo}. Having set up the two essential conditions for the problem of fermion localization on the brane, we are now in a position to choice some specific forms for Yukawa couplings.\n\n\\subsection{Zero mode and fermion localization}\n\nFrom now on, we mainly consider the simplest case $F(\\phi)=\\phi$. First, we consider the\nnormalizable problem of the solution. In this case, we only need to consider the asymptotic behavior of the integrand in (\\ref{cono}). It becomes\n\\begin{equation}\\label{in}\n I\\rightarrow\\mathrm{exp}\\left[ -A(\\pm\\infty)-2\\eta\\int^{y}_{0}\\phi(y^{\\prime})dy^{\\prime} \\right]\\,.\n\\end{equation}\n\n\\noindent By the analysis from eq. (\\ref{em2b}), we obtain that $A(\\pm\\infty)\\rightarrow-\\sqrt{|V(\\pm v)|\/3}\\,|\\,y|$. For the integral $\\int dy\\phi$, we only need to consider the asymptotic behavior of $\\phi$ for $y\\rightarrow\\pm\\infty$ \\cite{liu} and as $\\phi(\\pm\\infty)=\\pm v$ the equation (\\ref{in}) becomes\n\\begin{equation}\\label{int}\n I\\rightarrow\\mathrm{exp}\\left[ -2\\left( \\eta\\,v-\\sqrt{|V(\\pm v)|\/12} \\right)|\\,y| \\right]\\,.\n\\end{equation}\n\n\\noindent This result clearly shows that the zero mode of the left-handed fermions is normalized only for $\\eta>\\frac{1}{v}\\,\\sqrt{\\frac{|V(\\pm v)|}{12}}$. 
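\n\nTo make this bound concrete, it can be evaluated for the parameter values used in the figures below (an illustrative estimate, not a value quoted in Ref. \\cite{zhao}): for $b=2$, $c=1$ and $a=0.8$ one finds $v^{2}\\simeq 1.088$ and $V(\\pm v)=a\\,v^{2}-b\\,v^{4}+c\\,v^{6}\\simeq -0.209$, so that the condition becomes\n\\begin{equation*}\n \\eta>\\frac{1}{v}\\,\\sqrt{\\frac{|V(\\pm v)|}{12}}\\simeq 0.13\\,,\n\\end{equation*}\n\\noindent which is comfortably satisfied by the coupling $\\eta=1$ adopted in Figs. (\\ref{pe}) and (\\ref{a}).\n\n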
Now, under the change $\\eta\\rightarrow-\\eta$ ($L_{0}\\rightarrow R_{0}$) we obtain that the right-handed fermions can not be a normalizable zero mode. The shape of the potentials $V_{L}$ and $V_{R}$ are shown in Fig. (\\ref{pe}) for some values of $a$. Figure \\ref{pe}(a) shows that the effective potential $V_{L}$, is indeed a volcano-like potential for $a=0$. As $a$ increases the well structure of $V_{L}$ gets a double well. Figure \\ref{pe}(b) shows that the potential $V_{R}$ has also a well structure, but the minimum of $V_{R}$ is always positive, therefore the potential does not support a zero mode. The shapes of the matter energy density, $V_{L}$ potential and $|\\,L_{0}|^{2}$ are shown in Fig. \\ref{a}. The Fig. \\ref{a}(a) ($a=0$) shows that the zero mode is localized on the brane. On the other hand, Fig. \\ref{a}(b) ($a=0.836$) clearly shows that the normalizable zero mode is localized between the two sub-branes, as a consequence the zero mode is not localized on the brane. Therefore, we can conclude that the zero mode of the left-handed fermions is localized on the brane only as $0\\leq a \\approx a_{*}$.\n\n\\begin{figure}[ht]\n \\begin{minipage}[b]{0.40 \\linewidth}\n \\fbox{\\includegraphics[width=\\linewidth]{vl.eps}}\\\\\n \\end{minipage}\\hfill\n \\begin{minipage}[b]{0.40 \\linewidth}\n \\fbox{\\includegraphics[width=\\linewidth]{vr.eps}}\\\\\n \\end{minipage}\n \\caption{Potential profile: (a) $(V_{L}(y))_{A}$ (left) and (b) $(V_{R}(y))_{A}$ (right) for $\\eta=1$, $b=2$, $c=1$, $a=0$ (thin line), $a=0.8$ (dashed line) and $a=0.836$ (dotted line) }\\label{pe}\n \\end{figure}\n\n\n\\begin{figure}[ht]\n \\begin{minipage}[b]{0.40 \\linewidth}\n \\fbox{\\includegraphics[width=\\linewidth]{mzervla0_3.eps}}\\\\\n \\end{minipage}\\hfill\n \\begin{minipage}[b]{0.40 \\linewidth}\n \\fbox{\\includegraphics[width=\\linewidth]{mzervla0836_3.eps}}\\\\\n \\end{minipage}\n \\caption{The profiles of the Ricci scalar (thin line), $V_{L}(y)$ (thick line) and $|\\,L_{0}|^{2}$ (dashed line) for $\\eta=1$, $b=2$, $c=1$; (a) $a=0$ (left) and (b) $a=0.836$ (right).}\\label{a}\n \\end{figure}\n\n\\section{Conclusions}\n\nWe have reinvestigated the effects of the variation of the mass parameter $a$ on the thick branes as well as the localization of fermions. We showed that the variation of $a$ is associated to the phenomenon of brane splitting, therefore the brane does not get thicker with the increases of the value of $a$, as argued in Ref. \\cite{zhao}. We can conclude that the appearance of two sub-branes is associated to phase transition for $a=a_{*}$ (a disordered phase between two ordered phases). Also, we showed that the value of the matter energy depends on the asymptotic behavior of the warp factor. From the static equation of motion we have analyzed the asymptotic behavior of $A(y)$ and showed that the zero mode of the left-handed fermions for the simplest Yukawa coupling $\\eta\\bar{\\Psi}\\phi\\Psi$ is normalizable under the condition $\\eta>\\frac{1}{v}\\,\\sqrt{\\frac{|V(\\pm v)|}{12}}$ and it can be trapped on the brane only for $0\\leq a \\approx a_{*}$, because the zero mode has a single-peak at the localization of the brane. We also showed that as $a\\rightarrow a_{*}$ the zero mode has a single-peak between the two sub-branes and as a consequence the normalizable zero mode is not localized on the brane. Therefore, the brane with a big value of $a$ would not trap fermions more efficiently, in opposition to what was adverted in Ref. \\cite{zhao}. 
This work completes and revises the analyzing of the research in Ref. \\cite{zhao}, because in that work does not analyze the zero mode in full detail and contain some misconceptions.\n\nAdditionally, we showed that the change of variable $dz=\\mathrm{e}^{-A(y)}dy$ leads to a non conserved current, because the curved-space gamma matrices are not covariantly constant. An interesting issue will be investigate the effects of non-conserved current on resonances modes and bear out the main conclusion of Ref. \\cite{zhao}.\n\n\n\\begin{acknowledgments}\nThis work was supported by means of funds provided by CAPES.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{ Introduction}\n\nComplex contact manifolds are still has many open problems. The importance of this subject is not only the complex version of real contact manifolds also one can find many important informations about complex manifolds, K\\\"ahler manifolds. In addition there are some applications in theoretical physics \\cite{kholodenko2013applications}. Complex contact manifolds has an old history same as real contact manifolds, researchers could not give their attention to the subject. When we look at the 1980s there are very important improvements in the Riemannian geometry of complex contact manifolds. Ishihara and Konishi constructed tensorial relations for a complex almost contact structure and they also presented normality \\cite{ishihara1979real,ishihara1980complex,ishihara1982complex}. The Riemann geometry of complex almost contact metric manifolds could be divided into three notions;\n\\begin{enumerate}\n\t\\item \\textbf{\\textit{IK-normal complex contact metric manifolds}} : A complex contact manifold has a normal contact structure in the sense of Ishihara-Konishi. This type of manifolds were studied in \\cite{ishihara1979real,ishihara1980complex,ishihara1982complex,imada2014construction,imada2015complex,turgut2018h}.\n\t\\item \\textbf{\\textit{Normal complex contact metric manifolds}}: A complex contact manifold has a normal contact structure in the sense of Korkmaz. This type of manifolds were studied in \\cite{korkmaz2000,Korkmaz2003,blair2006corrected,blair2009special,blair2011bochner,blair2012homogeneity,blair2012symmetry,vanli2015curvature,turgut2017conformal,vanli2017complex}.\n\t\\item \\textbf{\\textit{Complex Sasakian manifolds}}: A complex contact manifold with a globally complex contact form and has a normal contact structure in the sense of Korkmaz. This type of manifolds were studied in \\cite{foreman2000complex,fetcu2006harmonic,fetcuadapted}.\n\\end{enumerate}\nIn this work we study on third type of these manifolds. Firstly we adopted the definition of a complex Sasakian manifold by consider the definition of a real Sasakian manifold. Later we give some fundamental equations and we obtain curvature properties. Finally we examine some flatness conditions. We use a general tensor which is defined in \\cite{shaikh2018some} and is called by $ B- $tensor. We prove that a complex Sasakian manifold could not be $ B- $flat. \n\n\n\\section{Preliminaries}\nIn this section we give fundamental facts on complex contact manifolds. For details reader could be read \\cite{foreman1996variational,korkmaz2000,blair2010riemannian}.\n\\begin{definition}\n\tLet $N$ be a complex manifold of odd complex dimension $2p+1$ covered by an\nopen covering $\\mathcal{C}=\\left\\{ \\mathcal{A}_{i}\\right\\} $ consisting of\ncoordinate neighborhoods. 
If there is a holomorphic $1$-form $\\eta _{i}$\non each $\\mathcal{A}_{i}\\in \\mathcal{C}$ in such a way that for any \n\\mathcal{A}_{i},\\mathcal{A}_{j}\\in \\mathcal{C}$ and for a holomorphic function $f_{ij}$ on $\\mathcal{A}_{i}\\cap \\mathcal{A}_{j}\\neq \\emptyset $\n\\begin{equation*}\n\\eta _{i}\\wedge (d\\eta _{j})^{p}\\neq 0\\text{ in }\\mathcal{A}_{i}\\text{, }\n\\end{equation*\n\\begin{equation*}\n\\eta _{i}=f_{ij}\\eta _{j},\\text{ }\\mathcal{A}_{i}\\cap \\mathcal{A\n_{j}\\neq \\emptyset ,\n\\end{equation*\nthen the set $\\left\\{ \\left( \\eta _{i},\\mathcal{A\n_{i}\\right) \\mid \\mathcal{A}_{i}\\in \\mathcal{C}\\right\\} $ of local structures is called complex contact structure and with this structure $N$ is called a complex contact manifold.\n\\end{definition}\n\n\nThe complex contact structure determines a non-integrable distribution $H_i$ by the equation $\\eta_i =0$ such as \n\\begin{equation*}\nH_{i}=\\{X_{P}:\\eta_{i}(X_{P})=0, X_{P}\\in T_{P}N\\}.\n\\end{equation*}\nand a holomorphic vector field $ \\xi_i $\nis defined by \n\n\\begin{equation*}\n\\eta_i(\\xi_i)=1\n\\end{equation*}\nand a complex line bundle is defined by $ E_i=Span\\{\\xi_i\\} $.\\\\\n\\qquad Let $ T^c(N) $ be complexified of tangent bundle of $ (N,J,\\eta_i) $ and let define vector fields \n\\begin{eqnarray*}\n\tU_i=\\xi_i+\\bar{\\xi_i}\\ \\ \\ \\ \\ \\ \\ V_i=-i(\\xi_i+\\bar{\\xi_i})\n\\end{eqnarray*}\nand 1-forms\n\\begin{eqnarray*}\n\tu_i=\\frac{1}{2}(\\eta_i+\\bar{\\eta_i}) \\ \\ \\ \\ \\ \\ v_i=\\frac{1}{2}i(\\eta_i-\\bar{\\eta_i}).\n\\end{eqnarray*}\nTherefore we get \n\\begin{enumerate}\n\t\\item $ V_i=-JU_i $ and $ v_i=u_i\\circ J $\n\t\\item $ U_i=JV_i $ and $ u_i=-v_i\\circ J $\n\t\\item $ u_i(U_i)=v_i(V_i)=1 $ and $ u_i(V_i)=v_i(V_i)=0 $.\n\\end{enumerate}\nThe complexified $ H_i $ and $ E_i $ is defined by \n\\begin{eqnarray*}\n{\tH_i}^c&=&\\{W\\in T^c(N)\\arrowvert u(W)=v(W)=0\\}\\\\\n{\tE_i}^c&=&Span\\{U,V\\}. \n\\end{eqnarray*}\nWe use notation $ \\mathcal{H} $ and $ \\mathcal{V} $ for the union of $ {\tH_i}^c $ and $ E_i^c $ respectively. $ \\mathcal{H} $ is called horizontal distribution and $ \\mathcal{V} $ is called vertical distribution \nand we can write \n\\begin{equation} \\label{TN=H+V}\nTN=\\mathcal{H}\\oplus\\mathcal{V}.\n\\end{equation}\n Ishihara and Konishi \\cite{ishihara1980complex} proved that $N$ admits always a complex contact structure of $ C^{\\infty} $.\n\\begin{definition} \\label{complexalmostscontact}\n\tLet $N$ be a complex manifold with complex structure $ J $, Hermitian metric $ g $ and $\\mathcal{C=}\\left\\{ \n\t\\mathcal{A}_{i}\\right\\} $ be open covering of $N$ with coordinate\n\tneighbourhoods $\\{\\mathcal{A}_{i}\\mathcal{\\}}.$ If $N$ satisfies the\n\tfollowing two conditions then it is called a \\textit{complex almost contact\n\t\tmetric manifold}:\n\t\n\t1. 
In each $\\mathcal{A}_{i}$ there exist $1$-forms $u_{i}$ and \n\tv_{i}=u_{i}\\circ J$, with dual vector fields $U_{i}$ and $V_{i}=-JU_{i}$ and \n\t$(1,1)$ tensor fields $G_{i}$ and $H_{i}=G_{i}J$ such that \n\t\\begin{equation} \\label{G^2veH^2}\n\tH_{i}^{2}=G_{i}^{2}=-I+u_{i}\\otimes U_{i}+v_{i}\\otimes V_{i}\n\t\\end{equation\n\t\\begin{equation*} \\label{H=GJ}\n\tG_{i}J=-JG_{i},\\quad GU_{i}=0,\\quad\n\t\\end{equation*\n\t\\begin{equation*} \\label{g(GX,Y)=-g(X,GY)}\n\tg(X,G_{i}Y)=-g(G_{i}X,Y).\n\t\\end{equation*\n\t2.On $\\mathcal{A}_{i}\\cap \\mathcal{A}_{j}\\neq \\emptyset $ we have \n\t\\begin{eqnarray*}\n\t\tu_{j} &=&au_{i}-bv_{i},\\quad v_{j}=bu_{i}+av_{i},\\; \\\\\n\t\tG_{j} &=&aG_{i}-bH_{i},\\quad H_{j}=bG_{i}+aH_{i}\n\t\\end{eqnarray*\n\twhere $a$ and $b$ are functions on $\\mathcal{U}_{i}\\cap \\mathcal{U}_{j}$\n\twith $a^{2}+b^{2}=1$ \\cite{ishihara1979real,ishihara1980complex}.\n\\end{definition}\nBy direct computation we have \n\\begin{eqnarray} \nH_{i}G_{i} &=&-G_{i}H_{i}=J_{i}+u_{i}\\otimes V_{i}-v_{i} \\otimes U_{i} \\label{HG=-GH} \\label{HG,GH}\\\\\nJ_{i}H_{i} &=&-H_{i}J_{i}=G_{i} \\notag \\\\\nG_{i}U_{i} &=&H_{i}U_{i}=H_{i}V_{i}=0 \\notag \\\\\nu_{i}G_{i} &=&v_{i}G_{i}=u_{i}H_{i}=v_{i}H_{i}=0 \\notag \\\\\nJ_{i}V_{i} &=&U_{i}, \\ g(U_{i},V_{i})=0 \\notag\\\\\ng(H_{i}X,Y) &=&-g(X,H_{i}Y) \\notag.\n\\end{eqnarray}\n\n\nBy the local contact form $\\eta $ is $u-iv$ to within a\nnon-vanishing complex-valued function multiple and the local fields $G$ and \n\\ H$ are related to $du$ and $dv$ by \n\\begin{eqnarray*}\n\tdu(X,Y) &=&g(X,GY)+(\\sigma \\wedge v)(X,Y),~~~ \\\\\n\tdv(X,Y) &=&g(X,HY)-(\\sigma \\wedge u)(X,Y)\n\\end{eqnarray*}\nwhere $\\sigma (X)=g(\\nabla _{X}U,V)$, $\\nabla $ being the Levi-Civita\nconnection of $g$ \\cite{ishihara1980complex}. $ \\sigma $ is called IK-connection \\cite{foreman1996variational}.\n\nIshihara and Konishi \\cite{ishihara1980complex} study on normality of complex almost contact metric manifolds. They defined local tensors\n\\begin{eqnarray*}\n\tS(X,Y) &=&[G,G](X,Y)+2g(X,GY)U-2g(X,HY)V \\\\\n\t&&+2(v(Y)HX-v(X)HY)+\\sigma (GY)HX \\\\\n\t&&-\\sigma (GX)HY+\\sigma (X)GHY-\\sigma (Y)GHX,\n\\end{eqnarray*}\n\\begin{eqnarray*}\n\tT(X,Y) &=&[H,H](X,Y)-2g(X,GY)U+2g(X,HY)V \\\\\n\t&&+2(u(Y)GX-u(X)GY)+\\sigma (HX)GY \\\\\n\t&&-\\sigma (HY)GX+\\sigma (X)GHY-\\sigma (Y)GHX\n\\end{eqnarray*}\nwhere\n\\begin{equation*}\n\\lbrack G,G](X,Y)=(\\nabla _{GX}G)Y-(\\nabla _{GY}G)X-G(\\nabla\n_{X}G)Y+G(\\nabla _{Y}G)X\n\\end{equation*\nis the Nijenhuis torsion of $G$. \n\\begin{definition}\n\\cite{ishihara1980complex} A complex{\\color{white}$ \\eta $}almost{\\color{white}$ \\eta $}contact{\\color{white}$ \\eta $}metric{\\color{white}$ \\eta $}manifold is called IK-Normal{\\color{white}$ \\eta $} if $ S=T=0 $. \n\\end{definition}\nAn IK-Normal manifold has K\\\"ahler structure. In other words a non-K\\\"ahler complex almost contact metric manifold is not to be IK-Normal such Iwasawa manifold. Iwasawa manifold is not K\\\"ahler and it is compact manifold which has symplectic structure. Also Baikousis et al. \\cite{baikoussis1998holomorphic} obtained complex almost contact structure on Iwasawa manifold. Korkmaz gave a weaker definition for normality and Iwasawa manifold is normal in the sense of this definition. 
\n\\begin{definition} [Korkmaz's Definition, \\cite{korkmaz2000} ]\n\tA complex almost contact metric manifold is called normal if\n\t\n\t$\\qquad S(X,Y)=T(X,Y)=0$ \\ for all $X,Y$ in $\\mathcal{H},$ and $\\ $\n\t\n\t$\\qquad S\\left( X,U\\right) =T\\left( X,V\\right) =0$ for all $X.$\n\\end{definition} \n\nAlso for arbitrary vector fields $ X$ on $N$ we have \\cite{ishihara1980complex,foreman1996variational,korkmaz2000} \n\\begin{align}\n\\nabla _{X}U &=-GX+\\sigma (X)V,~~ \\label{nablaXU} ,\n\\ \\ \\ \\nabla _{X}V =-HX-\\sigma (X)U,~~ \\\\\n\\nabla _{U}U &=\\sigma (U)V,~~~\\nabla _{U}V=-\\sigma (U)U \\label{nablaUU} , \\ \\ \n\\nabla _{V}U =\\sigma (V)V,~~~~\\nabla _{V}V=-\\sigma (V)U.~~ \n\\end{align}\n\n\\begin{theorem}\n\tA complex almost\n\tcontact metric manifold is normal if and only\n\tif the covariant derivative of \\ $G$ and $H$ have the following\n\tforms: \n\t\\begin{eqnarray}\n\t(\\nabla _{X}G)Y &=&\\sigma (X)HY-2v(X)JY-u\\left( Y\\right) X\n\t\\label{Yeni normallik G} \\\\\n\t&&-v(Y)JX+v(X)\\left( 2JY_{0}-\\left( \\nabla _{U}J\\right) GY_{0}\\right) \\notag\n\t\\\\\n\t&&+g(X,Y)U+g(JX,Y)V \\notag \\\\\n\t&&-d\\sigma (U,V)v(X)\\left( u(Y)V-v(Y)U\\right) \\notag\n\t\\end{eqnarray}\n\tan\n\t\\begin{eqnarray}\n\t(\\nabla _{X}H)Y &=&-\\sigma (X)GY+2u(X)JY+u(Y)JX \\label{Yeni normallik H} \\\\\n\t&&-v(Y)X+u(X)\\left( -2JY_{0}-\\left( \\nabla _{U}J\\right) GY_{0}\\right) \\notag\n\t\\\\\n\t&&-g(JX,Y)U+g(X,Y)V \\notag \\\\\n\t&&+d\\sigma (U,V)u(X)\\left( u(Y)V-v(Y)U\\right) \\notag\n\t\\end{eqnarray}\n\twhere $X=X_{0}+u(X)U+v(X)V$ and $Y=Y_{0}+u(Y)U+v(Y)V,X_{0},Y_{0}\\in $ \n\t\\mathcal{H}$ \\cite{vanli2015curvature}.\n\\end{theorem}\nFrom this theorem on a normal complex contact metric manifold we have\n\n\\begin{eqnarray*}\n\t(\\nabla _{X}J)Y &=&-2u\\left( X\\right) HY+2v(X)GY+u(X)\\left( 2HY_{0}+\\left(\n\t\\nabla _{U}J\\right) Y_{0}\\right) \\\\\n\t&&+v(X)\\left( -2GY_{0}+\\left( \\nabla _{U}J\\right) JY_{0}\\right) .\n\\end{eqnarray*}\n\nAs we have seen, there are two normality notions for a \\NCM. The other kind of \\NCM s is complex Sasakian manifold. This type of manifolds are normal due to Korkmaz's definition and they were studied in \\cite{foreman2000complex,fetcu2006harmonic,fetcuadapted}. The fundamental difference of this type from others is globally definition of complex contact form. Kobayashi \\cite{kobayashi1959remarks} proved that a complex contact form is globally defined if and only if first Chern class of $ N $ vanishes. Also Foreman obtained following result; \n\\begin{lemma}\n\tLet $ (N,\\eta_i) $ be a complex contact manifold. If $ \\eta_i $ is globally defined i.e $ f_{ij}=1 $ then $ \\sigma=0 $ \\cite{foreman2000complex}. \n\\end{lemma}\n A complex Sasakin manifold is defined as follow; \n\\begin{definition}\n\tLet $\\left(N,G,H,J,U,V,u,v,g\\right) $ be a normal complex contact metric\n\tmanifold and $\\eta =u-iv$ is globally defined. If fundemental 2- forms \n\t\\widetilde{G}$ and $\\widetilde{H}$ is defined by \n\t\\begin{equation*}\n\t\\widetilde{G}\\left( X,Y\\right) =du(X,Y)\\text{ and \\ }\\widetilde{H}\\left(\n\tX,Y\\right) =dv\\left( X,Y\\right)\n\t\\end{equation*\n\tthen $ N $ is called a complex Sasakian manifold, where $X,Y$ are vector fields on \n\t$N$.\n\\end{definition}\nThus we get following result;\n\\begin{theorem}\n\tLet $ N $ be a \\NCM. 
Then $N$ is complex Sasakian if and only\n\tif\\bigskip \n\t\\begin{eqnarray*}\n\t(\\nabla _{X}G)Y&=&-2v(X)HGY-u(Y)X-v(Y)JX+g(X,Y)U+g(JX,Y)V \\\\\n\t(\\nabla _{X}H)Y &=&-2u(X)HGY+u(Y)JX-v(Y)X-g(JX,Y)U+g(X,Y)V.\n\t\\end{eqnarray*}\n\\end{theorem}\nThis result was also given by Ishihara-Konishi\\cite{ishihara1980complex}. But should have been considered that $ N $ is not K\\\"ahler in here. So,we have \n\\begin{eqnarray*}\n\t(\\nabla _{X}J)Y &=&-2u(X)HY+2v(X)GY.\n\\end{eqnarray*}\nFrom (\\ref{nablaXU} ) on a complex Sasakin manifold we get \n\\begin{eqnarray}\n\\nabla _{X}U =-GX,~~ \\ \\ \\ \\nabla _{X}V =-HX. \\label{NablaX-U}\n\\end{eqnarray}\nOn the other hand we have \n\\begin{corollary}\n\tAn IK-normal complex contact metric manifold could not be complex Sasakian \\cite{turgut2018h}. \n\\end{corollary} \n This result support that the geometry of complex Sasakian manifolds has some different properties. \n\\section{Curvature Properties of Complex Sasakian Manifolds}\nIn the Riemannian geometry of contact manifolds curvature properties have an important position. We use these relations for future works. The curvature relations of a complex almost contact metric manifolds were given in \\cite{foreman1996variational}, an IK-Normal manifold were given in \\cite{ishihara1980complex} and a normal complex contact metric manifold were given in \\cite{korkmaz2000,vanli2015curvature}. In this section by taking advantage from these curvature properties, we present curvature relations for complex Sasakian manifold.\nLet $ N $ be a complex Sasakian manifold. Then for $ X,Y \\in \\Gamma(TN) $ we have, \n\\begin{align}\nR\\left( U,V\\right) V&=R\\left( V,U\\right) U=0 \\label{SasakianR(UVV)} \\\\\nR(X,U)U&=X+u(X)U+v(X)V\t\\label{SasakianR(XUU)} \\\\\nR(X,V)V&=X-u(X)U-v(X)V\t\\label{SasakianR(XVV)} \\\\\nR(X,U)V&=-3JX-3u(X)V+3v(X)U \\label{SasakianR(XUV)}\\\\\nR(X,V)U&=0 \\label{SasakianR(XVU)}\\\\\nR(X,Y)U&=v(X)JY-v(Y)JX+2v(X)u(Y)V-2v(Y)u(X)V \\label{SasakianR(XYU)}\\\\\n&+u(Y)X-u(X)Y-2g(JX,Y)V \\notag \\\\\nR(X,Y)V&=3u(X)JY-3u(Y)JX-2u(X)v(Y)U+2u(Y)v(X)U \\label{SasakianR(XYV)} \\\\\n&+v(Y)X-v(X)Y+2g(JX,Y)U \\notag \\\\\nR(U,V)X&=JX+u(X)V-v(X)U\\label{SasakianR(UVX)} \\\\\tR(X,U)Y&=-2v(Y)v(X)U+2u(Y)v(X)V-g(Y,X)U \\label{SasakianR(XUY)}\\\\\n&+u(Y)X+g(JY,X)V \\notag \\\\\nR(X,V)Y&=3u(Y)JX+2u(Y)u(X)V+3g(JY,X)U\\label{SasakianR(XVY)} \\\\&-2v(Y)u(X)U-g(Y,X)V+v(Y)X-2u(X)JY \\notag .\n\\end{align}\n For $X,Y,Z,W$ horizontal vector fields we have \\cite{vanli2015curvature} \n\\begin{equation}\\label{r(gx,gy,gz,gw)}\ng(R(GX,GY)GZ,GW)=g(R(HX,HY)HZ,HW)=g(R(X,Y)Z,W)\n\\end{equation}\nand we have \n\t\\begin{align*}\ng(R(X,GX)GX,X)&+g(R(X,HX)HX,X)+g(R(X,JX)JX,X)=-6g\\left( X,X\\right)\\\\\ng(R(X,GX)Y,GY) &=g(R(X,Y)X,Y)+g(R(X,GY)X,GY)-2g(GX,Y)^{2} \\\\\t\n&-4g(HX,Y)^{2}-2g(X,Y)^{2} +2g(X,X)g(Y,Y)-4g(JX,Y)^{2}\\\\\ng(R(X,HX)Y,HY) &=g(R(X,Y)X,Y)+g(R(X,HY)X,HY)-2g(HX,Y)^{2} \\\\\n&-4g(GX,Y)^{2}-2g(X,Y)^{2}+2g(X,X)g(Y,Y)-4g(JX,Y)^{2} \\\\\ng(R(X,HX)JX,GX)&=-g(R(X,HX)HX,X)-4g(X,X)^{2}\\\\\ng(R(X,JX)HX,GX)&=g(R(X,JX)JX,X)-2g(X,X)^{2}.\n\\end{align*}\n\\begin{align*}\ng(R(GX,HX)HX,GX)&=g(R(X,JX)JX,X) \\\\\ng(R(GX,JX)JX,GX)&=g(R(X,HX)HX,X)\\\\\ng(R(JX,JY)JY,JX)&=g(R(X,Y)Y,X)\\\\\ng(R(X,Y)JX,JY)&=g(R(X,Y)Y,X)+4g(X,GY)^{2}+4g(X,HY)^{2}\\\\\ng(R(Y,JX)JX,Y)&=g(R(X,JY)JY,X) \\\\\ng(R(X,JY)JX,Y)&=g(R(X,JY)JY,X)+4g(X,HY)^{2}+4g(X,GY)^{2} \\\\\ng(R(X,JX)JY,Y)&=-g(R(JX,JY)X,Y)-g(R(JY,X)JX,Y) \\\\\ng(R(X,JX)JY,Y)&=g(R(X,Y)Y,X)+g(R(X,JY)JY,X)\\\\\n&+8\\left(g(X,GY)^{2}+g(X,HY)^{2}\\right). 
\n\\end{align*}\nWe have an nice relation as follow;\n\\begin{corollary}\n\tFor a unit horizontal vector $ X $ on $N$ we have\n\t\\begin{equation}\n\tk\\left( X,GX\\right) +k\\left( X,HX\\right) +k\\left( X,JX\\right) =6. \\label{sectionalrelation}\n\t\\end{equation}\n\\end{corollary}\nThis relation also valid for an IK-normal complex contact metric manifold \\cite{imada2014construction}. \n\\begin{corollary}\n\tLet $N$ be a complex Sasakian manifold and $ X $ be a unit horizontal vector field on $N$. Then for the sectional curvature $ k $ we have \n\t\\begin{equation*}\n\tk(U,V)=0\\, \\ \\ \\text{and} \\ \\ \\ k(X,U)=1.\n\t\\end{equation*}\n\\end{corollary}\nTurgut Vanl\\i\\ and Unal \\cite{vanli2015curvature} presented properties of Ricci curvature tensor\nof a normal{\\color{white}$ \\theta $}complex{\\color{white}$ \\theta $}contact{\\color{white}$ \\theta $} metric manifold. For complex Sasakian case \nwe have following relations; \n \n\t\\begin{eqnarray} \\label{Ricciler}\n\t\t\\rho (U,U) &=&\\rho (V,V)=4p,\\text{ \\ }\\rho (U,V)=0\\\\\n\t\t\\rho (X,U) &=&4pu(X) \\notag,\\ \n\t\t\\rho \\left( X,V\\right) =4pv(X) \\notag\\\\\n\t\t\\rho (X,Y) &=&\\rho (GX,GY)+4p\\left( u(X)u(Y)+v(X)v(Y)\\right) \n\t\t\\label{Ricci(X,Y)=Ricci(GX,GY)+} \\notag\\\\\n\t\t\\rho (X,Y) &=&\\rho (HX,HY)+4p\\left( u(X)u(Y)+v(X)v(Y)\\right) \\notag\n\t\\end{eqnarray}\nwhere $ X,Y \\in \\Gamma(TN) $.\\par \nThe sectional curvature of Riemann manifold give us important information about the geometry of the manifold. In complex manifold we have holomorphic sectional curvature which is the curvature of section is spanned by $ X $ and $ JX $. Similarly in contact manifold we have $ \\phi- $sectional curvature which is the curvature of section is spanned by $ X $ and $ \\phi X $ \\cite{blair2010riemannian}. For complex contact case we have sectional curvature, holomorphic sectional curvature and $ \\mathcal{GH}- $sectional curvature which was given by Korkmaz \\cite{korkmaz2000} as below: \n\\begin{definition}\n\t\\cite{korkmaz2000} Let $ N $ be a \\NCM. $X$ be an unit horizontal vector field on $N$ and $a^{2}+b^{2}=1$. A $\\mathcal{GH-}$section is a plane which is spanned by $X$ and $Y=aGX+bHX$ and the sectional curvature of this plane is called $\\mathcal{GH-}$\\textit{sectional curvature } is defined by \n\t\\mathcal{GH}_{a,b}\\left( X\\right) =k\\left( X,aGX+bHX\\right) ,$ where \n\tk(X,Y)$\\ is the sectional curvature of the plane section spanned by $X$ and $Y$.\n\\end{definition}\n$\\mathcal{GH-}$\\textit{sectional curvature } is denoted by $ \\mathcal{GH}_{a,b} $ and we assume that it does not depend the choice of $a$ and $b$. So we\nwill use $\\mathcal{GH}\\left( X\\right) $ notation. \\par\n\nLet $ N $ be a complex Sasakian manifold. Since the complex contact form is globally defined $ \\mathcal{GH}_{a,b} $ does not depend the choice of $a$ and $b$ in naturally. Also we have \\cite{korkmaz2000}\n\\begin{equation}\nk(X,JX)=\\mathcal{GH}\\left( X\\right) +3. \\label{k(X,JX)=GH(X)+3}\n\\end{equation}\nThus from (\\ref{sectionalrelation}) we obtain\n\t\\begin{equation}\nk\\left( X,GX\\right) +k\\left( X,HX\\right) +\\mathcal{GH}\\left( X\\right) =3. \n\\end{equation}\n\\begin{example}\n\tThe well known example of normal complex contact metric manifold complex Heisenberg group. A globally defined complex contact form and complex almost contact structure on complex Heisenberg group was given by Baikousis et al. \\cite{baikoussis1998holomorphic}. Korkmaz obtained normality of complex Heisenberg group. 
Thus the complex Heisenberg group is an example of a complex Sasakian manifold. For details about the complex Heisenberg group see \\cite{blair2006corrected,foreman2000complex,korkmaz2000,vanli2015curvature}. \n\\end{example}\n\\begin{example}\n\tAnother example of a complex Sasakian manifold was given by Foreman \\cite{foreman2000complex}, who obtained an example from a hyperk\\\"ahler manifold. For details see \\cite{foreman2000complex}. \n\\end{example}\n\\section{Flatness on complex Sasakian manifolds}\nA real Sasakian manifold cannot be flat, i.e. its Riemannian curvature cannot vanish identically \\cite{de2009complex}. We obtain the same result for complex Sasakian manifolds. \n\\begin{theorem}\n\tA complex Sasakian manifold cannot be flat.\n\\end{theorem}\n\n\\begin{proof}\n\tSuppose that a complex Sasakian manifold is flat. Then $R(X,Y)Z=0$\n\tand hence $\\rho \\left( X,Y\\right) =0.$ On the other hand, from (\\ref{Ricciler}) we have \n\t\\begin{equation*}\n\t4pu(X)=0.\n\t\\end{equation*}\n\tTaking $X=U$ gives $p=0$, which is not possible. So the manifold cannot be flat.\n\\end{proof}\n Shaikh and Kundu \\cite{shaikh2018some} generalized curvature tensors and gave the definition of the $B$-tensor. They proved equivalence relations among a number of curvature conditions. \n\\begin{definition}\n\tLet $N$ be a complex Sasakian manifold. The $ (0, 4) $ tensor $ B $ of $N$ is given by\n\t\\begin{eqnarray} \\label{B-TENSOR}\n\tB(X,Y,Z,W) &=& a_{0}R(X,Y,Z,W) + a_{1}R(X,Z,Y,W)\\\\\n\t&+& a_{2}\\rho(Y,Z)g(X,W)+a_{3}\\rho(X,Z)g(Y,W)\\notag\\\\\n\t&+&a_{4}\\rho(X,Y)g(Z,W)+a_{5}\\rho(X,W)g(Y,Z)\\notag \\\\\n\t&+&a_{6}\\rho(Y,W)g(X,Z)+a_{7}\\rho(Z,W)g(X,Y) \\notag \\\\\n\t&+& \\tau\\{a_{8}g(X,W)g(Y,Z)+a_{9}g(X,Z)g(Y,W) \\notag\\\\\n\t&+&a_{10}g(X,Y)g(Z,W)\\} \\notag\n\t\\end{eqnarray}\n\twhere the $ a_{i} $'s are scalars on $ N $ and $ X,Y,Z,W \\in \\Gamma(TN) $.\n\\end{definition}\nFor different values of $ a_{i} $, $ B $ becomes the projective, conformal, concircular, quasi-conformal, conharmonic, etc. curvature tensors (see \\cite{shaikh2018some}). Therefore flatness of the $B$-tensor also determines flatness of these tensors. We say that $N$ is $ B-$flat if $ B=0 $. Then we have the following. \n\n\\begin{theorem}\n\tA complex Sasakian manifold cannot be $ B-$flat. \n\\end{theorem}\n\\begin{proof}\n\tAssume that $ B=0 $. Choosing $Y=Z=U$ and $ X=W=X\\in \\mathcal{H} $, we have \n\t\\begin{eqnarray*}\n\t\t0 &=& a_{0}R(X,U,U,X) + a_{1}R(X,U,U,X) \\\\\n\t\t&+&a_{2}\\rho(U,U)g(X,X)+a_{5}\\rho(X,X)g(U,U)\\\\\n\t\t&+& \\tau\\{a_{8}g(X,X)g(U,U)\\},\n\t\\end{eqnarray*}\n\tand from (\\ref{SasakianR(XUU)}) and (\\ref{Ricciler}) we get\n\\begin{align}\n\\rho(X,X)=-\\frac{a_0+a_1+4pa_2+\\tau a_8}{a_5} \\label{B-tens\u00f6rro(X,X)}.\n\\end{align}\n\tTherefore $ \\rho(X,X) $ is the same constant for every unit horizontal vector field, and so $ \\rho(X,X)=\\rho(Y,Y) $. \n\tOn the other hand, for unit orthogonal horizontal vector fields $ X,Y $, choosing $Z=Y \\in \\mathcal{H}$ and $ W=X\\in \\mathcal{H} $ in the condition $B=0 $ we have \n\t\\begin{equation*}\n\tR(X,Y,Y,X)=-\\frac{a_{2}\\rho(Y,Y)+a_{5}\\rho(X,X)+a_{8}\\tau}{a_{0}+a_{1}}\n\t\\end{equation*}\n\tand thus we get \n\t\t\\begin{align*}\n\tR(X,Y,Y,X)=\\frac{a_{2}a_{0}+a_{1}a_{2}+a_{0}a_{5}+a_{1}a_{5}\n\t\t+4pa_{2}^2+4pa_{2}a_{5}+\\tau a_{8}a_{2}}{a_{5}(a_{0}+a_{1})}.\n\t\\end{align*}\n\tThis shows that the sectional curvature is independent of $ X $ and $ Y $, and then $ k(X,JX)=\\mathcal{GH}(X) $. But by (\\ref{k(X,JX)=GH(X)+3}) this is a contradiction. So $ N $ cannot be $ B- $flat.
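\n\tHere, assuming the convention $R(X,Y,Z,W)=g(R(X,Y)Z,W)$ used above, the reduction in the first display rests on\n\t\\begin{equation*}\n\tR(X,U,U,X)=g(R(X,U)U,X)=g(X,X)=1, \\quad \\rho(U,U)=4p, \\quad g(U,U)=1,\n\t\\end{equation*}\n\twhich follow from (\\ref{SasakianR(XUU)}) and (\\ref{Ricciler}) for a unit horizontal vector field $X$.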
\n\\end{proof}\nIn \\cite{turgut2017conformal} two of the present authors showed that a normal complex contact manifold cannot be conformal, concircular, quasi-conformal or conharmonic flat.\n Another notion of flatness is $ \\phi-T $-flatness for a $(1,1)- $tensor $ T $. Now we examine this notion for a complex Sasakian manifold. \n\\begin{definition}\n\tLet $N$ be a complex Sasakian manifold. If \n\t\\begin{equation}\n\tG^{2}(\\mathcal{B}(GX,GY)GZ)=0\\text{ and }H^{2}(\\mathcal{B}(HX,HY)HZ)=0\n\t\\label{gh-conformal condition}\n\t\\end{equation}\n\tfor $X,Y,Z$ vector fields on $N$, then $N$ is called $\\mathcal{GH}$-$\\mathcal{B}$-flat.\n\\end{definition}\n\\begin{theorem}\n\tA complex Sasakian manifold cannot be $\\mathcal{GH}$-$\\mathcal{B}$-flat. \n\\end{theorem}\n\\begin{proof}\n\tLet $ N $ be a complex Sasakian manifold. For $ X,Y, Z,W\\in \\Gamma(\\mathcal{H}) $ we have \n\t\\begin{eqnarray*}\ng(B(GX,GY)GZ, GW)&&=B(GX,GY,GZ,GW)\\\\\n &&= a_{0}R(GX,GY,GZ,GW) + a_{1}R(GX,GZ,GY,GW)\\\\\n&&+ a_{2}\\rho(GY,GZ)g(GX,GW)+a_{3}\\rho(GX,GZ)g(GY,GW)\\notag\\\\\n&&+a_{4}\\rho(GX,GY)g(GZ,GW)+a_{5}\\rho(GX,GW)g(GY,GZ)\\notag \\\\\n&&+a_{6}\\rho(GY,GW)g(GX,GZ)+a_{7}\\rho(GZ,GW)g(GX,GY) \\notag \\\\\n&&+ \\tau\\{a_{8}g(GX,GW)g(GY,GZ)+a_{9}g(GX,GZ)g(GY,GW) \\notag\\\\\n&&+a_{10}g(GX,GY)g(GZ,GW)\\}. \\notag\n\\end{eqnarray*}\nFrom (\\ref{r(gx,gy,gz,gw)}) and (\\ref{Ricciler}) we get \n\\begin{eqnarray*}\ng(B(GX,GY)GZ, GW)=B(GX,GY,GZ,GW)=B(X,Y,Z,W).\n\\end{eqnarray*}\nAlso, we know from Theorem 4 that $ B\\neq 0 $; thus we have \n\\begin{equation*}\ng(B(GX,GY)GZ, GW)=-g(GB(GX,GY)GZ, W)\\neq 0.\n\\end{equation*}\nSince $ W \\in \\Gamma(\\mathcal{H}) $, we get $GB(GX,GY)GZ\\neq0$ and so $G^{2}B(GX,GY)GZ\\neq0$. Hence $N$ cannot be $\\mathcal{GH}$-$\\mathcal{B}$-flat. \n\\end{proof}\nBy these theorems we obtain that on a complex Sasakian manifold the Weyl conformal curvature tensor, projective curvature tensor, concircular curvature tensor, conharmonic curvature tensor, quasi conformal curvature tensor, pseudo projective curvature tensor, quasi-concircular curvature tensor, pseudo quasi conformal curvature tensor, M-projective curvature tensor, $ W_{i} $-curvature tensors, $ i = 1, 2, \\ldots ,9 $, $ W_{i}^{*} $-curvature tensors and the $ T $-curvature tensor (see \\cite{shaikh2018some} for details on these tensors) cannot vanish. In other words, a complex Sasakian manifold cannot be transformed to a flat space under any transformation such as conformal, projective, concircular, etc.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nMore than fifty years have passed since the first detection of a cosmic ray with an energy $\\gtrsim 10^{20}$ {\\rm eV} \\cite{Linsley}, yet the nature of UHECRs and the identity of their sources remain a mystery. \nThe nature of UHECR sources depends on their composition, and we focus here on sources capable of producing UHE protons.\nDifficulties with the leading source candidates, AGNs and GRBs, led Farrar and Gruzinov \\cite{fg09} \nto propose that UHECRs, if protons, must be produced in a new class of powerful AGN transients, as could arise from the tidal disruption of a star or an extreme accretion disk instability around a supermassive black hole at a galactic center.
\nFollowing the detection of a relativistic outflow from the tidal disruption event (TDE) \\textit{Swift}\\ J1644+57 \\cite{Burrows+11,Bloom+11,Zauderer+11}\nand from \n\\textit{Swift}\\ J2058+05 \\cite{Cenko+12} we confront here the viability of a scenario in which UHECRs are protons produced in the jets of tidal disruption events (TDEs). We begin by recalling the requirements for UHECR acceleration. Then, we use observations and modeling of \\textit{Swift}\\ J1644+57, \na likely example of a TDE jet seen in ``blazar-mode'' (i.e., looking down the axis of the jet), to test whether individual TDE jets satisfy the Hillas criterion necessary for accelerating protons to $10^{20}$ {\\rm eV}. Finally, using the recently measured TDE rate \\cite{vfRate14}, we examine whether TDEs can account for the observed UHECR energy injection rate and whether they provide a sufficiently large number of active sources to explain the lack of strong clustering in the arrival direction distribution \\cite{augerSrcDen13}. \n\n\n\\section{Conditions on Sources of UHECRs}\nIn order for a CR to be confined during the acceleration process, its Larmor radius must remain smaller than the size, $R$, of the accelerating system. This places a strict lower bound on $B R$ for UHECR acceleration known as the Hillas criterion, valid for any acceleration mechanism that involves magnetic fields \\cite{hillas84}: \n\\begin{equation}\\label{conf}\nB R \\gtrsim 3\\times 10^{17} ~ \\Gamma ^{-1} \\, Z^{-1} \\, E_{20} ~ {\\rm Gauss \\,cm},\n\\end{equation}\nwhere $B$ is the magnetic field, $\\Gamma$ is the bulk Lorentz factor of the jet,\n $Z$ is the charge of the UHECR, and $E$ the CR's energy with $E_{20} \\equiv E\/ 10^{20} {\\rm eV}$. Eq. (\\ref{conf}) implies a lower bound on the total Poynting luminosity required to accelerate protons to UHE, for which the total bolometric luminosity can be taken as a surrogate \\cite{fg09}:\n\\begin{equation}\\label{lumi}\n L_{\\rm bol}\n \\approx {1\\over 6}c\\, \\Gamma ^4 \\, (B R)^2 \\, \\gtrsim \\, 10^{45}\\, \\Gamma ^2\\, (E_{20}\/Z)^2 \\, {\\rm erg\/s}.\n\\end{equation}\nIt was shown in \\cite{fg09} that if the conditions Eqs. (\\ref{conf},\\ref{lumi}) are met in an AGN-like jet, cooling and interaction with photons prior to escape from the accelerator are not the limiting factors in the maximum energy. The strong dependence on $Z$ of Eqs. (\\ref{conf},\\ref{lumi}) indicates that the constraints on the UHECR sources are very different for protons or nuclei: for protons the sources must be amongst the most luminous known EM sources, while for nuclei the requirements are much more modest. \n\nDenoting the energy injection rate of UHECRs in the range $10^{18}-10^{20}$ eV by ${\\dot E}_{_{UCR}} \\equiv {\\dot E}_{44} \\,\\, 10^{44} \\,{\\rm erg} \\, {\\rm Mpc}^{-3} \\, {\\rm yr}^{-1}$, for continuous sources the density of sources implied by Eq. (\\ref{lumi}) is \n\\begin{equation} \\label{num}\nn_{\\rm src} \\approx 3 \\times 10^{-9} \\frac{{\\dot E}_{44}}{\\epsilon_{_{UCR}} \\, \\Gamma ^2\\, (E_{20}\/Z)^2} ~ {\\rm Mpc}^{-3},\n\\end{equation}\nwhere $\\epsilon_{_{UCR}}\\equiv L_{_{UCR}}\/L_{\\rm bol}$ is the luminosity in UHECRs relative to the Poynting luminosity. From the observed UHECR spectrum, \\cite{katz+GRB-UHECR09} estimates ${\\dot E}_{44} = 2.3$ to 4.5 for source spectra $\\sim E^{-2} \\, {\\rm to} \\, E^{-2.5}$, with O(1) uncertainty.
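\n\nAs a simple numerical sanity check of Eqs. (\\ref{conf})--(\\ref{num}), the short script below (a minimal sketch in Gaussian-cgs units; it assumes a proton with $E=10^{20}$ eV, $\\Gamma=1$ and $\\epsilon_{_{UCR}}=1$, and the variable names are purely illustrative) evaluates the minimum $BR$, the corresponding minimum luminosity, and the implied density of continuous sources:\n\\begin{verbatim}\n# order-of-magnitude check of Eqs. (1)-(3)\ne_esu, c_cgs = 4.803e-10, 2.998e10   # esu, cm/s\nerg_per_eV   = 1.602e-12\nMpc, yr      = 3.086e24, 3.156e7     # cm, s\n\nE, Z, Gamma = 1e20 * erg_per_eV, 1, 1.0\n\nBR_min = E / (Z * e_esu * Gamma)             # Eq. (1)\nL_min  = c_cgs / 6.0 * Gamma**4 * BR_min**2  # Eq. (2)\n\nEdot  = 1e44 / (Mpc**3 * yr)     # erg cm^-3 s^-1\nn_src = Edot / L_min             # Eq. (3), eps_UCR = 1\nprint(BR_min, L_min, n_src * Mpc**3)\n# -> ~3e17 G cm, ~6e44 erg/s, a few x 1e-9 Mpc^-3\n\\end{verbatim}\nThe printed values are consistent with the estimates quoted above.\n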
Thus one continuous source with $\\epsilon_{_{UCR}} \\sim 1$ within the GZK distance ($\\approx 200$ Mpc) would be sufficient to produce the entire observed flux. \nHowever, the lack of clustering \\cite{augerSrcDen13} in the arrival directions of UHECRs with energies above 70 EeV, implies a constraint on the density of sources whose stringency depends on the characteristic maximum deflections of the UHECRs: for $ 30^{\\circ} $ ($3^{\\circ}$), the source density must be greater than \n\\begin{equation} \\label{nlimsrc}\nn_{\\rm min} = 2 \\times 10^{-5} ~ ( 7 \\times 10^{-4}) ~ {\\rm Mpc}^{-3}. \n\\end{equation}\nThus efficient continuous protonic sources are incompatible with the source abundance requirement \n\\cite{fg09,WL09,MuraseTakami09}. Note that if deflections in the Galactic and intergalactic magnetic fields are so strong that UHECR arrival directions do not reflect the direction of their sources, the bound Eq. (\\ref{nlimsrc}) does not apply. However in this case, the observed correlation \\cite{TAaniso14,augerAniso14} with local structure would not be explained.\n\nEq. (\\ref{num}) can be reconciled with the observed minimum source density, Eq. (\\ref{nlimsrc}), if UHECR production is very inefficient with $\\epsilon_{_{UCR}} \\ll 1$. However, inefficiency is not a solution, since even the weakest bound in Eq. (\\ref{nlimsrc}) requires $\\approx 700$ sources within the GZK distance, whereas powerful steady sources (of any kind) with luminosity larger than $10^{45}$ erg\/s are rare \\cite{fg09}. Another way out is via an acceleration mechanism which does not involve magnetic field confinement and thus evades the luminosity requirement Eq. (\\ref{lumi}). However an efficient mechanism of this kind is not known. \n\n\nTransient sources can evade the previous conundrum. They must satisfy the Hillas confinement condition embodied in Eqs. (\\ref{conf}) and (\\ref{lumi}), and furthermore the energy injection condition sets a limit on the energy that must be released in UHECRs in a single transient event:\n\\begin{equation}\n\\label{EUCR}\n{\\cal E}_{_{UCR}} \\equiv {\\dot E}_{_{UCR}} \\, \/ \\, \\Gamma_{_{\\rm UCRtran}} = 10^{51}\\,{\\rm erg} ~ {\\dot{E}}_{44} ~ \\Gamma_{_{\\rm UCRtran, -7}}^{-1},\n\\end{equation} \nwhere $\\Gamma_{_{\\rm UCRtran}} \\equiv \\Gamma_{_{\\rm UCRtran, -7}} \\, 10^{-7}$ Mpc$^{-3}$ yr$^{-1}$ is the rate that the UHECR-producing transients take place. In addition, the number of sources contributing at a given time must be large enough. Deflections in the extragalactic magnetic field spread out the arrival times of UHECRs from an individual transient event. \nIn the approximation that the deflections are small and many, the resultant characteristic arrival time spread is \\cite{waxmanME}:\n\\begin{equation}\n\\label{tau}\n\\tau \\approx \\frac{ D^{2} Z^2 \\langle B^2 \\lambda \\rangle}{9 E^{2}} = \n3 \\times 10^{5}\\, {\\rm yr} \n\\left(\\frac{D_{100} \\, B_{\\rm nG}}{E_{20}\/Z}\\right)^{2} \\lambda_{\\rm Mpc}, \n\\end{equation}\nwhere $D_{100}$ is the distance of the source divided by 100 Mpc, $B_{\\rm nG}$ is the rms random extragalactic magnetic field strength in $nG$, and $\\lambda_{\\rm Mpc}$ is its coherence length in Mpc. 
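\n\nThe normalization of Eq. (\\ref{tau}) can be checked with a similarly short script (again only a sketch: it assumes a $10^{20}$ eV proton, $D=100$ Mpc, $B=1$ nG and $\\lambda=1$ Mpc, and restores the factors of $e$ and $c$ through the Larmor radius $r_{L}=E\/(ZeB)$):\n\\begin{verbatim}\n# arrival-time spread: tau ~ D^2 lambda / (9 c r_L^2)\ne_esu, c_cgs = 4.803e-10, 2.998e10\nMpc, yr, erg_per_eV = 3.086e24, 3.156e7, 1.602e-12\n\nE, Z   = 1e20 * erg_per_eV, 1        # 1e20 eV proton\nB, lam = 1e-9, 1.0 * Mpc             # 1 nG, 1 Mpc coherence\nD      = 100.0 * Mpc\n\nr_L = E / (Z * e_esu * B)            # Larmor radius [cm]\ntau = D**2 * lam / (9.0 * c_cgs * r_L**2)\nprint(tau / yr)                      # -> ~3e5 yr\n\\end{verbatim}\nwhich reproduces the $3\\times 10^{5}$ yr normalization of Eq. (\\ref{tau}).\n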
To be compatible with the Auger bound,\nthe number density $n_{\\rm eff}$ of contributing transient sources at a given time, must satisfy\n\\begin{equation}\n\\label{neff}\nn_{\\rm eff} \\approx \\tau_{5} ~ \\Gamma_{_{\\rm UCRtran, -7}} \\, 10^{-2}\\,{\\rm Mpc}^{-3} \\, \\geq n_{\\rm min} = 2 \\times 10^{-5} ~ {\\rm Mpc}^{-3},\n\\end{equation}\nwhere $ \\tau_{5} \\,10^{5}\\, {\\rm yr}$ is the mean time delay averaged over sources.\n \nIn order for any transient UHECR source type to be viable it must therefore satisfy three requirements, the Hillas condition (Eqs. (\\ref{conf}) or (\\ref{lumi})) and also Eqs. (\\ref{EUCR}) and (\\ref{neff}). \nThe classical transient source candidate, GRBs, \\cite{waxman95} satisfy easily the first and third conditions, but \nthere is debate whether their energy output is sufficient to satisfy the second condition, Eq. (\\ref{EUCR}), unless their UHECR energy output exceeds significantly their photon output \\cite{fg09,eichler+GRB-UHECR10,katz+GRB-UHECR09}. This, and the lack of the expected high energy neutrinos \\cite{IceCubeNatureGRBs12}, makes GRBs less favorable source candidates.\n\nA number of TDE flare candidates have been detected and followed up in real-time: \\textit{Swift}\\ J1644+57 \\cite{Burrows+11,Bloom+11,Zauderer+11}, PS1-10jh \\cite{gezari+Nature12} and PS1-11af \\cite{chornock+14}. Two candidates had been found earlier in archival SDSS data \\cite{vf11} and one was subsequently found in archival Swift data \\textit{Swift}\\ J2058.4+0516 \\cite{cenkoSw2058}. (Still earlier TDE candidates were put forward in \\cite{donley02,Gezari08}, but AGN-flare background could not be well characterized for those observations, so the origins of the flares were uncertain.) \n \n\\section{TDEs as UHECR sources}\n\nIn this section we address whether TDE jets are good source candidates for the protonic UHECR scenario.\nWe begin with the Hillas condition, Eq. (\\ref{conf}), which must be satisfied for any UHECR source that is based on EM acceleration. Here the recent observations of \\textit{Swift}\\ J1644+57 provide us with an example of a TDE jet with very good multi-wavelength follow-up, enabling the Hillas criterion (Eq. \\ref{conf}) to be directly checked.\n\n\\subsection{\\textit{Swift}\\ J1644+57 and the Hillas criterion}\n\n\\textit{Swift}\\ J1644+57 was detected on March 25th 2011 by the {\\it Swift} satellite.\nIts location at the nucleus of an inactive galaxy made it immediately a strong TDE candidate. It is uniquely suitable for testing whether TDE jets can satisfy the Hillas condition, Eq. (\\ref{conf}), because of the thorough multi-wavelength monitoring from its inception for more than 600 days, which has enabled detailed modeling of the conditions in its jet. \nThe observations \\cite{Burrows+11,Bloom+11,Zauderer+11} revealed two different\nemission sites: an inner emission region where the X-rays are emitted, and an outer region\nwhere the radio emission is produced. \n\n Basic models for the TDE emission follow the ideas of a gamma-ray bursts, c.f., e.g. \n\\cite{piran04}: the central engine of the TDE produces a pair of relativistic jets via the\nBlandford-Znajeck \\cite{bz77} process, with internal dissipation shocks within the jets accelerating\nparticles and producing the X-ray emission at relatively short distances from the central engine.\nAt larger distances, the outflow interacts with the surrounding matter; this slows it down and produces the radio emission, in an afterglow-like manner \\cite{Zauderer+11}. 
In the following we consider both the X-ray and radio-emitting regions as possible sites for UHECR acceleration. \nWe discuss first the radio emitting region where the situation is clearer, as we can use simple equipartition arguments. We then discuss the situation within the X-ray emitting jet. \n\nThe conditions within the radio emitting region have been analyzed using relativistic equipartition considerations (see the Appendix). In the relativistic regime the equipartition analysis depends on the geometry of the emitting regions and there are two possible solutions \\cite{Barniol+13b} shown in Fig. \\ref{fig:J1644_radio}.\nFor the most reasonable geometry -- a narrow jet with an opening angle 0.1 -- the equipartition value of $B R$ is slightly larger than $10^{17}$ Gauss cm. Given \nthat the equipartition estimates \nyield a lower limit on the energy, this can be considered compatible with the Hillas condition. A more detailed model that takes into account the inter-relation between the X-ray jet and the radio emitting electrons \\cite{Kumar+13} yields $B R$ larger by a factor of a few than the simple equipartition estimate \n\\cite{Barniol+13b} (see Fig. \\ref{fig:J1644_radio}); at early times $B R \\approx 3 \\times 10^{17}$ Gauss cm. \nThus, the outer radio emitting region of the \\textit{Swift}\\ J1644+57 TDE jet appears likely to have had conditions for UHECR acceleration. \n\n\\begin{figure}[b!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{Figure1cropped.eps}\n\\caption{ The equipartition values of $ B R $ as a function of time, for different relativistic models for the TDE \\textit{Swift}\\ J1644+57: a narrow jet with an opening angle $0.1$ (red dots), a wide jet (green dots) and the detailed model of \\cite{Kumar+13} (solid line). } \\label{fig:J1644_radio} \n\\end{figure}\n\nDuring the first two days, the X-ray luminosity of \\textit{Swift}\\ J1644+57 fluctuated reaching isotropic equivalent peak luminosities of $\\approx 10^{48}$ ergs\/sec. Models of the X-ray emission as arising from a relativistic blob of plasma \\cite{Burrows+11} yield \nestimates of $B R$ ranging from a few $\\times 10^{15}$ to $5 \\times 10^{17}$ Gauss cm, depending on \n the dominant energy of the jet (Poynting flux or baryonic), the emission mechanism (synchrotron or external IC), the position of the emission region and the relative contributions of the disk and the jet. \nIt is possible but not certain that the X-ray emitting regions in \\textit{Swift}\\ J1644+57 also had conditions for UHECR acceleration. If so, from this point of view TDEs resemble powerful AGNs in the way they satisfy their Hillas condition, with UHECR acceleration being possible in two different regions. \n\n\\subsection{Energy budget and source abundance}\nWith a total energy of $\\approx 10^{54} \\,(M_{*}\/M_{\\odot})$ erg available in a tidal disruption event, only a small fraction -- $ 10^{-3} \\, (\\langle M_{*}\\rangle\/M_{\\odot}) \\, {\\dot{E}}_{44} \\, \\Gamma_{_{\\rm UCRtran, -7}}^{-1}$, where $M_{*}$ is the mass of the tidally disrupted star -- needs to go to UHECR production in order to satisfy Eq. (\\ref{EUCR}), if the rate of TDEs producing jets capable of accelerating UHECRs is adequate. \n\nThe rate of TDEs in {\\it inactive} galaxies has recently been determined based on the discovery of two TDEs in a search of SDSS Stripe 82 \\cite{vfRate14}. 
In volumetric terms\n\\begin{equation} \\label{TDErate}\n\\Gamma_{\\rm TDE} = (0.4-0.8) \\cdot 10^{-7\\pm 0.4}\\,{\\rm Mpc}^{-3}\\,{\\rm yr}^{-1},\n\\end{equation}\nwhere the statistical uncertainty is in the exponent and the prefactor range reflects the light curve uncertainty.\nThis result is roughly consistent with earlier theoretical \\cite{WangMerritt04} and observational estimates (within their uncertainties) \\cite{Gezari+09}. However, more refined estimates are needed because not all TDEs have jets and only a fraction of those may be capable of producing UHECRs; in the following we ignore jets weaker than the \\textit{Swift}\\ events which satisfy the Hillas criteria. \n\nAn estimate of the UHECR-producing TDE rate is obtained from the observed TDEs with jets, \\textit{Swift}\\ J1644+57 and J2058+05. Burrows et al. \\cite{Burrows+11} estimate that the observation of one event per seven years by \\textit{Swift}\\ corresponds to an all sky rate of $0.08-3.9$ events per year up to the detection distance of $z=0.8$ which contains a co-moving volume of $\\approx 100\\, {\\rm Gpc}^{3}$. Their subsequent archival discovery of J2058+05 increased this rate by a factor of two and reduced somewhat the uncertainty range, leading to an estimated rate of $ \\approx 3 \\times 10^{-11}$ Mpc$^{-3}$ yr$^{-1}$ of TDE events with jets pointing towards us, or\n\\begin{equation}\n\\label{SwiftJetRate}\n\\Gamma_{_{\\rm UCRtran, -7}} \\approx 3 \\, f_b^{-1} \\, 10^{-4},\n\\end{equation}\nwhere $f_b$ is the beaming factor. \n\nIn J1644+57, the total EM isotropic equivalent emitted energy in X-rays was $ 3 \\times 10^{53}$ergs \\cite{Burrows+11,Bloom+11}; assuming that this is about 1\/3 of the total bolometric EM energy, they estimate that the total isotropic equivalent EM energy injection rate is $ \\approx 10^{54}$ ergs. Cenko et al. \\cite{Cenko+12} estimate a similar total EM isotropic equivalent energy for J2058+05. \nSince the isotropic equivalent energy is a factor $f_{b}^{-1}$ larger than the true emitted energy, the beaming factor cancels between rate and energy factors, yielding an estimated \nEM energy injection rate of $\\approx 3 \\times 10^{43}$ erg \\, Mpc$^{-3}$ yr$^{-1}$ without relying on knowledge of the beaming factor. This falls short of the required energy injection rate for UHECRs, which is $2-4 \\times 10^{44}$ erg Mpc$^{-3}$ yr$^{-1}$. Interestingly a similar situation arises when comparing the observed EM GRB flux with the needed UHECR energy injection rate \n\\cite{fg09,eichler+GRB-UHECR10} (but see however \\cite{katz+GRB-UHECR09}). The crux question here, is the relation between the emission observed in a particular EM band and the energy production in UHECRs. The emission mechanisms and relevant particle energies are so different, that it is far from evident how to relate them.\n\nWe can also estimate the energy of the jet from the energy needed to produce the radio signal, using equipartition arguments and thus deriving a lower limit on the actual energy. Different assumptions about conditions within the radio-emitting region lead to different energy estimates. \nFrom \\cite{Barniol+13b}, we have an estimate of the minimal isotropic equivalent total energy of leptons and magnetic field that is capable of producing the radio emission produced in J1644+57, $\\approx 10^{51}$ erg; including the energy of the accompanying protons increases the estimated minimal energy by a factor $\\sim 5$. 
Unlike the relativistic inner jet producing the X-rays, the outflow has slowed to being only mildly relativistic with a low Lorentz factor by the time it produces the observed radio, so that the radio emission is isotropic even when it arises from a jet. Therefore, the factor $f_b^{-1}$ in the true rate of jetted-TDEs, Eq. (\\ref{SwiftJetRate}), does not cancel out and the implied energy injection rate (including the protonic contribution) is of order $ 2 \\times 10^{44} (f_b\/10^{-3})^{-1} {\\rm erg} \\, {\\rm Mpc}^{-3}\\,{\\rm yr}^{-1} $, roughly what is needed as a UHECR flux. \n\nThe above discussion suggests that TDE jets alone could satisfy the needed UHECR injection rate, but underlines the importance of investigating what relationship should be expected between the EM and UHECR spectra and total luminosity. \n\nThe above discussion produced two independent, compatible estimates of the rate of TDEs with jets. Using the \\textit{Swift}\\ observed rate and estimating the beaming factor to be $f_{b} \\approx 10^{-2} - 10^{-3}$ based on the Lorentz factor of $\\sim 10-20$ estimated for \\textit{Swift}\\ J1644+57 \\cite{Burrows+11}, Eq. (\\ref{SwiftJetRate}) gives $\\Gamma_{_{\\rm UCRtran, -7}}^{-1} \\gtrsim 3 \\cdot 10^{-2}$. Or, taking the total rate from \\cite{vfRate14} and a jet fraction $\\sim 0.1-0.2$ based on the fraction of TDEs detected in the radio \\cite{vV+Radio13}, gives $ \\Gamma_{_{\\rm UCRtran, -7}} \\lesssim 0.4$. Using the lower estimate, \nEq. (\\ref{neff}) gives $n_{\\rm eff} \\approx 3 \\, \\tau_{5} \\,10^{-4}~ {\\rm Mpc}^{-3}$, comfortably compatible with the Auger source density limit in the low-deflection scenario, $n_{\\rm min} = 2 \\times 10^{-5} ~ {\\rm Mpc}^{-3}$. \n\n\n\\section*{Composition}\n\nThe reader can ask why it is interesting to consider the possibility that UHECRs are predominantly or exclusively protonic, in view of the observed depth-of-shower-maximum distribution of AUGER which favors a predominantly mixed composition of intermediate mass nuclei, if interpreted with current hadronic interaction models tuned at the LHC \\cite{augerXmaxMeas14,augerXmaxComp14}. First, the recently finalized Auger analysis \\cite{augerXmaxComp14} finds a protonic component persisting to highest energies. Second, a mixed composition requires a very hard injection spectrum incompatible with shock acceleration \\cite{shahamPiran13,Aloisio+13} and a composition at the source which has been argued to be strange and unlikely \\cite{shahamPiran13}. Third, a predominantly proton composition is of particular interest because both Auger and TA find evidence of correlations between UHECRs above 55 EeV and the local matter distribution \\cite{TAaniso14,augerAniso14}, although anisotropy {\\it per se} does not exclude mixed composition, particularly for the case of a single, nearby source \\cite{Piran11,rfICRC13,kfs14}. The final and most compelling virtue of a predominantly proton composition is that it naturally explains the shape of the spectrum from below $10^{18}$ eV to the highest energies, including the observed ``dip'' structure around $10^{18.5}$ eV and the cutoff at highest energies, without needing an ad-hoc and fine-tuned transitional component between Galactic and extragalactic cosmic rays \\cite{berezGGdip05,AlBerezGaz12} with additional parameters to tune the composition and maximum energy by hand. \n\nThe reader might also ask whether it is legitimate to set aside inferences on composition from current hadronic interaction models; the answer is ``yes''. 
The nucleon-nucleon CM energy in the collision of a $10^{18}$ eV proton with the air -- 140 TeV -- is a factor 20 above current the LHC CM energy, so that the models must be extrapolated far into uncharted territory. Furthermore, detailed comparisons by the Auger collaboration of the model predictions with a variety of observed shower properties reveals several discrepancies, including that the models underpredict the muon content of the ground shower by 30\\% or more \\cite{ICRC13topdown,augerHorizMuons14}, and that the model which does the best with respect to the muons at ground level (EPOS-LHC) is in the most serious contradiction with the observed depth of muon production in the atmosphere \\cite{augerMPD14}.\n\n\\section*{Summary}\n \n\nTo conclude, we have shown that a scenario in which UHECRs are predominantly or purely protons can be realized, with acceleration occurring in transient AGN-like jets created in stellar tidal disruption events. A well-studied example of such a TDE-jet, \\textit{Swift}\\ J1644+57, displays inner and outer emission sites in which collisionless shocks satisfy the Hillas criteria.\nThus we propose that, like in AGN models and GRB models, the basic shock acceleration mechanism is applicable for UHECR acceleration in TDE jets. As shown in \\cite{fg09}, the conditions in such jets are such that the radiation fields within the outflow are not large enough to cool the UHECRs before they escape. Thus both the outer and inner emission regions in TDEs may in principle be viable UHECR sources. \n\nWe also investigated whether the total observed flux of UHECRs is compatible with the UHECR injection rate that can be expected for TDE jets; although a more thorough theoretical understanding of the UHECR acceleration mechanism is needed for a definitive conclusion, present evidence indicates the energetics are satisfactory. Finally, we showed that the effective number of sources predicted in the protonic-UHECRs-from-TDE-jets scenario, is compatible with the even the most stringent Auger bound, i.e., the case that typical deflections are less than $3^{\\circ}$.\n\nUnlike for GRBs, the TDE-jet model for UHECR production cannot be tested directly by association of an observed transient event with a signature of UHECRs. In the case of GRBs, prompt neutrinos are produced via photoproduction of charged pions in the source, which arrive approximately simultaneously with the gammas. (The UHECRs themselves arrive 10's or 100's of thousands of years after the gammas or neutrinos, due to the magnetic deflections discussed previously.) By contrast, the level of prompt neutrino production in a TDE-jet is much lower, because the radiation field in a TDE jet is less, inhibiting photopion production. Moreover the duration of UHECR production in a TDE jet is weeks or months, so even the prompt neutrinos are broadly spread in arrival times. \n\nThe conjecture of predominantly protonic composition, the role of transients in UHECR production, and the TDE-jet model can be tested purely observationally, as follows. \\\\\n\\noindent $\\bullet$ Whether UHECRs are protons or nuclei can in principle be determined without relying on hadronic interaction models to infer composition, by detecting or placing sufficiently strong limits on VHE photons and neutrinos produced during the propagation, as their spectra distinguish UHECR protons from nuclei. 
If UHECRs are predominantly protonic, as shown above (updating earlier arguments \\cite{fg09,WL09,MuraseTakami09}) their primary sources must be transients, with TDE jets the leading candidate. \n\\\\\n\\noindent $\\bullet$ Whether sources are continuous or transient can be determined from the spectrum of UHECRs from a single source, because UHECRs arriving at a given epoch from a transient have similar values of rigidity $\\equiv E\/Z$ rather than displaying a power-law spectrum \\cite{waxmanME}, c.f., Eq. (\\ref{tau}). \n\\\\\n\\noindent $\\bullet$ \nIf sources are confirmed to be transients, the presence or absence of VHE neutrinos accompanying the transient EM outburst, will distinguish between GRB and TDEs being the sources. \n\\\\\n\\noindent $\\bullet$ As far as is presently known, the galaxies hosting TDEs are generally representative of all galaxies; if so, UHECR arrival directions would correlate (only) with the large scale structure, after taking into account Galactic and extragalactic magnetic deflections. However if the rate of TDEs with jets is enhanced in active galactic nuclei, as conjectured in \\cite{fg09}, an enhanced correlation of UHECRs with AGNs relative to random galaxies could potentially be seen.\n\n\\noindent{\\bf Acknowledgements:}\nWe thank Rodolfo Barniol-Duran for discussions and for help preparing the figure, and Kohta Murase for helpful discussions. The research of GRF was supported in part by NSF-PHY-1212538; she thanks the Racah Institute for its hospitality during the initial stages of this work. GRF is a member of the Pierre Auger Collaboration and acknowledges valuable interactions with Auger colleagues. The research of TP was supported by the ERC advanced research grant ``GRBs'' the I-CORE (grant No 1829\/12) and a grant from the Israel Space Agency SELA; he thanks the \nLagrange institute de Paris for hospitality while this research was concluded. \n\n\\section{Appendix: Details of \\textit{Swift}\\ J1644+57 Analyses}\n\n\\noindent {\\bf The Radio Emitting Region:}\\\\\nRadio observations of Sw1644 began 0.9 days after the onset of the trigger \\cite{Zauderer+11} and lasted for about $ 600$ days \\cite{Berger+12,Zauderer+13}. Wide frequency coverage began at 5 days. At that time, the peak of the spectral energy distribution (SED) was at $ \\nu_p \\approx 345$ GHz, with a peak flux $F_p = 35$ mJy. The peak frequency and flux decreased to $\\sim 5$ GHz and $0.5$ mJy at 570 days. We begin by discussing the classical equipartition method of interpreting radio observations \\cite{Pacholczyk1970,ScottReadhead77,Chevalier98,kumarNarayan09}, and its relativistic generalization \\cite{Barniol+13a}. A direct application to the Sw1644 observations \\cite{Zauderer+11} is likely overly naive, as we discuss subsequently.\n\nThe radio emitting region is characterized by four unknowns: the size, $R$, the magnetic field, $B$, the total number of emitting electrons, $N$, and their typical Lorentz factor, $\\gamma_e$. The total energy of the emitting region is the sum of the electrons' energy $N m_e c^2 \\gamma_e$ and the magnetic field energy $B^2 R^3 \/6$, the baryons' energy being unimportant in the Newtonian case. \nIdentifying the spectral peak as the self absorption frequency \nand using the standard expressions for the synchrotron frequency, synchrotron flux and the self-absorbed flux (see e.g. 
\\cite{Barniol+13a} for details) one can eliminate 3 of the 4 parameters and express, e.g., $B$, $N$ and $\\gamma_e$ in terms of $R$ and the observables $\\nu_p$, $F_p$ and $z$. One then \nobtains $ E = C_1 \/R_{17}^6 + C_2 R_{17}^{11}$, where the first term is the electrons' energy and the second the magnetic field energy. The constants $C_1$ and $C_2$ are given in term of the observables: \n$C_1 = 4.4 \\times 10^{50} \\,{\\rm erg} \\, ( F_{p}^4 \\, d_{28}^8 \\, \\nu_{p,10}^{-7} \\, \\, (1+z)^{-11} ) $ and \n$C_2 = 2.1 \\times 10^{46} \\,{\\rm erg} \\, ( F_{p}^{-4} \\, d_{28}^{-8} \\, \\nu_{p,10}^{10} \\, (1+z)^{14} ), $ where $d_{28}(z)$ is the luminosity distance in units of $10^{28}$cm and $F_p$ is the peak flux measured in mJy. \n\nThe energy is minimized when the electrons' energy is roughly equal\nto the magnetic energy, or put differently, when the system is in equipartition. \nThe size of the system is strongly constrained, as the energy is a very steep function of $R$ both above and below the minimum. \nAs we have three equations and four unknowns we can choose a different independent variable. For our purpose $B R$ is most suitable and in this case one obtains $ E = \\tilde C_1 \/( B R)^{6\/5} + \\tilde C_2 (B R)^{11\/5}$. This dependence is less steep and hence the resulting value for $BR$ is less constrained by these considerations. If $E$ is an order of magnitude above the minimal value, $B R$ can be a factor-10 lower or a factor-3 higher than at equipartition. \n\nWhen applying the equipartition considerations to Sw1644 one has to take into account that the outflow is relativistic \nin this case. The relativistic equipartition estimates are somewhat more complicated than the\nNewtonian ones. A detailed equipartiton formalism for relativistic outflows was developed recently by \\cite{Barniol+13a}.\nLike in the Newtonian case, the total energy depends very steeply on $R$, as\n$ E = \\hat C_1 \/R^6 + \\hat C_2 R^{11}$, where $\\hat C_1$ and $\\hat C_2$ depend on the observed quantities\nbut now also on the outflow Lorentz factor, $\\Gamma$, and on the specific geometry of the emitting region (see \\cite{Barniol+13a} for details). \nNote that here the kinetic energy of the baryons within the relativistic outflow should also be included in the total energy of the system. The bulk Lorentz factor, $\\Gamma$, can be determined using time of arrival arguments; the geometrical factors involved have to be guessed. Given the very steep dependence of the total energy on $R$, $R$ is still well constrained by the energy-minimization, equipartition considerations. Using these arguments \\cite{Barniol+13b} find that for the most reasonable geometry -- a narrow jet with an opening angle 0.1 -- the equipartition value of $B R$ in Sw1644 is slightly larger than $10^{17}$ Gauss cm \n(see Fig. \\ref{fig:J1644_radio}). Given \nthat the equipartition estimates give a lower limit on the energy (that assumes maximal efficiency), this can be considered compatible with the Hillas condition. Furthermore and independently, at the time of the last observations the minimal (equipartition) energy is $0.8 \\times 10^{51}$ ergs \\cite{Barniol+13b}, which is consistent with the energy required to account for the observed UHECR spectrum given the rate of TDEs, estimated above.\n\nHowever the naive equipartition analysis can be doubted, since applying it leads to the conclusion that the energy of the jet increases by about a factor of 20 from the initial observations at around 5 days, to the final observations. 
The apparent increase required in the jet energy appears in other approaches as well, as noticed first\nby \\cite{Berger+12}, who analyzed the data based on GRB afterglow modeling. This interpretation requires an energy supply to the radio emitting region. Such an energy supply is inconsistent with the continous decrease in the X-ray luminosity during this period \\cite{Berger+12}, which supposedly reflects the activity of the inner engine and the accretion rate. \n\nKumar et al. \\cite{Kumar+13} suggested that the puzzling behavior comes about because the X-ray jet passes through the radio emitting region, causing the radio emitting electrons to be continuously cooled via Inverse Compton (IC) scattering with these X-ray photons. This efficient IC cooling decreases the observed synchrotron radio flux relative to the equipartition estimate, causing the equipartition analysis to yield a lower energy than the true energy content of the system, resulting in an erroneously-low inferred value for $B R$. At later times the X-ray flux diminishes, the IC cooling ceases, and the synchrotron flux increases, consistent with the late-time observations and obviating the need for an increase of the \nenergy of the jet. \\cite{Kumar+13} estimate $B R$ to be larger by a factor of a few than the simple equipartition estimate \n\\cite{Barniol+13b}. The results from their analysis are shown as the solid line in Fig. \\ref{fig:J1644_radio}; at early times $B R \\approx 3 \\times 10^{17}$ Gauss cm. This is just the value needed to accelerate $10^{20}$ eV UHECRs.\n\nTo summarize, even with naive application of equipartition, the values of the total energy and $B R$ within the radio emitting region of TDE J1644 are marginally compatible with those needed to accelerate UHECRs. Given \nthat the equipartition estimates give a lower limit on the energy (i.e., assumes maximal efficiency), and that the more physically satisfactory modeling of \\cite{Kumar+13} yields a larger estimate comfortably compatible with accelerating protons to UHE, we conclude that the outer region of the Sw 1644 TDE jet likely has conditions for UHECR acceleration. \n\n\\noindent{\\bf The X-ray emitting region:} \\\\\nThe isotropic equivalent X-ray luminosity is $\\approx 10^{48}$ ergs\/sec. If this were coming from the accretion disk, then by the argument of \\cite{fg09} we would conclude that Eq. (\\ref{lumi}) was easily satisfied within the jet. However since in \\textit{Swift}\\ J1644 we are viewing the jet close to its axis, there can be additional contributions to the X-ray emission, which must be modeled before $B R$ in the inner jet region can be inferred. Unfortunately, the X-ray observations are much less constraining on the conditions within the X-ray emitting regions than are the radio observations on the conditions in the outer region. The X-ray spectrum is a power law; if combined with the NIR observation it yields a steep slope (steeper than 1\/3). The Fermi upper limits on the GeV emission suggest a suppression of the high energy emission due to photon-photon opacity. Overall, only a single component has been observed, with no clear evidence of the peak frequency and only an upper limit on the high energy IC component. Therefore there is significant freedom in modeling this emission. \n\nRef. \\cite{Burrows+11} models the emission (following the blazer emission model of \\cite{GhiselliniTavecchio09}) as arising from a relativistic blob of plasma. They \nhave put forward three models for this spectrum. 
\nThe models differ in the dominant energy of the jet (Poynting flux or baryonic), the emission mechanims (synchrotron or external IC), the position of the emission region and the relative contributions of the disk and the jet. \nAccording to these models the X-rays are generated between $10^{14}$ and $10^{16}$ cm from the black hole. Estimates of $BR$ range from a few $\\times 10^{15}$ for model 3 to $5 \\times 10^{17}$ for model 2.\n\n\n\\defAstrophys.\\ J.{Astrophys.\\ J.}\n\\defNature{Nature}\n\\defAstrophys.\\ J. Lett.{Astrophys.\\ J. Lett.}\n\\defAstrophys.\\ J.{Astrophys.\\ J.}\n\\defAstron.\\ Astrophys.{Astron.\\ Astrophys.}\n\\defPhys. Rev. D{Phys. Rev. D}\n\\defPhys.\\ Rep.{Phys.\\ Rep.}\n\\defMonth. Not. RAS {Month. Not. RAS }\n\\defAnnual Rev. Astron. \\& Astrophys.{Annual Rev. Astron. \\& Astrophys.}\n\\defAstron. \\& Astrophys. Rev.{Astron. \\& Astrophys. Rev.}\n\\defAstronom. J.{Astronom. J.}\n\\defJCAP{JCAP}\n\n\n\\bibliographystyle{apsrev4-1.bst}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nElemental cerium is stark example of the complicating effects of magnetism in metals. In principle a simple system, cerium at room temperature and ambient pressure presents a monatomic Bravais lattice (the face-centered-cubic $\\gamma$ phase). However, under the application of modest pressures, cerium undergoes a spectacular first-order transition to a low temperature $\\alpha$ phase. Both $\\alpha$ and $\\gamma$ are isostructural f.c.c. phases, but differ by $\\sim 15 \\%$ in volume, and have markedly different magnetic susceptibilities and resistivities.\\cite{bestreview} There are competing explanations for the transition, which variously invoke a ``Kondo Volume Collapse''\\cite{KVC} mechanism, a ``Mott Transition'',\\cite{MT} or some entropic mechanism\\cite{entropy,jpcm_celath}. In all cases however, it is generally agreed that the large volume $\\gamma$ phase contains relatively larger localized magnetic moments (evinced by the higher susceptibility) and relatively less itinerant electrons (evinced by the higher resistivity). Beyond that, the nature of the transisiton remains contentious.\n\nOne of the main complications in the study of cerium is that although the {$\\gamma - \\alpha$} transition occurs on cooling over a wide range of pressures, it is avoided at ambient pressure. In that case, an intervening dhcp phase forms from the $\\gamma$ phase, and transforms into the $\\alpha$ phase only at lower temperature.\\cite{bestreview} There are commonly two methods used to resolve this problem. Measurements are either performed under pressure on pure cerium, or else the cerium is alloyed with other elements known to suppress the dhcp phase (typically thorium). Results from these two avenues of investigation are not always consistent - for example, it has been claimed that the phonon entropy plays little role in the {$\\gamma - \\alpha$} transition in a {Ce$_{0.9}$Th$_{0.1}$} alloy,\\cite{neutron1,neutron2} while studies on pure cerium under pressure have returned the opposite result.\\cite{puretron,PNASxray} Nevertheless, the alloying of cerium has generated a family of related compounds, in which the order of the transition, the magnitude of the volume collapse, and the transition temperature can be smoothly varied.\\cite{fiskthompson,lashtricrit}\n\nOne such alloy, {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}, is of particular interest in regards to spin-lattice coupling at the phase transition. 
Alloying with thorium suppresses the dhcp phase, while alloying with lanthanum causes the transition to become continuous and supresses the critical temperature.\\cite{fiskthompson, lashtricrit} It has been reported that cooling {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$} in high magnetic fields can suppress the resistivity change associated with the volume collapse to lower temperature, and that the application of pulsed magnetic fields can induce a large moment below the critical temperature.\\cite{jpcm_celath} These measurements map out a phase boundary for the {$\\gamma - \\alpha$} transition as a function of magnetic field, and suggest that the volume collapse can be fully suppressed by $\\mu_{0} H \\lesssim 56$ Tesla. Here we report a direct study of the structure of {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}, using synchrotron x-ray powder diffraction in pulsed magnetic fields as high as 28 Tesla. Lattice parameter and structural correlation length are extracted throughout the $H - T$ phase diagram.\n\n\n\n\\section{Experiment}\n\nSamples of {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$} were prepared at Los Alamos by arc-melting elemental cerium, lanthanum and thorium in a zirconium gettered argon atmosphere. The button was melted and flipped 15 times to ensure mixing. The final melt was cooled in a round-bottomed trough, which was 5 mm wide. The resulting $\\sim$ 25 mm long sample was unidirectionally rolled $\\sim$ 10 times to achieve a finished thickness of 50 microns. A section of the as-rolled material was heat treated at 450$^\\circ$C for 1 hour to remove the rolling texture before a final sample, 2 mm in diameter, was cut from the sheet with a razor blade. The sample was subsequently mounted inside a blind hole in a sapphire bracket, and sealed with a sapphire cap and stycast epoxy, to ensure good thermal contact and prevent oxidization. The sapphire bracket was anchored to the cold finger of the double-funnel solenoid pulsed magnet at the Advanced Photon Source (APS), which facilitates diffraction at up to 22$^{\\circ}$ of scattering angle, to temperatures as low as $\\sim 5$ K and in magnetic fields as high as $\\sim$ 30 Tesla.\\cite{solenoid1,solenoid2} This system features a resistive magnet coil (of Tohoku design) in a liquid nitrogen bath, connected to a 40 kJ capacitor bank. The bank is capable of discharging a 3 kV charge through the coil in a $\\sim 6$ ms half-sine wave pulse, generating a peak current of $\\sim 10$ kA and a peak magnetic field of $\\sim 30$ Tesla. The heat generated in the resistive coil during a pulse must be allowed to dissipate before the system can be pulsed again. At peak field values, this requires a wait time $\\gtrsim$ 8 minutes between pulses. The sample to be studied is both thermally and vibrationally isolated from the coil. This new instrument offers a complementary measurement geometry to the APS split-pair pulsed magnet developed previously.\\cite{firstrsi,ttopmag} \n\n\\begin{figure} \n\\centering \n\\includegraphics[width=8.1cm]{fig1.eps}\n\\caption {Top: The inicident x-ray pulse (the narrow pulse) is timed to arrive during the magnetic field peak (broader pulse). In this case, a 5.75 ms magnetic field pulse to ~15.4 Tesla weighted by an x-ray exposure with a 1 ms fullwidth gives an average magnetic field value of $\\sim$ 14.9 Tesla, integrated over $\\pm$ 0.5 Tesla for the exposure. Middle: Raw data collected on the image plate for a single 1 ms exposure. 
Bottom: The same data, radially integrated and converted to $|\\bf{q}|$.}\n\\label{fig:1}\n\\end{figure}\n\nExperiments were performed at the 6ID-B station at the APS. 30 keV photons were selected using a Si(111) monochromator, with the mirror removed to provide an unfocused beam. The monochromator was detuned so as to reduce higher harmonic contamination. The incident beam size was defined by slits, with an illuminated area on sample of 0.2 mm X 1.0 mm. A $\\sim 1$ ms pulse of x-rays was defined using a pair of fast platinum shutters, and the arrival of this pulse at the sample was synchronized with the peak of the magnetic field pulse. Scattered x-rays were recorded on a two-dimensional image plate detector. In this configuration, our system is similar to the pulsed magnet instrument in operation at the European Synchrotron Radiation Facility (ESRF).\\cite{detlefs_ins} To calibrate our instrument, we repeated the measurements\\cite{solenoid2} of Detlefs \\textit{et al.} on polycrystalline TbVO$_4$ in pulsed magnetic fields, as previously measured by the ESRF instrument.\\cite{detlefs_tbvo} We reproduced the reported high field peak splitting in the tetragonal phase of TbVO$_4$,\\cite{detlefs_tbvo} validating our measurement scheme.\\cite{solenoid1,solenoid2} We performed our measurements of {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$} with the axis of the solenoid coincident with the incident beam, and the detector positioned so as to capture the majority of the solenoid exit window. In this way, at 30 keV the first two powder rings (corresponding to the $<$111$>$ and $<$200$>$ peaks) are completely captured by the detector, allowing a full radial integration and excellent counting statistics even for a single 1 ms exposure. The current through the solenoid and the x-ray flux incident on the sample were monitored as a function of time by tracing the output of a high-resolution current monitor and an air-filled ion chamber on a digital storage oscilloscope. A typical trace is shown in Fig. 1, along with data from a single exposure on the image plate. In order to extract the correlation length and lattice parameter from these diffraction patterns, each of the two lowest lying peaks was fit individually, generating two independent sets of data, which were subsequently compared. In all cases, the properties extracted from the two peaks aggreed within the errorbars of the measurement, and the quantities quoted in the following are the average of the two.\n\n\n\\section{Zero-Field Results}\n\nPowder diffraction patterns at room temperature show a single phase material, with symmetry and lattice constant consistent with the $\\gamma$ phase of {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}. On cooling, we observe the first two Bragg peaks to shift to higher $|\\bf{q}|$, while also broadening by a factor of $\\sim 2$ and exhibiting a reduction in peak intensity (see Fig. 2). This is consistent with a collapse in volume, accompanied by a striking reduction in the structural correlation length. It is interesting to note that we observe a single phase at all temperatures, in contrast to recent neutron diffraction measurements on {Ce$_{0.9}$Th$_{0.1}$} which reported a coexistence of both the $\\alpha$ and $\\gamma$ phases over a wide range of temperatures.\\cite{neutron2} Our sample was initially cooled directly to 8K to characterize the magnitude of the volume collapse, then warmed to 250 K while collecting data. The volume was observed to approach the $\\gamma$ phase value quite gradually. 
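\n\nThe conversion from fitted peak parameters to the quantities discussed in this and the following paragraphs can be sketched in a few lines (the inputs below are illustrative placeholders rather than measured values; the sketch assumes an f.c.c. cell, a Gaussian resolution function, an Ornstein-Zernike lineshape whose Lorentzian half-width equals the inverse correlation length, and the standard Olivero-Longbothum approximation for the Voigt width):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\n# illustrative fitted <111> parameters (placeholders)\nq_111 = 2.11     # peak centre [1/Angstrom]\nfG    = 0.010    # Gaussian (resolution) FWHM [1/A]\nfV    = 0.020    # measured Voigt FWHM at low T [1/A]\n\n# cubic lattice parameter and volume sensitivity\na = 2 * np.pi * np.sqrt(3.0) / q_111   # 2 pi sqrt(h^2+k^2+l^2)/q\nprint(a, 3 * 1e-3 / q_111)             # a [A]; dV/V per 1e-3 1/A\n\n# Lorentzian FWHM from the Voigt width relation, then xi\ndef voigt_fwhm(fL):\n    return 0.5346 * fL + np.sqrt(0.2166 * fL**2 + fG**2)\n\nfL = brentq(lambda x: voigt_fwhm(x) - fV, 1e-6, fV)\nxi = 2.0 / fL        # Ornstein-Zernike xi = 1/HWHM [Angstrom]\nprint(fL, xi)\n\\end{verbatim}\nFor these placeholder inputs the sketch returns $a \\approx 5.16$ \\AA\\ and $\\xi \\approx 14$ nm, i.e. the order of magnitude of the correlation lengths plotted in Fig. 3.\n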
Next we cooled slowly and measured the lattice parameters with a fine point density. The resulting continuous but hysteretic volume collapse curves are plotted in Fig. 3. Surprisingly, on our second cooling we found that the absolute size of the volume collpase was diminished - thermal cycling of the sample is detrimental to the phase transition.\n\n\\begin{figure} \n\\centering \n\\includegraphics[width=8.5cm]{fig2.eps}\n\\caption {Zero-field temperature dependence of the $<$111$>$ Bragg reflection. The peak shifts to higher $|\\bf{q}|$ and broadens on cooling. The sample is a single phase at all temperatures. Resolution limited peaks at high temperature are well described by Gaussians, while a Voigt shape at low temperature is consistent with an Ornstein-Zernike scattering function with a short correlation length (solid lines).}\n\\label{fig:2}\n\\end{figure}\n\n\\begin{figure} \n\\centering \n\\includegraphics[width=8.5cm]{fig3.eps}\n\\caption {Zero-field volume collapse in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}. Sample was warmed from 8 K to 250 K ($\\blacksquare$), then cooled back to 10 K ($\\bullet$). This thermal cycling lead to a reduction in the magnitude of the volume collapse, shown by the dashed lines. The inset shows the Ornstein-Zernike correlation length extracted from the cooling measurements (see text). The correlated regions of the collapsed phase have average size of only $\\sim 20$ nm.}\n\\label{fig:3}\n\\end{figure}\n\n\nThe observed peak broadening can be modelled phenomenologically, by assuming that the high temperature patterns show long-range ordered and resolution limited scattering, while the low temperature broadening arises from a reduction in the Ornstein-Zernike correlation length. Our high temperature peaks are well described by Gaussians, and we are assuming a Lorentzian form for the scattering function (see Ref.[\\onlinecite{strucfluc}] for details). Therefore, the measured scattering at low temperature should be well described by the convolution of a Lorentzian and a Gaussian - a Voigt function. Since the width of this Voigt function is related to the width of the Gaussian (the resolution) and the width of the Lorentzian (the inverse correlation length), the numerically extracted fullwidth of the Bragg peaks can be converted to a resolution-deconvoluted correlation length. The solid lines in Fig. 2 demontrate the degree of agreement between these lineshapes and the measured data, and the extracted correlation length is plotted in the inset of Fig. 3. Our resolution limit is found to be on the order of a few microns, however, the low temperature collapsed phase exhibits correlation lengths which are roughly two orders of magnitude shorter than this. Below $\\sim$ 30 K, the $\\alpha$ phase correlations are seen to extend over an average range of $\\sim$ 20 nm. Since our alloy is $\\sim$ 80$\\%$ cerium, the mean distance between dopant atoms (either lanthanum or thorium) is on the order of 2 nm. Thus, while the inclusion of lanthanum and thorium in the alloy does not completely hinder the formation of the $\\alpha$ phase, and each correlated region on average contains multiple dopant atoms, nevertheless these atoms act to strongly disorder the sample at the nanoscale within the $\\alpha$ phase. Conversely, within the $\\gamma$ phase, individual grains are well correlated over micron-sized regions despite the same level of chemical disorder.\n\n\\section{results in pulsed magnetic field}\n\nMotivated by the results of Ref. 
[\\onlinecite{jpcm_celath}], we next sought to characterize the effect of applied magnetic field on the volume collapse transition in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}. Currently, the available capacitance at the APS limits our measurements to applied magnetic fields $\\lesssim$ 30 Tesla, well below the 56 Tesla estimated to fully suppress the {$\\gamma - \\alpha$} transition to zero Kelvin.\\cite{jpcm_celath} However, inspection of the resistivity measurements of Ref. [\\onlinecite{jpcm_celath}] reveals that our highest achievable magnetic fields should be sufficient to suppress the temperature of the volume collapse inflection point by $\\sim$ 8 K. We can maximize our sensitivity to this effect by cooling to the inflection point of the volume collapse ($\\sim$ 35 K) and pulsing to high magnetic field. The slope of $\\frac{\\Delta V}{V}$ is large at this temperature, and therefore an 8 K shift in the inflection point should cause a $\\sim$ 0.7 $\\%$ change in volume at 35 K, well within our resolution. However, as can be seen from Fig. 4, we failed to observe this effect. After cooling to 35 K, we collected five diffraction patterns at zero field, which served to define the zero field Bragg peak positions to an accuracy better than $\\sim 10^{-3} \\AA^{-1}$. Next, we collected ten patterns in pulsed field, with the average applied field over the x-ray exposure being $\\sim 28 \\pm 1$ Tesla. We observed no measurable change in the Bragg peak positions in any individual pattern. Nor did we observe any cumulative effect from multiple high-field pulses. Our measurements therefore constrain any volume changes at (35 K, 28 T) to be less than $\\sim$ 0.07 $\\%$, a full order of magnitude smaller than what would be expected.\\cite{jpcm_celath} \n\n\\begin{figure} \n\\centering \n\\includegraphics[width=8.5cm]{fig4.eps}\n\\caption {Observed and expected effects of 28 Tesla pulsed magnetic fields on the $<$111$>$ Bragg peak at T=35 K. There is no discernable change in the position or width of the peak. The expected curve is calculated based on the position and width change that would arise from an 8 K shift in the transition temperature, consistent with the results of Ref. [\\onlinecite{jpcm_celath}].}\n\\label{fig:4}\n\\end{figure}\n\n\\begin{figure} \n\\centering \n\\includegraphics[width=8.5cm]{fig5.eps}\n\\caption {Volume collapse ($\\Delta V\/V$) and Ornstein-Zernike correlation length ($\\xi$), comparing zero field cooled measurements and measurements where the system is repeatedly pulsed to 28 T on cooling. No field effect is seen.}\n\\label{fig:5}\n\\end{figure}\n\nIt is important to remember that the evidence for field induced suppression of the volume collapse in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$} for magnetic fields below 30 Tesla comes from resistivity curves, measured by cooling in constant applied magnetic fields. This is procedurally distinct from the zero-field-cooled pulsed measurements reported here. Discrepancy between field-cooled and zero-field-cooled behaviour is a hallmark of spin glass materials, wherein chemical disorder and competing interactions collude to freeze in disordered ground states.\\cite{spinglassrev} We have already shown that chemical disorder in our alloy has a strong effect on the development of $\\alpha$-phase correlations (see Fig. 3). It is therefore tempting to sugest that there may be some glassy behavior at play in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}. 
A direct comparison between field-cooled and zero-field-cooled measurements is therefore highly desirable; however, a truly field-cooled measurement is not achievable with pulsed magnets. We have attempted to approximate one by cooling slowly and pulsing at regular intervals. In order to minimize the obscuring effects of thermal cycling, we performed two identical cooldowns, one after the other, warming to 280 K once in between to reset the system. For each of these measurements, we cooled at a controlled rate of 1 K$\/$min from 280 K. During the first of these cooldowns, we pulsed to 28 T at 10 min (10 K) intervals. During the second cooldown, we collected zero-field data in the same manner, albeit more frequently since there was no need to wait for the coil to cool down. The results are plotted in Fig. 5. Clearly, magnetic fields applied in this way are no more effective at suppressing the volume collapse than the zero-field-cooled methods shown in Fig. 4. \n\nWe are therefore forced to conclude that the available magnetic fields were insufficient to induce any structural effects in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}. If the {$\gamma - \alpha$} transition were continuous for this alloy (as claimed in Ref. [\onlinecite{fiskthompson}]), and the field-induced suppression occurred as claimed in Ref. [\onlinecite{jpcm_celath}], then we would have expected to see a clear shift in $|\bf{q}|$ as denoted in Fig. 4 for the zero-field-cooled measurements, and a clear suppression in temperature should have occurred for the repetitively pulsed measurements reported in Fig. 5. Conversely, if the {$\gamma - \alpha$} transition were first order for this alloy (as claimed in Ref. [\onlinecite{lashtricrit}]), then our measurements imply that the phase boundary was never crossed in our cooling curve. This would require a steeper increase of the critical field on lowering temperature than was reported in Ref. [\onlinecite{jpcm_celath}]. It is also possible that the alloy is dynamically inhibited on the timescale of the magnetic field pulse, which would be consistent with glassy behavior. Finally, it is possible that the effects reported in Ref. [\onlinecite{jpcm_celath}] were not intrinsic to {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}. Clearly, the issue of field-induced suppression of the {$\gamma - \alpha$} transition requires further investigation in elemental cerium and its alloys.\n\n\n\section{effect of thermal cycling}\n\nAs has been noted in the preceding sections, we have observed a detrimental effect due to thermal cycling. Our initial measurements on as-grown samples revealed well-correlated grains on the order of microns in size. On first cooling, the volume-collapsed $\alpha$ phase was seen to exhibit nanoscale disorder, and on second cooling the absolute size of the volume collapse had diminished (see Fig. 3 and discussion in Section III). Over the course of our week-long experiment, we observed the $\alpha$-phase volume to gradually increase with each thermal cycle, while the $\gamma$-phase volume was seen to gradually decrease. In addition, while the $\alpha$ phase consistently showed nanoscale disorder with correlation lengths of $\sim$ 20 nm, the $\gamma$-phase correlations were observed to shrink, from micron-sized grains in the as-grown sample to correlation lengths of only $\sim$ 150 nm after 11 thermal cycles. Zero-field cooling curves from the second, seventh, and eleventh cycles are plotted in Fig. 6 to illustrate the effect. 
It is likely that the strains associated with the large volume change and spatial variations in doping,\cite{fiskthompson} which are manifest in the $\alpha$-phase disorder, act to introduce new domain boundaries at low temperature, which thereafter persist as defects. This effect is insidious, since the suppression by thermal cycling we observe here could easily be mistaken for suppression by an external perturbation, if that perturbation were varied monotonically during the course of an experiment. Hypothetically, had we measured cooling curves through the volume collapse in DC magnetic fields that were monotonically increased during the course of our measurements, we might easily have misinterpreted the gradual buildup of thermal cycling damage as a field-induced effect. Therefore, it is important to take extra care when mapping out phase diagrams in cerium alloys. More generally, these measurements highlight the need for careful consideration of strain-driven effects in all materials studies, since such effects can often obfuscate results.\n\n\begin{figure}\n\centering \n\includegraphics[width=8.5cm]{fig6.eps}\n\caption {Gradual contamination by nanoscale disorder, induced by thermal cycling through the {$\gamma - \alpha$} transition. The effects are clear in both the magnitude of the volume collapse ($\Delta V\/V$) and the Ornstein-Zernike correlation length ($\xi$). All data were collected on cooling in zero magnetic field. The three data sets represent the second ($\bullet$), seventh ($\blacksquare$), and eleventh ($\blacktriangle$) thermal cycles.}\n\label{fig:6}\n\end{figure}\n\n\section{conclusions} \n\nIn this article, we have reported a synchrotron x-ray diffraction study of the volume collapse transition in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$}, as a function of temperature and pulsed magnetic field. In this alloy, the {$\gamma - \alpha$} transition is smooth but hysteretic, characterized by a $\sim 4 \%$ volume change and a dramatic reduction in structural correlation length. The collapsed $\alpha$ phase is shown to be unaffected by pulsed magnetic fields $\lesssim$ 28 Tesla, both in zero-field-cooled conditions and under repetitive pulsing on cooling. It is suggested that a deviation between truly field-cooled and zero-field-cooled behavior in {Ce$_{0.8}$La$_{0.1}$Th$_{0.1}$} may account for the disagreement between our measurements and previous bulk studies.\cite{jpcm_celath} This implies a glassy character to the low temperature phase. We have also shown that repeated thermal cycling through the {$\gamma - \alpha$} transition acts to disorder the sample on nanometer length scales, while suppressing the magnitude of the volume collapse transition. These measurements may constitute a microscopic measure of what has generally been referred to as ``sluggish dynamics'' in cerium alloys.\cite{jpcm_celath,fiskthompson}\n\nIt is our hope that these results will advance understanding of the exotic {$\gamma - \alpha$} transition in elemental cerium and its alloys. Specifically, we believe the data presented here highlight the strong effects of disorder in these systems. We gratefully acknowledge fruitful discussions with J. Lashley and B. Toby. Use of the Advanced Photon Source is supported by the DOE, Office of Science, under Contract No. DE-AC02-06CH11357. Pulsed magnet collaborations between Argonne and Tohoku University are supported by the ICC-IMR. HN acknowledges KAKENHI No. 23224009 from MEXT. JPCR acknowledges the support of NSERC of Canada. 
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGamma-ray observations of the Small Magellanic Cloud with the EGRET telescope onboard \nthe {\\it Compton Gamma Ray Observatory} have proved that the bulk of cosmic rays (CRs) \npropagating in the Milky Way are produced in Galactic sources \\cite{sre93}. \nObservations of the diffuse $\\gamma$-ray emission from our Galaxy allow to estimate \nthe total CR luminosity \\cite{dog02}:\n\\begin{equation}\nL_{\\rm CR} = L_\\gamma {x_\\gamma \\over x} \\sim 5 \\times 10^{40}~{\\rm erg~s}^{-1},\n\\label{eqvt1}\n\\end{equation}\nwhere $L_\\gamma \\sim 5 \\times 10^{39}$~erg~s$^{-1}$ is the total \nluminosity of diffuse high-energy ($>100$~MeV) $\\gamma$ rays emitted in the decay of \n$\\pi^0$ produced by CR interaction with the interstellar medium (ISM), $x_\\gamma \n\\sim 120$~g~cm$^{-2}$ is the mean grammage needed for a CR ion to produce \na $\\pi^0$ in the ISM and $x \\sim 12$~g~cm$^{-2}$ is the mean path length that CRs \ntraverse before escaping the Galaxy, which is determined from measurements of the CR\nchemical composition near Earth. In comparison, the total power supplied by Galactic \nsupernovae (SNe) is\n\\begin{equation}\nL_{\\rm SN} = E_{\\rm SN} R_{\\rm SN} \\approx 10^{42}~{\\rm erg~s}^{-1},\n\\label{eqvt2}\n\\end{equation}\nwhere $E_{\\rm SN} \\approx 1.5\\times 10^{51}$~erg is the approximate total ejecta kinetic \nenergy of a SN and $R_{\\rm SN} \\approx 2$ per century is the current epoch \nGalactic SN rate \\cite{fer98}. Thus, SNe have enough power to sustain the CR \npopulation against escape from the Galaxy and energy losses, if there is a mechanism \nfor channeling $\\sim 5$\\% of the SN mechanical energy release into relativistic \nparticles.\n\nDiffusive shock acceleration (DSA) at the blast waves generated by SN explosions can \nin principle produce the required acceleration efficiency, as well as the observed \npower-law spectrum of CRs \\cite{kry77,axf78,bel78,bla78}. In this model, a fraction \nof ambient particles entering the SN shock front can be accelerated to high energies \nduring the lifetime of a supernova remnant (SNR) by diffusing back and forth on \ncompressive magnetic fluctuations of the plasma flow on both sides of the shock. A \ncritical ingredient of the theory is the strength of the turbulent magnetic field \nin the shock acceleration region, which governs the acceleration rate and in turn \nthe maximum energy of the accelerated particles. If the turbulent field upstream of \nthe SN shock is similar to the preexisting field in the surrounding ISM ($B \\sim\n5$~$\\mu$G), the maximum total energy of an ion of charge $Z$ was estimated 25 years ago \nto be (for a quasi-parallel shock geometry) $E_{\\rm max} \\sim 10^{14} Z$~eV \\cite{lag83}. \nBut in more recent developments of the DSA theory, it is predicted that large-amplitude \nmagnetic turbulence is self-generated by streaming of accelerated particles in the \nshock region, such that the ambient magnetic field can be strongly amplified as part \nof the acceleration process \\cite{bel01,ama06,vla06}. In this case, protons might be \naccelerated in SNRs up to $3\\times10^{15}$~eV, i.e. the energy of the spectral \"knee\" \nabove which the measured all-particle CR spectrum shows a significant steepening. \nContributions of accelerated $\\alpha$-particles and heavier species might then explain \nthe existing CR measurements up to $\\sim$10$^{17}$~eV \\cite{ber07}. 
Above energies of \n10$^{18}$--10$^{19}$~eV, CRs are probably of extragalactic origin. \n\nAnother uncertain parameter of the DSA model is the fraction of total shocked \nparticles injected into the acceleration process. Although theoretical progress has \nbeen made in recent years \\cite{bla05}, the particle injection and consequently the \nacceleration efficiency are still not well known. However, theory predicts that for \nefficient acceleration the energy density of the relativistic nuclear component can \nbecome comparable to that of the postshock thermal component, in which case the \nbackreaction of energetic ions can significantly modify the shock structure and \nthe acceleration process can become highly nonlinear (e.g. \\cite{ber99}). In \nparticular, the compression ratio of a CR-modified shock is expected to be higher \nthan for a test-particle shock (i.e. when the accelerated particles have no \ninfluence on the shock structure). \nThis is because of both the softer equation of state of a relativistic (CR) gas and \nthe energy loss due to escape of accelerated particles from the shock region \n\\cite{dec00}. Moreover, the temperature of the shock-heated gas can be reduced if a \nsignificant fraction of the total available energy of the shock goes into relativistic \nparticles. Observations of these nonlinear effects \\cite{hug00,dec05,war05} provide \nindirect evidence for the efficient acceleration of ions in SN shock waves. \n\nThe acceleration of electrons in SNRs leaves no doubt, since we observe the nonthermal \nsynchrotron emission that these particles produce in the local magnetic field. Radio \nsynchrotron radiation, which in SNRs is emitted by GeV electrons, was discovered \nin the 1950's. More recent is the observation of X-ray synchrotron emission from \nyoung shell-type SNRs \\cite{koy95}, which is due to electrons accelerated to very high \nenergies, $E_e>$1~TeV. Thanks to the extraordinary spectroscopic-imaging capabilities of \nthe {\\it XMM-Newton} and {\\it Chandra} X-ray observatories, this nonthermal emission can \nnow be studied in great details and recent observations of SNRs with these satellites \nhave shed new light on the DSA rate and the maximum energy of the accelerated particles. \nThis is the subject of Section~2. \n\nIn Section~3, we discuss the origin of the TeV $\\gamma$-ray emission observed from \na handful of shell-type SNRs with atmospheric Cerenkov telescopes. For some objects, \nthe detected $\\gamma$-rays have been explained as resulting from $\\pi^0$ production in \nnuclear collisions of accelerated ions with the ambient gas. If this were true, this \nhigh-energy emission would be the first observational proof that CR ions are indeed \naccelerated in SN shock waves. However, the origin of the TeV $\\gamma$-rays \nemitted in SNRs is still a matter of debate, because at least in some cases the \nhigh-energy photons can also be produced by inverse Compton scattering of \ncosmic-microwave-background photons (and possibly optical and infrared interstellar \nphotons) by ultrarelativistic electrons. \n\nIn Section 4, we show that radio observations of extragalactic SNe can \nprovide complementary information on the DSA mechanism. As an example, we use a \nsemianalytic description of nonlinear DSA to model the radio light curves \nof SN 1993J. 
We choose this object because the set of radio data \naccumulated over the years \\cite{wei07} constitutes one of the most detailed sets of \nmeasurements ever established for an extragalactic SN in any wavelength range. We \nderive from these data constraints on the magnetic field strength in the environment \nof the expanding SN shock wave, the maximum energy of the accelerated particles, as \nwell as on the fractions of shocked electrons and protons injected into the \nacceleration process. Conclusions are given in Section 5.\n\n\\section{X-ray synchrotron emission from SNRs}\n\nTogether with the thermal, line-dominated X-ray emission from the shock-heated gas, \na growing number of SNRs show nonthermal, featureless emission presumably produced\nby ultrarelativistic electrons in the blast wave region via a synchrotron process. \nHigh-angular resolution observations made with the {\\it Chandra} and \n{\\it XMM-Newton} X-ray observatories have revealed very thin rims of nonthermal \nemission associated with the forward shock. In several cases, \nlike SN~1006 \\cite{koy95} and G347.3-0.5 \\cite{cas04}, the synchrotron component \ncompletely dominates the thermal X-ray emission. The measured power-law spectral \nindex of the X-ray synchrotron radiation is always much steeper than that of the \nnonthermal radio emission, which is consistent with expectation that the X-ray \ndomain probes the high-energy end of the accelerated electron distribution. The \ncomparison of radio and X-ray fluxes allows to determine the exponential cutoff \n(maximum) frequency of the synchrotron emission, $\\nu_c$, which is related to the \nmaximum energy of the accelerated electrons and the ambient magnetic field as \n(e.g. \\cite{sta06})\n\\begin{equation}\n\\nu_c = 1.26\\times10^{16} \\bigg({E_{e,\\rm max} \\over 10{\\rm~TeV}}\\bigg)^2 \n\\bigg({B \\over 10~\\mu{\\rm G}}\\bigg)~{\\rm Hz}.\n\\label{eqvt3}\n\\end{equation}\nDiffusive shock acceleration can only occur for particles whose acceleration rate \nis higher than their energy loss rate in the acceleration region. The maximum \nelectron energy, $E_{e,\\rm max}$, can be estimated by equating the synchrotron \ncooling time (e.g. \\cite{par06}),\n\\begin{equation}\n\\tau_{\\rm syn}(E_{e,\\rm max}) = {E_{e,\\rm max} \\over (dE\/dt)_{\\rm syn}} \\propto\nE_{e,\\rm max}^{-1} B^{-2},\n\\label{eqvt4}\n\\end{equation}\nwhere $(dE\/dt)_{\\rm syn}$ is the synchrotron loss rate at $E_{e,\\rm max}$, to \nthe acceleration time\n\\begin{equation}\n\\tau_{\\rm acc}(E_{e,\\rm max}) = {E_{e,\\rm max} \\over (dE\/dt)_{\\rm acc}} \\sim\n{\\kappa(E_{e,\\rm max}) \\over V_s^2},\n\\label{eqvt5}\n\\end{equation}\nwhere $(dE\/dt)_{\\rm acc}$ and $\\kappa(E_{e,\\rm max})$ are the acceleration \nrate and mean spatial diffusion coefficient of the electrons of \nenergy $E_{e,\\rm max}$ in the blast wave region and $V_s$ is the shock speed. \nWe have neglected here the dependence of $\\tau_{\\rm acc}$ on the shock \ncompression ratio (see \\cite{par06}). The value of $\\kappa(E_{e,\\rm max})$ \ndepends on the strength and structure of the turbulent magnetic field. The DSA \ntheory predicts that CRs efficiently excite large amplitude magnetic fluctuations \nupstream of the forward shock and that these fluctuations scatter CRs very \nefficiently \\cite{bel78,bel01,ama06,vla06}. 
It is therefore generally assumed that \nthe spatial diffusion coefficient is close to the Bohm limit:\n\\begin{equation}\n\\kappa \\lower.5ex\\hbox{$\\; \\buildrel > \\over \\sim \\;$} \\kappa_B={r_g v \\over 3},\n\\label{eqvt6}\n\\end{equation}\nwhere $v$ is the particle speed and $r_g$=$pc\/(QeB)$ the particle gyroradius, $p$ \nbeing the particle momentum, $c$ the speed of light, $Q$ the charge number ($Q=1$ \nfor electrons and protons), and $-e$ the electronic charge. Note that for \nultrarelativistic electrons, $\\kappa_B = r_g c\/3 \\propto E_e B^{-1}$. Equating \nequations~(\\ref{eqvt4}) and \n(\\ref{eqvt5}) and using equation~(\\ref{eqvt3}) to express $E_{e,\\rm max}$ as a \nfunction of $B$ and $\\nu_c$, we can write the ratio of the electron diffusion \ncoefficient at the maximum electron energy to the Bohm coefficient as:\n\\begin{equation}\n\\eta_\\kappa={\\kappa(E_{e,\\rm max}) \\over \\kappa_B} \\propto V_s^2 \\nu_c^{-1}.\n\\label{eqvt7}\n\\end{equation}\nThus, measurements of $V_s$ and $\\nu_c$ can allow to derive $\\eta_\\kappa$ \nwithout knowing the ambient magnetic field. Using this result, several \nrecent studies \\cite{sta06,par06,rey04,yam04} have shown that there are regions \nin young ($t<10^4$~yr) SNRs where {\\it acceleration occurs nearly as fast as the \nBohm theoretical limit} (i.e. $1<\\eta_\\kappa<10)$. This provides an important \nconfirmation of a key prediction of the DSA model. \n\nThe strength of the magnetic field in the shock acceleration region may be derived \nfrom the thickness of the nonthermal X-ray rims observed in young SNRs (e.g. \n\\cite{vin03,bal06,par06}). One of the two interpretations that have been proposed to \nexplain the thin X-ray filaments is that they result from fast synchrotron cooling \nof ultrarelativistic electrons transported downstream of the forward shock. In this \nscenario, the width of the filaments is set by the distance that the electrons cover \nbefore their synchrotron emission falls out of the X-ray band. The electron transport \nin the downstream region is due to a combination of diffusion and advection, whose \ncorresponding scale heights are \\cite{bal06} $l_{\\rm diff}=\n\\sqrt{\\kappa \\tau_{\\rm syn}}\\propto B^{-3\/2}$ (see eqs.[\\ref{eqvt4}] and [\\ref{eqvt6}]) \nand $l_{\\rm adv}=\\tau_{\\rm syn}V_s\/r_{\\rm tot}\\propto B^{-3\/2} E_X^{-1\/2} V_s\/r_{\\rm tot}$, \nrespectively. Here, $r_{\\rm tot}$ is the overall compression ratio of the shock and \n$E_X \\sim 5$~keV is the typical X-ray energy at which the rims are observed. Thus, by \ncomparing $l_{\\rm diff}$ and $l_{\\rm adv}$ to the measured width of the X-ray filaments \n(e.g. $l_{\\rm obs}\\approx 3''$ in Cas A which gives 0.05~pc for a distance of 3.4 kpc \n\\cite{vin03}) one can estimate the downstream magnetic field. Applications of this \nmethod to {\\it Chandra} and {\\it XMM-Newton} observations of young SNRs have shown that \n{\\it the magnetic field at the forward shock is amplified by about two orders of \nmagnitude} as compared with the average Galactic field strength. This conclusion has \nbeen recently strengthened by the observations of rapid time variations ($\\sim$1~yr) in \nbright X-ray filaments of the SNR RXJ1713.7-3946 (also named G347.3-0.5), which are \ninterpreted as resulting from fast synchrotron cooling of TeV electrons in a magnetic \nfield amplified to milligauss levels \\cite{uch07}. 
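\n\nTo make these orders of magnitude concrete, the short sketch below (an illustrative aside, not part of the analyses cited above) evaluates the electron energy implied by eq.~(\ref{eqvt3}) for synchrotron photons at $E_X\sim5$~keV, the corresponding synchrotron cooling time, and the two downstream length scales $l_{\rm diff}$ and $l_{\rm adv}$ of the cooling interpretation. The inputs $B=100~\mu$G, $V_s=5000$~km~s$^{-1}$ and $r_{\rm tot}=4$ are assumed, Cas~A-like values chosen only for illustration:\n\n\begin{verbatim}
import math

# cgs constants
c, sigma_T = 2.998e10, 6.652e-25        # cm/s; Thomson cross section (cm^2)
m_e_c2, q_e = 8.187e-7, 4.803e-10       # erg; esu
erg_per_eV, pc = 1.602e-12, 3.086e18

# assumed, Cas A-like inputs (illustrative only, not fitted values)
B, V_s, r_tot = 100e-6, 5.0e8, 4.0      # G, cm/s, compression ratio
E_X = 5.0e3 * erg_per_eV                # ~5 keV, where the rims are observed

# electron energy radiating at E_X, from eq. (3) inverted
nu_X = E_X / 6.626e-27
E_e = 10e12 * erg_per_eV * math.sqrt(nu_X / (1.26e16 * (B / 10e-6)))

# synchrotron cooling time: tau = E / (4/3 sigma_T c gamma^2 U_B)
gamma, U_B = E_e / m_e_c2, B**2 / (8.0 * math.pi)
tau_syn = E_e / ((4.0 / 3.0) * sigma_T * c * gamma**2 * U_B)

# Bohm diffusion and the two rim-width scales
kappa_B = (E_e / (q_e * B)) * c / 3.0
l_diff = math.sqrt(kappa_B * tau_syn)
l_adv = tau_syn * V_s / r_tot

print(E_e / erg_per_eV / 1e12, "TeV")   # ~30 TeV
print(tau_syn / 3.156e7, "yr")          # ~40 yr
print(l_diff / pc, l_adv / pc, "pc")    # both a few times 0.01 pc
\end{verbatim}\n\nWith these assumed inputs both scales come out at a few hundredths of a parsec, comparable to the $\approx0.05$~pc rim width quoted above for Cas~A, which is the sense in which the filament thickness points to postshock fields of order $10^{2}~\mu$G.\n\n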
Such a high magnetic field is \nlikely the result of a nonlinear amplification process associated with the efficient \nDSA of CRs \cite{bel01,ama06,vla06}. \n\nThe other interpretation that has been proposed to account for the thin X-ray filaments \nis that they reflect the spatial distribution of the ambient magnetic field rather than \nthe spatial distribution of the ultrarelativistic electrons \cite{poh05}. In this \nscenario, the magnetic field is thought to be amplified at the shock as well, but the \nwidth of the X-ray rims is not set by $l_{\rm diff}$ and $l_{\rm adv}$, but by the \ndamping length of the magnetic field behind the shock. In this case, the relation given \nabove between the rim thickness and the downstream magnetic field would not be valid. \nComparison of high-resolution X-ray and radio images could allow one to distinguish \nbetween the two interpretations, because the synchrotron energy losses are expected to \nbe relatively small for GeV electrons emitting in the radio band \cite{vin03}. Thus, if \nthe X-ray filaments are due to rapid synchrotron cooling of TeV electrons, the same \nstructures should not be seen in radio images. A recent detailed study of Tycho's SNR \nhas not allowed firm conclusions to be drawn on the role of magnetic damping behind the \nblast wave \cite{cas07}. Further high-resolution observations of SNRs at radio \nwavelengths would be very useful. \n\nThe findings that (1) DSA can proceed at nearly the maximum possible rate (i.e. the Bohm \nlimit) and (2) the magnetic field in the acceleration region can be strongly amplified \nsuggest that CR ions can reach higher energies in SNR shocks than previously estimated \nby Lagage \& Cesarsky \cite{lag83}. Thus, Berezhko \& V\"olk \cite{ber07} have argued \nthat protons can be accelerated in SNRs up to the energy of the knee in the CR \nspectrum, at $3\times10^{15}$~eV. But relaxing the assumption of Bohm diffusion used \nin the calculations of Berezhko \& V\"olk, Parizot et al. \cite{par06} have obtained \nlower maximum proton energies for five young SNRs. These authors have derived an upper \nlimit of $\sim8\times10^{14}$~eV on the maximum proton energy, $E_{p,\rm max}$, and \nhave suggested that an additional CR component is required to explain the CR data \nabove the knee energy. Recently, Ellison \& Vladimirov \cite{ell08} have pointed out \nthat the average magnetic field that determines the maximum proton energy can be \nsubstantially less than the field that determines the maximum electron energy. This is \nbecause electrons remain in the vicinity of the shock, where the magnetic field can be \nstrongly amplified, whereas protons of energies $E_p>E_{e,\rm max}$ diffuse farther into \nthe shock precursor region, where the field is expected to be weaker \n($E_{p,\rm max}>E_{e,\rm max}$ because radiation losses affect the electrons only). \nThis nonlinear effect of efficient DSA could reduce $E_{p,\rm max}$ relative \nto the value expected from test-particle acceleration. 
Nonetheless, recent calculations \nof $E_{p,\rm max}$ in the framework of nonlinear DSA models suggest that SNRs might \nwell produce CRs up to the knee \cite{ell08,bla07}.\n\n\section{TeV gamma-ray emission from SNRs}\n\nAtmospheric Cerenkov telescopes have now observed high-energy $\gamma$ rays from six \nshell-type SNRs: Cas A with HEGRA \cite{aha01} and MAGIC \cite{alb07a}, \nRX~J1713.7-3946 with CANGAROO \cite{eno02} and HESS \cite{aha07a}, RX~J0852.0-4622 \n(Vela Junior) with CANGAROO-II \cite{kat05} and HESS \cite{aha07b}, RCW~86 with HESS \n\cite{hop07}, IC~443 with MAGIC \cite{alb07b}, and very recently SN~1006 in deep HESS \nobservations \cite{aha05b}. In addition, four $\gamma$-ray sources discovered in the \nGalactic plane survey performed with HESS are spatially coincident with SNRs \cite{aha06a}. \n\nWith an angular resolution of $\sim0.06^\circ$ for individual $\gamma$ rays \n\cite{aha07a,aha07b}, HESS has provided detailed images above 100 GeV of the extended \nSNRs RX~J1713.7-3946 and RX~J0852.0-4622 (their diameters are $\sim1^\circ$ and \n$\sim2^\circ$, respectively). In both cases the images show a shell-like structure, \nand there is a striking correlation between the morphology of the $\gamma$-ray \nemission and the morphology previously observed in X-rays. For both objects the X-ray \nemission is completely dominated by nonthermal synchrotron radiation. The similarity of \nthe $\gamma$-ray and X-ray images thus suggests that the high-energy emission might also \nbe produced by ultrarelativistic electrons, via inverse Compton (IC) scattering off \ncosmic-microwave-background (CMB), optical-starlight and infrared photons. The \n$\gamma$-ray radiation would then be produced by electrons of energy (in the Thomson \nlimit)\n\begin{equation}\nE_e \sim \bigg({3 \over 4}{E_\gamma \over E_\star}\bigg)^{1\/2} m_e c^2,\n\label{eqvt8}\n\end{equation}\nwhere $E_\star$ is the typical energy of the seed photons, $E_\gamma$ is the average \nfinal energy of the upscattered photons and $m_e$ is the electron mass. For the CMB, \nwhose contribution to the total IC emission of SNRs generally dominates, $E_\star \sim \n3kT_{\rm CMB}=7.1\times10^{-4}$~eV ($k$ is the Boltzmann constant and \n$T_{\rm CMB}=2.73$~K). Significant $\gamma$-ray emission beyond $E_\gamma=30$~TeV has \nbeen detected from RX~J1713.7-3946 \cite{aha07a}. Thus, an IC origin for the high-energy \nemission would imply that electrons are accelerated to more than 90~TeV in this object. \nIn more accurate calculations that take into account the contributions of the optical \nand infrared interstellar radiation fields, the maximum electron energy is found to be \n$E_{e,\rm max} \sim 15$--40~TeV \cite{por06}. This result is consistent with the value \nof $E_{e,\rm max}$ derived from the width of X-ray filaments in RX~J1713.7-3946, \n$E_{e,\rm max}=36$~TeV \cite{par06}. \n\nAssuming that the same population of ultrarelativistic electrons produces both the \nobserved TeV $\gamma$-rays and nonthermal X-rays, the mean magnetic field in the \ninteraction region can be readily estimated from the ratio of synchrotron to IC \nluminosities:\n\begin{equation}\n{L_{\rm syn} \over L_{\rm IC}}={U_B \over U_{\rm rad}}={B^2 \over 8\pi U_{\rm rad}},\n\label{eqvt9}\n\end{equation}\nwhere $U_B=B^2 \/ (8\pi)$ is the magnetic field energy density and $U_{\rm rad}$ is \nthe total energy density of the seed photon field. 
With \n$U_{\\rm CMB}\\approx 0.25$~eV~cm$^{-3}$ for the CMB and \n$U_{\\rm IR}\\approx 0.05$~eV~cm$^{-3}$ for the interstellar infrared background (e.g. \n\\cite{aha06b}), we have $U_{\\rm rad}\\approx 0.3$~eV~cm$^{-3}$ (we neglect here the \ncontribution to the IC emission of the optical starlight background). Then, from the\nmeasured ratio $L_{\\rm syn} \/ L_{\\rm IC}\\approx 10$ for RX~J1713.7-3946 \\cite{aha06b}, \nwe obtain $B\\approx 11~\\mu$G. This value is significantly lower than the downstream \nmagnetic field estimated from the observed X-ray rims: $B\\sim80~\\mu$G \\cite{par06} (or \n$B>65~\\mu$G in Ref.~\\cite{ber06}). In other words, if the magnetic\nfield in the electron interaction region is as high as derived from the width of the \nX-ray filaments, IC radiation cannot account for the TeV $\\gamma$-ray data. \n\nThe magnetic field amplification is the main argument to favor a hadronic origin for \nthe high-energy $\\gamma$ rays produced in RX~J1713.7-3946 and other SNRs \\cite{ber06}. \nThe shape of the measured $\\gamma$-ray spectrum below $\\sim$1~TeV has also been used\nto advocate that the high-energy emission might not be produced by IC scattering \n\\cite{aha06b}, but the IC calculations of Ref.~\\cite{por06} reproduce the broadband \nemission of RX~J1713.7-3946 reasonably well. In the hadronic scenario, the TeV \n$\\gamma$ rays are due to nuclear collisions of accelerated protons and heavier \nparticles with ambient ions, which produce neutral pions $\\pi^0$ that decay in 99\\% \nof the cases into two photons with energies of 67.5~GeV each in the $\\pi^0$ rest \nframe ($2\\times67.5$~GeV is the $\\pi^0$ mass). At TeV energies in the observer rest \nframe, the spectrum of the $\\pi^0$-decay $\\gamma$ rays essentially reproduces,\nwith a constant scaling factor, the one of the parent ultrarelativistic particles. \nThe accelerated proton energies can be estimated from the $\\gamma$-ray spectrum \nas $E_p \\sim E_\\gamma\/0.15$ \\cite{aha07a}. The detection of $\\gamma$ rays with \n$E_\\gamma>30$~TeV in RX~J1713.7-3946 thus implies that protons are accelerated to more \nthan 200~TeV, which is still about an order of magnitude below the energy of the knee. \n\nHowever, the hadronic scenario is problematic for RX~J1713.7-3946. Due to the lack of \nthermal X-ray emission, the remnant is thought to expand mostly in a very diluted \nmedium of density $n<0.02$~cm$^{-3}$ \\cite{cas04}. It is likely that the SN \nexploded in a bubble blown by the wind of the progenitor star. The flux of $\\gamma$ \nrays produced by pion decay is proportional to the product of the number of \naccelerated protons and the ambient medium density. Thus, the total energy contained\nin CR protons would have to be large to compensate the low density of the ambient \nmedium. From the $\\gamma$-ray flux measured with HESS from the center of the remnant, \nPlaga \\cite{pla08} has recently estimated that the total CR-proton energy would have to \nbe $>4\\times10^{51}$~erg! Katz \\& Waxman \\cite{kat08} also argue against a hadronic \norigin for the TeV emission from RX~J1713.7-3946. They show that it would require that \nthe CR electron-to-proton abundance ratio at a given relativistic energy \n$K_{\\rm ep}\\lsim2\\times10^{-5}$, which is inconsistent with the limit they derived from \nradio observations of SNRs in the nearby galaxy M33, $K_{\\rm ep}\\gsim10^{-3}$. 
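\n\nA compact numerical sketch of the two estimates used in this discussion may be helpful; it is given only as an illustration and uses the values quoted above, namely eq.~(\ref{eqvt8}) with $E_\star=7.1\times10^{-4}$~eV and $E_\gamma=30$~TeV, and eq.~(\ref{eqvt9}) with $U_{\rm rad}\approx0.3$~eV~cm$^{-3}$ and $L_{\rm syn}\/L_{\rm IC}\approx10$:\n\n\begin{verbatim}
import math

# eq. (8): electron energy needed to upscatter CMB photons to E_gamma
m_e_c2 = 0.511e6                 # eV
E_star = 7.1e-4                  # eV, ~3 k T_CMB
E_gamma = 30e12                  # eV, highest photons seen from RX J1713.7-3946
E_e = math.sqrt(0.75 * E_gamma / E_star) * m_e_c2
print(E_e / 1e12, "TeV")         # ~90 TeV

# eq. (9): mean field from the synchrotron-to-IC luminosity ratio
U_rad = 0.3 * 1.602e-12          # erg/cm^3 (CMB + infrared backgrounds)
ratio = 10.0                     # L_syn / L_IC measured for RX J1713.7-3946
B = math.sqrt(8.0 * math.pi * ratio * U_rad)
print(B * 1e6, "microgauss")     # ~11 muG
\end{verbatim}\n\nThis reproduces the $\sim90$~TeV and $B\approx11~\mu$G figures quoted above.\n\n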
Moreover, radio \nand X-ray observations of RX~J1713.7-3946 suggest that the blast wave has recently hit \na complex of molecular clouds located in the western part of the remnant \\cite{cas04}. \nThe ambient medium density in this region has been estimated to be $\\sim300$~cm$^{-3}$. \nIn the hadronic scenario, a much higher $\\gamma$-ray flux would be expected in this \ndirection, contrary to the observations \\cite{pla08}. Thus, the $\\gamma$-ray morphology \nrevealed by HESS practically rules out pion production as the main contribution to \nthe high-energy radiation of RX~J1713.7-3946.\n\nBut then, why the magnetic field given by the ratio of synchrotron to IC luminosities \n(eq.~[\\ref{eqvt9}]) is inconsistent with the field derived from the X-ray filaments? \nThis suggests that the filamentary structures observed with {\\it Chandra} and \n{\\it XMM-Newton} are localized regions where the magnetic field is enhanced in \ncomparison with the mean downstream field \\cite{kat08}. It is possible that the \nmagnetic field is amplified at the shock as part of the nonlinear DSA process, but then \nrapidly damped behind the blast wave \\cite{poh05}. The mean field downstream the shock\nwould then not be directly related to the observed thickness of the X-ray rims. \n\nAlthough the unambiguous interpretation of the TeV observations of shell-type SNRs \nremains uncertain, the high-energy $\\gamma$-ray emissions from RX~J1713.7-3946 and \nRX~J0852.0-4622 are probably produced by IC scattering \\cite{kat08}. For Cas~A, the case \nfor a hadronic origin of the TeV radiation may be more compelling, as the density of the \nambient medium is higher \\cite{vin03}. The new source MAGIC~J0616+225 \\cite{alb07b} which \nis spatially coincident with IC~443 may also be produced by pion decay. IC~443 is one of \nthe best candidates for a $\\gamma$-ray source produced by interactions between CRs \naccelerated in a SNR and a nearby molecular cloud \\cite{tor03}. Hopefully the upcoming\n{\\it GLAST} satellite will allow a clear distinction between hadronic and electronic \n$\\gamma$-ray processes in these objects. With the expected sensitivity of the LAT \ninstrument between 30 MeV and 300 GeV, {\\it GLAST} observations of SNRs should \ndifferentiate between pion-decay and IC spectra \\cite{fun08}.\n\n\\section{Radio emission and nonlinear diffusive shock acceleration in SN 1993J}\n\nAbout 30 extragalactic SNe have now been detected at radio wavelengths\\footnote{See \nhttp:\/\/rsd-www.nrl.navy.mil\/7213\/weiler\/kwdata\/rsnhead.html.}. In a number of cases, \nthe radio evolution has been monitored for years after outburst. It is generally \naccepted that the radio emission is nonthermal synchrotron radiation from relativistic \nelectrons accelerated at the SN shock wave \\cite{che82}. At early epochs the radio flux \ncan be strongly attenuated by free-free absorption in the wind lost from the progenitor \nstar prior to the explosion. Synchrotron self-absorption can also play a role in some \nobjects \\cite{che98}. The radio emission from SNe can provide unique information on \nthe physical properties of the circumstellar medium (CSM) and the final stages of \nevolution of the presupernova system \\cite{wei02}. We show here that this emission \ncan also be used to study critical aspects of the DSA mechanism. \n\nThe type IIb SN~1993J, which exploded in the nearby galaxy M81 at a distance of \n$3.63\\pm0.34$~Mpc, is one of the brightest radio SNe ever detected (see \\cite{wei07} \nand references therein). 
Very long baseline interferometry (VLBI) imaging has revealed \na decelerating expansion of a shell-like radio source, which is consistent with the\nstandard model that the radio emission arises from a region behind the forward shock \npropagating into the CSM. The expansion has been found to be self-similar \n\\cite{mar97}, although small departures from a self-similar evolution have been \nreported \\cite{bar00}. The velocity of the forward shock can be estimated from the \nmeasured outer radius of the radio shell, i.e. the shock radius $r_s$, as \n$V_s=dr_s\/dt=3.35\\times10^4~t_d^{-0.17}$~km~s$^{-1}$, where $t_d$ is the time after \nshock breakout expressed in days. \n\nExtensive radio monitoring of the integrated flux density of SN~1993J has been \nconducted with the Very Large Array and several other radio telescopes \\cite{wei07}. \nFigure~\\ref{figvt1} shows a set of measured light curves at 0.3~cm (85--110 GHz), \n1.2~cm (22.5~GHz), 2~cm (14.9~GHz), 3.6~cm (8.4~GHz), 6~cm (4.9~GHz), and 20~cm \n(1.4~GHz). We see that at each wavelength the flux density first rapidly increases \nand then declines more slowly as a power in time (the data at 0.3~cm do not allow to \nclearly identify this behavior). The radio emission was observed to suddenly decline \nafter day $\\sim$3100 (not shown in Fig.~\\ref{figvt1}), which is interpreted in terms \nof an abrupt decrease of the CSM density at radial distance from the progenitor \n$r\\sim3\\times10^{17}$~cm \\cite{wei07}. The maximum intensity is reached first at lower \nwavelengths and later at higher wavelengths, which is characteristic of absorption \nprocesses. For SN~1993J, both free-free absorption in the CSM and synchrotron \nself-absorption are important \\cite{che98,fra98,wei07}. To model light curves of radio \nSNe, Weiler et al. \\cite{wei02,wei07} have developed a semi-empirical formula that \ntakes into account these two absorption mechanisms. For SN~1993J, the best fit to the \ndata using this semi-empirical model ({\\it dotted blue curves} in Fig.~\\ref{figvt1}) \nrequires nine free parameters \\cite{wei07}. \n\n\\begin{figure}\n\\includegraphics[width=1\\textwidth]{f1.eps}\n\\caption{Radio light curves for SN~1993J at 0.3, 1.2, 2, 3.6, 6, and 20~cm. The dotted\nblue lines represent the best fit semi-empirical model of Ref.~\\cite{wei07}. The \ndashed red (resp. solid green) lines show results of the present model for \n$\\eta_{\\rm inj}^p=\\eta_{\\rm inj}^e=10^{-5}$ (resp. \n$\\eta_{\\rm inj}^p=2\\times10^{-4}$ and $\\eta_{\\rm inj}^e=1.4\\times10^{-5}$; see text).\nThe data are from Ref.~\\cite{wei07} and references therein. \n}\n\\label{figvt1}\n\\end{figure}\n\nI have developed a model for the radio emission of SN~1993J, which is inspired by \nprevious works on the morphology of synchrotron emission in young SNRs \n\\cite{cas05,ell05}. The model will be presented in detail in a forthcoming publication \n\\cite{tat08} and I only give here broad outlines. First, the density profile for the \nCSM is taken as $\\rho_{\\rm CSM}(r)=\\rho_0(r\/r_0)^{-2}$ as expected for a constant wind \nmass-loss rate and terminal velocity. Here $r_0=3.49\\times10^{14}$~cm is the shock \nradius at $t=1$~day after outburst and $\\rho_0$ is a free parameter. Evidence for a \nflatter CSM density profile has been advocated ($\\rho_{\\rm CSM}\\propto r^{-s}$, \nwith $s \\sim 1.6$; see \\cite{wei07} and references therein), based on the measured time \ndependence of the optical depth to free-free absorption in the CSM, $\\tau_{\\rm ff}$. 
\nHowever, Fransson \\& Bj\\\"ornsson \\cite{fra98} have shown that the time \nevolution of $\\tau_{\\rm ff}$ can be explained by a decrease of the CSM temperature with \n$r$ together with the standard $r^{-2}$ distribution for the density. The results of the\npresent work provide support to this latter interpretation \\cite{tat08}. Thus, in the \npresent model, free-free absorption is calculated assuming the time dependence of \n$\\tau_{\\rm ff}$ obtained in Ref.~\\cite{fra98} and using the best-fit value of $\\rho_0$ \n(or more precisely $\\rho_0^2$) as a normalization factor. \n\nBecause synchrotron self-absorption is important in SN~1993J, the strength and evolution \nof the mean magnetic field in the region of the radio emission can be estimated from \nthe measured peak flux at different wavelengths \\cite{che98}. Using equation~(12) of \nRef.~\\cite{che98}, I obtain from the data at 1.2, 2, 3.6, 6, and 20~cm: \n\\begin{equation}\n\\langle B \\rangle = (46 \\pm 19) \\alpha^{-2\/9} t_d^{-1.01\\pm0.09}~{\\rm G}, \n\\label{eqvt10}\n\\end{equation}\nwhere $\\alpha$ is the ratio of the total energy density in relativistic electrons to\nthe magnetic energy density. The errors include the uncertainty in the contribution of \nfree-free absorption. This time dependence of $\\langle B \\rangle$ is close to that \nexpected if the magnetic field at the shock is amplified by a constant factor from \nthe available kinetic energy density\n(see \\cite{bel01}). In this case, one expects $B^2\/8\\pi \\propto \\rho_{\\rm CSM} V_s^2$,\nwhich gives $B\\propto t^{-1}$ for $\\rho_{\\rm CSM}\\propto r^{-2}$. Note that the \nflatter CSM density profile supported by Weiler et al. \\cite{wei07}, \n$\\rho_{\\rm CSM}\\propto r^{-1.6}$, would imply for the assumed scaling \n$B\\propto \\rho_{\\rm CSM}^{1\/2} V_s \\propto t^{-0.83}$, which is somewhat inconsistent \nwith the data. \n\nBased on the measured time dependence of $\\langle B \\rangle$, the immediate postshock \nmagnetic field is assumed to be of the form $B_d=B_0t_d^{-1}$, where $B_0$ is a free\nparameter expected to be in the range $\\sim$100--600~G for a typical value of $\\alpha$ \nin the range $\\sim$10$^{-5}$--10$^{-2}$ (see eq.~[\\ref{eqvt10}]). The evolution of \nthe magnetic field behind the shock is then calculated from the assumption that the \nfield is carried by the flow, frozen in the plasma, so that the parallel and \nperpendicular magnetic field components evolve conserving flux (see \\cite{cas05} and \nreferences therein). Results obtained with the alternative assumption that the \nmagnetic field is rapidly damped behind the shock wave will be given in \\cite{tat08}. \n\nThe hydrodynamic evolution of the plasma downstream the forward shock is calculated \nfrom the two-fluid, self-similar model of Chevalier \\cite{che83}, which takes into \naccount the effects of CR pressure on the dynamics of \nthe thermal gas. The overall structure of SNRs can be described by self-similar \nsolutions, if the initial density profiles in the ejected material (ejecta) \nand in the ambient medium have power-law distributions, and if the ratio of \nrelativistic CR pressure to total pressure at the shock front is constant \\cite{che83}. \nThe backreaction of energetic ions can strongly modify the shock structure of young \nSNRs, such as e.g. Kepler's remnant \\cite{dec00}. 
But the situation is different for \nSN~1993J because of the much higher magnetic field in the shock precursor region, \nwhich implies that energy is very efficiently transferred from the CRs to the thermal \ngas via Alfv\'en wave dissipation \cite{ber99}. The resulting increase in the gas \npressure ahead of the viscous subshock is found to limit the overall compression ratio, \n$r_{\rm tot}$, to values close to 4 (i.e. the standard value for a test-particle \nstrong shock) even for efficient DSA. Thus, the hydrodynamic evolution of SN~1993J \ncan be safely calculated in the test-particle, self-similar approximation. \n\nBoth the energy spectra of the accelerated particles and the thermodynamic properties of \nthe gas just behind the shock front (i.e. the boundary conditions for the self-similar \nsolutions of the hydrodynamic evolution) are calculated with the semianalytic model of \nnonlinear DSA developed by Berezhko \& Ellison \cite{ber99} and Ellison et al. \n\cite{ell00}. However, a small change to the model has been made: the Alfv\'en waves \nare assumed to propagate isotropically in the precursor region and not only in the \ndirection opposite to the plasma flow (i.e. eqs.~(52) and (53) of Ref.~\cite{ber99} are \nnot used). This is a reasonable assumption given the strong, nonlinear magnetic field \namplification \cite{bel01}. Although the semianalytic model strictly applies to \nplane-parallel, steady-state shocks, it has been successfully used in Ref.~\cite{ell00} \nfor evolving SNRs. The main parameter of this model, $\eta_{\rm inj}^p$, is the fraction \nof the total shocked protons injected from the postshock thermal pool into the DSA \nprocess, namely those with momentum $p \geq p_{\rm inj}$. The work of Ref.~\cite{bla05} \nallows us to accurately relate the injection momentum $p_{\rm inj}$ to \n$\eta_{\rm inj}^p$. Similarly, we define $\eta_{\rm inj}^e$ for the electron injection. \nThe latter parameter is not important for the shock structure, because the fraction of \ntotal particle momentum carried by electrons is negligible, but it determines the \nbrightness of optically-thin synchrotron emission for a given magnetic field strength. \nThe electrons accelerated at the shock experience adiabatic and synchrotron energy \nlosses as they are advected downstream with the plasma flow. The spectral evolution \ncaused by these losses is calculated as in Ref.~\cite{rey98}. Finally, once both the \nnonthermal electron distribution and the magnetic field in a given shell of material \nbehind the forward shock have been determined, the synchrotron emission from that shell \ncan be calculated \cite{pac70}. The total radio emission along the line of sight is \nthen obtained from full radiative transfer calculations that include synchrotron \nself-absorption. \n\nWith the set of assumptions given above, the model has four free parameters: $\rho_0$, \n$B_0$, $\eta_{\rm inj}^p$ and $\eta_{\rm inj}^e$. While the product $\rho_0 \times \n\eta_{\rm inj}^e$ is important for the intensity of optically-thin synchrotron \nemission, the CSM density normalization $\rho_0$ also determines the level of free-free \nabsorption. The main effect of changing $B_0$ is to shift the radio light curves in time, \nbecause the turn-on from optically-thick to optically-thin synchrotron emission is \ndelayed when the magnetic field is increased. The proton injection parameter \n$\eta_{\rm inj}^p$ determines the shock structure and hence influences the shape of the \nelectron spectrum. 
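\n\nBefore turning to the computed light curves, the scalings assumed above can be checked with a few lines of arithmetic. The sketch below is illustrative only: it integrates the measured shock-velocity law $V_s=3.35\times10^4\,t_d^{-0.17}$~km~s$^{-1}$ to recover $r_0=r_s(1~{\rm day})\approx3.5\times10^{14}$~cm, and confirms that $B\propto\rho_{\rm CSM}^{1\/2}V_s$ falls off as $t^{-1}$ for the $r^{-2}$ wind profile (and as $t^{-0.83}$ for $s=1.6$):\n\n\begin{verbatim}
import numpy as np

def r_s(t_d):
    """Shock radius in cm from V_s = 3.35e4 t_d^-0.17 km/s (power-law integral)."""
    V0 = 3.35e4 * 1e5 * 86400.0           # cm per day at t_d = 1
    return V0 * t_d**0.83 / 0.83

print(r_s(1.0))                           # ~3.5e14 cm, the r_0 quoted above

# postshock field scaling B ~ rho_CSM^(1/2) V_s for rho_CSM ~ r^-s
t = np.array([10.0, 100.0, 1000.0])
for s in (2.0, 1.6):
    B_rel = r_s(t) ** (-s / 2.0) * t ** (-0.17)
    slope = np.polyfit(np.log(t), np.log(B_rel), 1)[0]
    print(s, round(slope, 2))             # -1.0 for s = 2, -0.83 for s = 1.6
\end{verbatim}\n\nThe $t^{-1}$ behaviour is the scaling favoured by the self-absorption estimate of eq.~(\ref{eqvt10}).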
\n\nCalculated radio light curves are shown in Fig.~\\ref{figvt1} for \n$\\rho_0=1.8\\times10^{-15}$~g~cm$^{-3}$, $B_0=400$~G, and two sets of injection \nparameters: $\\eta_{\\rm inj}^p=\\eta_{\\rm inj}^e=10^{-5}$ (test-particle case), and \n$\\eta_{\\rm inj}^p=2\\times10^{-4}$ and $\\eta_{\\rm inj}^e=1.4\\times10^{-5}$. We see that \nin the test-particle case, the decline of the optically-thin emission with time is too\nslow as compared to the data, except at 0.3~cm. The CR-modified shock provides a better \noverall fit to the measured flux densities, although significant deviations from the \ndata can be observed. In particular, we see that the calculated light curve at 0.3~cm \nfalls short of the data at this wavelength. It is possible that the\ndeviations of the best-fit curves from the data partly arise from the \napproximations used in the DSA model of Ref.~\\cite{ber99}, in which the nonthermal \nphase-space distribution function $f(p)$ is described as a three-component power law. \nBut it is also possible that it tells us something about the magnetic field evolution \nin the downstream region. The spatial distribution of the postshock magnetic field will \nbe studied in Ref.~\\cite{tat08} by comparing calculated synchrotron profiles with the \nobserved average profile of the radio shell. \n\n\\begin{figure}\n\\includegraphics[width=1\\textwidth]{f2.eps}\n\\caption{Calculated postshock phase space distribution functions, $f(p)$, vs. kinetic \nenergy, at day 100 after shock breakout. Following Ref.~\\cite{ber99}, $f(p)$ \nhas been multiplied by $[p\/(m_pc)]^4$ to flatten the spectra, and by $[(m_pc)^3\/n_u]$ to \nmake them dimensionless ($m_p$ is the proton mass and $n_u$ the proton number density \nahead of the shock precursor). The red lines are for protons and the black lines for \nelectrons. The two sets of injection parameters $\\eta_{\\rm inj}^p$ and $\\eta_{\\rm inj}^e$ \nare those used for the synchrotron calculations shown in Fig.~1.}\n\\label{figvt2}\n\\end{figure}\n\nFor SN~1993J, the main effect of the CR pressure is to reduce the compression ratio \nof the subshock, $r_{\\rm sub}$, whereas the overall compression ratio $r_{\\rm tot}$ \nremains nearly constant (see above). For $\\eta_{\\rm inj}^p=2\\times10^{-4}$, $r_{\\rm sub}$ \nis found to decrease from 3.58 to 3.35 between day 10 and day 3100 after outburst, whereas \n$r_{\\rm tot}$ stays between 4 and 4.04. Such a shock modification affects essentially \nthe particles of energies $0$ and $k_z\\Sigma<0$. Particularly, in the region $k_z\\Sigma>0$ magnetic field component $\\overline{B}^{(+),\\xi}_{\\bm k}$ is ahead of component $\\overline{B}^{(+),\\eta}_{\\bm k}$ in phase by $\\pi\/4$, in the region $k_z\\Sigma<0$ component $\\overline{B}^{(+),\\eta}_{\\bm k}$ is ahead of component $\\overline{B}^{(+),\\xi}_{\\bm k}$ in phase by $\\pi\/4$.\n\nNonstationary dynamo experiments for conducting helical flow were obtained at work \\cite{frick2002non}. Such flow with large enough magnetic Reynolds number and small magnetic Prandtl number ($Pr_m \\sim 10^{-5}$) was accelerated in toroidal chanel and then was braked up abruptly. Numerical simulation of the magnetic field evolution was performed for a helical flow with a profile extracted from the experimental data. 
It was shown that magnetic field increased with time at some values of parameters.\n\n\n\n\n\n\n\n\\textbf{Conclusion.} To summarise we have quantitatively characterized $\\alpha$-effect in three-dimensional coherent geostrophic helical vortex of conducting liquid with external magnetic field. We calculated each element of $\\alpha$-matrix that describes $\\alpha$-effect.\n\nThe results were obtained in considered region $\\operatorname{Ro}\\lesssim 1, \\,\\ Pr_m \\lesssim 1$ due to physical reasons of coherent vortexes existence. The expression for elements of $\\alpha$-matrix was obtained in the framework of the theory developed in \\cite{kolokolov2020structure} and \\cite{ogorodnikov2021velocity}. The elements of $\\alpha$-matrix strongly depends on order relation between Rossby $\\operatorname{Ro}$ and magnetic Prandtl $Pr_m$ numbers. The interplay of them leads to the interplay between time scales $\\tau_*$ and $\\tau_0$ that characterizes kinematic viscosity and rotation correspondingly.\n\n\\textbf{Acknowledgments.} This work was supported by the {Russian Science Foundation, Grant No. 20-12-00383.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecently much attention has been given to lower dimensional gauge theories.\nSuch remarkable results as the chiral symmetry breaking \\cite{1}, quantum\nHall effect \\cite{2}, spontaneously broken Lorentz invariance by the\ndynamical generation of a magnetic field \\cite{3}, and the connection\nbetween non-perturbative effects in low-energy strong interactions and QCD$%\n_{2}$ \\cite{4}, show the broad range of applicability of these theories.\n\nIn particular, 2+1 dimensional gauge theories with fractional statistics\n-anyon systems \\cite{4a}- have been extensively studied. One main reason for\nsuch an interest has been the belief that a strongly correlated electron\nsystem in two dimensions can be described by an effective field theory of\nanyons \\cite{5}, \\cite{5a}. Besides, it has been claimed that anyons could\nplay a basic role in high-T$_{C}$ superconductivity \\cite{5a}-\\cite{6b}. It\nis known \\cite{a} that a charged anyon system in two spatial dimensions can\nbe modeled by means of a 2+1 dimensional Maxwell-Chern-Simons (MCS) theory.\nAn important feature of this theory is that it violates parity and\ntime-reversal invariance. However, at present no experimental evidences of P\nand T violation in high-T$_{C}$ superconductivity have been found. It should\nbe pointed out, nevertheless, that it is possible to construct more\nsophisticated P and T invariant anyonic models\\cite{6a}. In any case,\nwhether linked to high-T$_{C}$ superconductivity or not, the anyon system is\nan interesting theoretical model in its own right.\n\nThe superconducting behavior of anyon systems at $T=0$ has been investigated\nby many authors \\cite{6}-\\cite{15a}. Crucial to the existence of anyon\nsuperconductivity at $T=0$ is the exact cancellation between the bare and\ninduced Chern-Simons terms in the effective action of the theory.\n\nAlthough a general consensus exists regarding the superconductivity of anyon\nsystems at zero temperature, a similar consensus at finite temperature is\nyet to be achieved \\cite{8}-\\cite{16}. Some authors (see ref. \\cite{9}) have\nconcluded that the superconductivity is lost at $T\\neq 0$, based upon the\nappearance of a temperature-dependent correction to the induced Chern-Simons\ncoefficient that is not cancelled out by the bare term. In ref. 
\\cite{11} it\nis argued, however, that this finite temperature correction is numerically\nnegligible at $T<200$ $K$, and that the main reason for the lack of a\nMeissner effect is the development of a pole $\\sim \\left( \\frac{1}{{\\bf k}%\n^{2}}\\right) $ in the polarization operator component $\\Pi _{00}$ at $T\\neq\n0 $. There, it is discussed how the existence of this pole leads to a so\ncalled partial Meissner effect with a constant magnetic field penetration\nthroughout the sample that appreciably increases with temperature. On the\nother hand, in ref. \\cite{8}, it has been independently claimed that the\nanyon model cannot superconduct at finite temperature due to the existence\nof a long-range mode, found inside the infinite bulk at $T\\neq 0$. The long\nrange mode found in ref. \\cite{8} is also a consequence of the existence of\na pole $\\sim \\left( \\frac{1}{{\\bf k}^{2}}\\right) $ in the polarization\noperator component $\\Pi _{00}$ at $T\\neq 0$.\n\nThe apparent lack of superconductivity at temperatures greater than zero has\nbeen considered as a discouraging property of anyon models. Nevertheless, it\nmay be still premature to disregard the anyons as a feasible solution for\nexplaining high -T$_{c}$ superconductivity, at least if the reason\nsustaining such a belief is the absence of the Meissner effect at finite\ntemperature. As it was shown in a previous paper \\cite{16}, the lack of a\nMeissner effect, reported in ref. \\cite{11} for the case of a half-plane\nsample as a partial Meissner effect, is a direct consequence of the omission\nof the sample boundary effects in the calculations of the minimal solution\nfor the magnetic field within the sample. To understand this remark we must\ntake into account that the results of ref. \\cite{11} were obtained by\nfinding the magnetization in the bulk due to an externally applied magnetic\nfield at the boundary of a half-plane sample. However, in doing so, a\nuniform magnetization was assumed and therefore the boundary effects were\nindeed neglected. Besides, in ref. \\cite{11} the field equations were solved\nconsidering only one short-range mode of propagation for the magnetic field,\nwhile as has been emphasized in our previous letter \\cite{16}, there is a\nsecond short-range mode whose qualitative contribution to the solutions of\nthe field equations cannot be ignored.\n\nIn the present paper we study the effects of the sample's boundaries in the\nmagnetic response of the anyon fluid at finite temperature. This is done by\nconsidering a sample shaped as an infinite strip. When a constant and\nhomogeneous external magnetic field, which is perpendicular to the sample\nplane, is applied at the boundaries of the strip, two different magnetic\nresponses, depending on the temperature values, can be identified. At\ntemperatures smaller than the fermion energy gap inherent to the\nmany-particle MCS model ($T\\ll \\omega _{c}$), the system exhibits a Meissner\neffect. In this case the magnetic field cannot penetrate the bulk farther\nthan a very short distance ($\\overline{\\lambda }\\sim 10^{-5}cm$ for electron\ndensities characteristic of the high -T$_{c}$ superconductors and $T\\sim 200$\n$K$). On the other hand, as it is natural to expect from a physical point of\nview, when the temperatures are larger than the energy gap ($T\\gg \\omega\n_{c} $) the Meissner effect is lost. 
In this temperature region a periodic\ninhomogeneous magnetic field is present within the bulk.\n\nThese results, together with those previously reported in ref. \\cite{16},\nindicate that, contrary to some authors' belief, the superconducting\nbehavior (more precisely, the Meissner effect), found in the charged anyon\nfluid at $T=0$, does not disappear as soon as the system is heated.\n\nAs it is shown below, the presence of boundaries can affect the dynamics of\nthe system in such a way that the mode that accounts for a homogeneous field\npenetration \\cite{8} cannot propagate in the bulk. Although these results\nhave been proved for two types of samples, the half-plane \\cite{16} and the\ninfinite strip reported in this paper, we conjecture that similar effects\nshould also exist in other geometries.\n\nOur main conclusion is that the magnetic behavior of the anyon fluid is not\njust determined by its bulk properties, but it is essentially affected by\nthe sample boundary conditions. The importance of the boundary conditions in\n2+1 dimensional models has been previously stressed in ref.\\cite{b}.\n\nThe plan for the paper is as follows. In Sec. 2, for completeness as well as\nfor the convenience of the reader, we define the many-particle 2+1\ndimensional MCS model used to describe the charged anyon fluid, and briefly\nreview its main characteristics. In Sec. 3 we study the magnetic response in\nthe self-consistent field approximation of a charged anyon fluid confined to\nan infinite-strip, finding the analytical solution of the MCS field\nequations that satisfies the boundary conditions. The fermion contribution\nin this approximation is given by the corresponding polarization operators\nat $T\\neq 0$ in the background of a many-particle induced Chern-Simons\nmagnetic field. Using these polarization operators in the low temperature\napproximation ($T\\ll \\omega _{c}$), we determine the system's two London\npenetration depths. Taking into account that the boundary conditions are not\nenough to completely determine the magnetic field solution within the\nsample, an extra physical condition, the minimization of the system\nfree-energy density, is imposed. This is done in Sec. 4. In this section we\nprove that even though the electromagnetic field has a long-range mode of\npropagation in the charged anyon fluid at $T\\neq 0$ \\cite{8}, a constant and\nuniform magnetic field applied at the sample's boundaries cannot propagate\nthrough this mode. The explicit temperature dependence at $T\\ll \\omega _{c}$\nof all the coefficients appearing in the magnetic field solution, and of the\neffective London penetration depth are also found. In Sec. 5, we discuss how\nthe superconducting behavior of the charged anyon fluid disappears at\ntemperatures larger than the energy gap ($T\\gg \\omega _{c}$). Sec. 6\ncontains the summary and discussion.\n\n\\section{MCS Many-Particle Model}\n\nThe Lagrangian density of the 2+1 dimensional non-relativistic charged MCS\nsystem is\n\n\\begin{equation}\n{\\cal L}=-\\frac{1}{4}F_{\\mu \\nu }^{2}-\\frac{N}{4\\pi }\\varepsilon ^{\\mu \\nu\n\\rho }a_{\\mu }\\partial _{\\nu }a_{\\rho }+en_{e}A_{0}+i\\psi ^{\\dagger\n}D_{0}\\psi -\\frac{1}{2m}\\left| D_{k}\\psi \\right| ^{2}+\\psi ^{\\dagger }\\mu\n\\psi \\eqnum{2.1}\n\\end{equation}\nwhere $A_{\\mu }$ and $a_{\\mu }$ represent the electromagnetic and the\nChern-Simons fields respectively. 
The role of the Chern-Simons fields is\nsimply to change the quantum statistics of the matter field, thus, they do\nnot have an independent dynamics. $\\psi $ represents non-relativistic\nspinless fermions. $N\\ $ is a positive integer that determines the magnitude\nof the Chern-$%\n\\mathop{\\rm Si}%\n$mons coupling constant. The charged character of the system is implemented\nby introducing a chemical potential $\\mu $; $n_{e}$ is a background\nneutralizing `classical' charge density, and $m$ is the fermion mass. We\nwill consider throughout the paper the metric $g_{\\mu \\nu }$=$(1,-%\n\\overrightarrow{1})$. The covariant derivative $D_{\\nu }$ is given by\n\n\\begin{equation}\nD_{\\nu }=\\partial _{\\nu }+i\\left( a_{\\nu }+eA_{\\nu }\\right) ,\\qquad \\nu\n=0,1,2 \\eqnum{2.2}\n\\end{equation}\n\nIt is known that to guarantee the system neutrality in the presence of a\ndifferent from zero fermion density $\\left( n_{e}\\neq 0\\right) $,$\\ $a\nnontrivial background of Chern-Simons magnetic field $\\left( \\overline{b}=%\n\\overline{f}_{21}\\right) $ is generated. The Chern-Simons background field\nis obtained as the solution of the mean field Euler-Lagrange equations\nderived from (2.1)\n\n\\begin{mathletters}\n\\begin{equation}\n-\\frac{N}{4\\pi }\\varepsilon ^{\\mu \\nu \\rho }f_{\\nu \\rho }=\\left\\langle\nj^{\\mu }\\right\\rangle \\eqnum{2.3}\n\\end{equation}\n\n\\begin{equation}\n\\partial _{\\nu }F^{\\mu \\nu }=e\\left\\langle j^{\\mu }\\right\\rangle\n-en_{e}\\delta ^{\\mu 0} \\eqnum{2.4}\n\\end{equation}\nconsidering that the system formed by the electron fluid and the background\ncharge $n_{e}$ is neutral\n\n\\end{mathletters}\n\\begin{equation}\n\\left\\langle j^{0}\\right\\rangle -n_{e}\\delta ^{\\mu 0}=0 \\eqnum{2.5}\n\\end{equation}\nIn eq. 
(2.5) $\\left\\langle j^{0}\\right\\rangle $ is the fermion density of\nthe many-particle fermion system\n\n\\begin{equation}\n\\left\\langle j^{0}\\right\\rangle =\\frac{\\partial \\Omega }{\\partial \\mu }, \n\\eqnum{2.6}\n\\end{equation}\n$\\Omega $ is the fermion thermodynamic potential.\n\nIn this approximation it is found from (2.3)-(2.5) that the Chern-Simons\nmagnetic background is given by\n\n\\begin{equation}\n\\overline{b}=\\frac{2\\pi n_{e}}{N} \\eqnum{2.7}\n\\end{equation}\n\nThen, the unperturbed one-particle Hamiltonian of the matter field\nrepresents a particle in the background of the Chern-Simons magnetic field $%\n\\overline{b\\text{,}}$\n\n\\begin{equation}\nH_{0}=-\\frac{1}{2m}\\left[ \\left( \\partial _{1}+i\\overline{b}x_{2}\\right)\n^{2}+\\partial _{2}^{2}\\right] \\eqnum{2.8}\n\\end{equation}\nIn (2.8) we considered the background Chern-Simons potential, $\\overline{a}%\n_{k}$, $(k=1,2)$, in the Landau gauge\n\n\\begin{equation}\n\\overline{a}_{k}=\\overline{b}x_{2}\\delta _{k1} \\eqnum{2.9}\n\\end{equation}\n\nThe eigenvalue problem defined by the Hamiltonian (2.8) with periodic\nboundary conditions in the $x_{1}$-direction: $\\Psi \\left( x_{1}+L,\\text{ }%\nx_{2}\\right) =$ $\\Psi \\left( x_{1},\\text{ }x_{2}\\right) $,\n\n\\begin{equation}\nH_{0}\\Psi _{nk}=\\epsilon _{n}\\Psi _{nk},\\qquad n=0,1,2,...\\text{ }and\\text{ }%\nk\\in {\\cal Z} \\eqnum{2.10}\n\\end{equation}\nhas eigenvalues and eigenfunctions given respectively by\n\n\\begin{equation}\n\\epsilon _{n}=\\left( n+\\frac{1}{2}\\right) \\omega _{c}\\qquad \\eqnum{2.11}\n\\end{equation}\n\n\\begin{equation}\n\\Psi _{nk}=\\frac{\\overline{b}^{1\/4}}{\\sqrt{L}}\\exp \\left( -2\\pi\nikx_{1}\/L\\right) \\varphi _{n}\\left( x_{2}\\sqrt{\\overline{b}}-\\frac{2\\pi k}{L%\n\\sqrt{\\overline{b}}}\\right) \\eqnum{2.12}\n\\end{equation}\nwhere $\\omega _{c}=\\overline{b}\/m$ is the cyclotron frequency and $\\varphi\n_{n}\\left( \\xi \\right) $ are the orthonormalized harmonic oscillator wave\nfunctions.\n\nNote that the energy levels $\\epsilon _{n}$ are degenerates (they do not\ndepend on $k$). Then, for each Landau level $n$ there exists a band of\ndegenerate states. The cyclotron frequency $\\omega _{c}$ plays here the role\nof the energy gap between occupied Landau levels. It is easy to prove that\nthe filling factor, defined as the ratio between the density of particles $%\nn_{e}$ and the number of states per unit area of a full Landau level, is\nequal to the Chern-$%\n\\mathop{\\rm Si}%\n$mons coupling constant $N$. Thus, because we are considering that $N$ is a\npositive integer, we have in this MCS theory $N$ completely filled Landau\nlevels. Once this ground state is established, it can be argued immediately \n\\cite{6}, \\cite{6b}, \\cite{10a}, \\cite{15}, that at $T=0$ the system will be\nconfined to a filled band, which is separated by an energy gap from the free\nstates; therefore, it is natural to expect that at $T=0$ the system should\nsuperconduct. This result is already a well established fact on the basis of\nHartree-Fock analysis\\cite{6} and Random Phase Approximation \\cite{6b},\\cite\n{10a}. The case at $T\\neq 0$ is more controversial since thermal\nfluctuations, occurring in the many-particle system, can produce significant\nchanges. 
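The size of these thermal effects is controlled by the same gap. Each Landau level in the background field $\\overline{b}$ accommodates $\\overline{b}/2\\pi $ states per unit area, so that, using (2.7), the filling factor is
\\[
\\nu =\\frac{n_{e}}{\\overline{b}/2\\pi }=\\frac{2\\pi n_{e}}{\\overline{b}}=N.
\\]
Hence exactly $N$ Landau levels are occupied at $T=0$, the first available excited states lie an energy $\\omega _{c}$ above the highest filled level, and thermal corrections to this ground state are governed by the ratio $T/\\omega _{c}$.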
As we will show in this paper, the presence in this theory of a\nnatural scale, the cyclotron frequency $\\omega _{c}$, is crucial for the\nexistence of a phase at $T\\ll \\omega _{c}$, on which the system, when\nconfined to a bounded region, still behaves as a superconductor.\n\nThe fermion thermal Green's function in the presence of the background\nChern-Simons field $\\overline{b}$\n\n\\begin{equation}\nG\\left( x,x^{\\prime }\\right) =-\\left\\langle T_{\\tau }\\psi \\left( x\\right) \n\\overline{\\psi }\\left( x^{\\prime }\\right) \\right\\rangle \\eqnum{2.13}\n\\end{equation}\nis obtained by solving the equation\n\n\\begin{equation}\n\\left( \\partial _{\\tau }-\\frac{1}{2m}\\overline{D}_{k}^{2}-\\mu \\right)\nG\\left( x,x^{\\prime }\\right) =-\\delta _{3}\\left( x-x^{\\prime }\\right) \n\\eqnum{2.14}\n\\end{equation}\nsubject to the requirement of antiperiodicity under the imaginary time\ntranslation $\\tau \\rightarrow \\tau +\\beta $ ($\\beta $ is the inverse\nabsolute temperature). In (2.14) we have introduced the notation\n\n\\begin{equation}\n\\overline{D}_{k}=\\partial _{k}+i\\overline{a}_{k} \\eqnum{2.15}\n\\end{equation}\n\nThe Fourier transform of the fermion thermal Green's function (2.13)\n\n\\begin{equation}\nG\\left( p_{4},{\\bf p}\\right) =\\int\\limits_{0}^{\\beta }d\\tau \\int d{\\bf x}%\nG\\left( \\tau ,{\\bf x}\\right) e^{i\\left( p_{4}\\tau -{\\bf px}\\right) } \n\\eqnum{2.16}\n\\end{equation}\ncan be expressed in terms of the orthonormalized harmonic oscillator wave\nfunctions $\\varphi _{n}\\left( \\xi \\right) $ as \\cite{Efrain}\n\n\\begin{eqnarray}\nG\\left( p_{4},{\\bf p}\\right) &=&\\int\\limits_{0}^{\\infty }d\\rho\n\\int\\limits_{-\\infty }^{\\infty }dx_{2}\\sqrt{\\overline{b}}\\exp -\\left(\nip_{2}x_{2}\\right) \\exp -\\left( ip_{4}+\\mu -\\frac{\\overline{b}}{2m}\\right)\n\\rho \\nonumber \\\\\n&&\\sum\\limits_{n=0}^{\\infty }\\varphi _{n}\\left( \\xi \\right) \\varphi\n_{n}\\left( \\xi ^{\\prime }\\right) t^{n} \\eqnum{2.17}\n\\end{eqnarray}\nwhere $t=\\exp \\frac{\\overline{b}}{m}\\rho $, $\\xi =\\frac{p_{1}}{\\sqrt{%\n\\overline{b}}}+\\frac{x_{2}\\sqrt{\\overline{b}}}{2}$, $\\xi ^{\\prime }=\\frac{%\np_{1}}{\\sqrt{\\overline{b}}}-\\frac{x_{2}\\sqrt{\\overline{b}}}{2}$ and $%\np_{4}=(2n+1)\\pi \/\\beta $ are the discrete frequencies $(n=0,1,2,...)$\ncorresponding to fermion fields.\n\n\\section{Linear Response in the Infinite Strip}\n\n\\subsection{Effective Theory at $\\mu \\neq 0$ and $T\\neq 0$}\n\nIn ref.\\cite{8} the effective current-current interaction of the MCS model\nwas calculated to determine the independent components of the magnetic\ninteraction at finite temperature in a sample without boundaries, i.e., in\nthe free space. These authors concluded that the pure Meissner effect\nobserved at zero temperature is certainly compromised by the appearance of a\nlong-range mode at $T\\neq 0$. Our main goal in the present paper is to\ninvestigate the magnetic response of the charged anyon fluid at finite\ntemperature for a sample that confines the fluid within some specific\nboundaries. As we prove henceforth, the confinement of the system to a\nbounded region (a condition which is closer to the experimental situation\nthan the free-space case) is crucial for the realization of the Meissner\neffect inside the charged anyon fluid at finite temperature.\n\nLet us investigate the linear response of a charged anyon fluid at finite\ntemperature and density to an externally applied magnetic field in the\nspecific case of an infinite-strip sample. 
The linear response of the medium\ncan be found under the assumption that the quantum fluctuations of the gauge\nfields about the ground-state are small. In this case the one-loop fermion\ncontribution to the effective action, obtained after integrating out the\nfermion fields, can be evaluated up to second order in the gauge fields. The\neffective action of the theory within this linear approximation \\cite{8},%\n\\cite{11} takes the form\n\n\\begin{equation}\n\\Gamma _{eff}\\,\\left( A_{\\nu },a_{\\nu }\\right) =\\int dx\\left( -\\frac{1}{4}%\nF_{\\mu \\nu }^{2}-\\frac{N}{4\\pi }\\varepsilon ^{\\mu \\nu \\rho }a_{\\mu }\\partial\n_{\\nu }a_{\\rho }+en_{e}A_{0}\\right) +\\Gamma ^{\\left( 2\\right) } \\eqnum{3.1}\n\\end{equation}\n\n\\[\n\\Gamma ^{\\left( 2\\right) }=\\int dx\\Pi ^{\\nu }\\left( x\\right) \\left[ a_{\\nu\n}\\left( x\\right) +eA_{\\nu }\\left( x\\right) \\right] +\\int dxdy\\left[ a_{\\nu\n}\\left( x\\right) +eA_{\\nu }\\left( x\\right) \\right] \\Pi ^{\\mu \\nu }\\left(\nx,y\\right) \\left[ a_{\\nu }\\left( y\\right) +eA_{\\nu }\\left( y\\right) \\right] \n\\]\nHere $\\Gamma ^{\\left( 2\\right) }$ is the one-loop fermion contribution to\nthe effective action in the linear approximation. The operators $\\Pi _{\\nu }$\nand $\\Pi _{\\mu \\nu }$ are calculated using the fermion thermal Green's\nfunction in the presence of the background field $\\overline{b}$ (2.17). They\nrepresent the fermion tadpole and one-loop polarization operators\nrespectively. Their leading behaviors for static $\\left( k_{0}=0\\right) $\nand slowly $\\left( {\\bf k}\\sim 0\\right) $ varying configurations in the\nframe ${\\bf k}=(k,0)$ take the form\n\n\\begin{equation}\n\\Pi _{k}\\left( x\\right) =0,\\;\\;\\;\\Pi _{0}\\left( x\\right) =-n_{e},\\;\\;\\;\\Pi\n_{\\mu \\nu }=\\left( \n\\begin{array}{ccc}\n{\\it \\Pi }_{{\\it 0}}+{\\it \\Pi }_{{\\it 0}}\\,^{\\prime }\\,k^{2} & 0 & {\\it \\Pi }%\n_{{\\it 1}}k \\\\ \n0 & 0 & 0 \\\\ \n-{\\it \\Pi }_{{\\it 1}}k & 0 & {\\it \\Pi }_{\\,{\\it 2}}k^{2}\n\\end{array}\n\\right) , \\eqnum{3.2}\n\\end{equation}\n\nThe independent coefficients: ${\\it \\Pi }_{{\\it 0}}$, ${\\it \\Pi }_{{\\it 0}%\n}\\,^{\\prime }$, ${\\it \\Pi }_{{\\it 1}}$ and ${\\it \\Pi }_{\\,{\\it 2}}$ are\nfunctions of $k^{2}$, $\\mu $ and $\\overline{b}$. In order to find them we\njust need to calculate the $\\Pi _{\\mu \\nu }$ Euclidean components: $\\Pi\n_{44} $, $\\Pi _{42}$ and $\\Pi _{22}$. In the Landau gauge these Euclidean\ncomponents are given by\\cite{11},\n\n\\begin{mathletters}\n\\begin{equation}\n\\Pi _{44}\\left( k,\\mu ,\\overline{b}\\right) =-\\frac{1}{\\beta }%\n\\sum\\limits_{p_{4}}\\frac{d{\\bf p}}{\\left( 2\\pi \\right) ^{2}}G\\left( p\\right)\nG\\left( p-k\\right) , \\eqnum{3.3}\n\\end{equation}\n\n\\begin{equation}\n\\Pi _{4j}\\left( k,\\mu ,\\overline{b}\\right) =\\frac{i}{2m\\beta }%\n\\sum\\limits_{p_{4}}\\frac{d{\\bf p}}{\\left( 2\\pi \\right) ^{2}}\\left\\{ G\\left(\np\\right) \\cdot D_{j}^{-}G\\left( p-k\\right) +D_{j}^{+}G\\left( p\\right) \\cdot\nG\\left( p-k\\right) \\right\\} , \\eqnum{3.4}\n\\end{equation}\n\n\\end{mathletters}\n\\begin{eqnarray}\n\\Pi _{jk}\\left( k,\\mu ,\\overline{b}\\right) &=&\\frac{1}{4m^{2}\\beta }%\n\\sum\\limits_{p_{4}}\\frac{d{\\bf p}}{\\left( 2\\pi \\right) ^{2}}\\left\\{\nD_{k}^{-}G\\left( p\\right) \\cdot D_{j}^{-}G\\left( p-k\\right)\n+D_{j}^{+}G\\left( p\\right) \\cdot D_{k}^{+}G\\left( p-k\\right) \\right. \n\\nonumber \\\\\n&&\\left. 
+D_{j}^{+}D_{k}^{-}G\\left( p\\right) \\cdot G\\left( p-k\\right)\n+G\\left( p\\right) \\cdot D_{j}^{-}D_{k}^{+}G\\left( p-k\\right) \\right\\} \n\\nonumber \\\\\n&&-\\frac{1}{2m}\\Pi _{4}, \\eqnum{3.5}\n\\end{eqnarray}\nwhere the notation\n\n\\begin{eqnarray}\nD_{j}^{\\pm }G\\left( p\\right) &=&\\left[ ip_{j}\\mp \\frac{\\overline{b}}{2}%\n\\varepsilon ^{jk}\\partial _{p_{k}}\\right] G\\left( p\\right) , \\nonumber \\\\\nD_{j}^{\\pm }G\\left( p-k\\right) &=&\\left[ i\\left( p_{j}-k_{j}\\right) \\mp \n\\frac{\\overline{b}}{2}\\varepsilon ^{jk}\\partial _{p_{k}}\\right] G\\left(\np-k\\right) , \\eqnum{3.6}\n\\end{eqnarray}\nwas used.\n\nUsing (3.3)-(3.5) after summing in $p_{4}$, we found that, in the $k\/\\sqrt{%\n\\overline{b}}\\ll 1$ limit, the polarization operator coefficients ${\\it \\Pi }%\n_{{\\it 0}}$, ${\\it \\Pi }_{{\\it 0}}\\,^{\\prime }$, ${\\it \\Pi }_{{\\it 1}}$ and $%\n{\\it \\Pi }_{\\,{\\it 2}}$ are\n\n\\[\n{\\it \\Pi }_{{\\it 0}}=\\frac{\\beta \\overline{b}}{8\\pi {\\bf k}^{2}}%\n\\sum_{n}\\Theta _{n},\\;\\qquad {\\it \\Pi }_{{\\it 0}}\\,^{\\prime }=\\frac{2m}{\\pi \n\\overline{b}}\\sum_{n}\\Delta _{n}-\\frac{\\beta }{8\\pi }\\sum_{n}(2n+1)\\Theta\n_{n}, \n\\]\n\n\\[\n{\\it \\Pi }_{{\\it 1}}=\\frac{1}{\\pi }\\sum_{n}\\Delta _{n}-\\frac{\\beta \\overline{%\nb}}{16\\pi m}\\sum_{n}(2n+1)\\Theta _{n},\\qquad {\\it \\Pi }_{\\,{\\it 2}}=\\frac{1}{%\n\\pi m}\\sum_{n}(2n+1)\\Delta _{n}-\\frac{\\beta \\overline{b}}{32\\pi m^{2}}%\n\\sum_{n}(2n+1)^{2}\\Theta _{n}, \n\\]\n\n\\begin{equation}\n\\Theta _{n}=%\n\\mathop{\\rm sech}%\n\\,^{2}\\frac{\\beta (\\epsilon _{n}\/2-\\mu )}{2},\\qquad \\Delta _{n}=(e^{\\beta\n(\\epsilon _{n}\/2-\\mu )}+1)^{-1} \\eqnum{3.7}\n\\end{equation}\n\nThe leading contributions of the one-loop polarization operator coefficients\n(3.7) at low temperatures $\\left( T\\ll \\omega _{c}\\right) $ are\n\n\\begin{equation}\n{\\it \\Pi }_{{\\it 0}}=\\frac{2\\beta \\overline{b}}{\\pi }e^{-\\beta \\overline{b}%\n\/2m},\\qquad {\\it \\Pi }_{{\\it 0}}\\,^{\\prime }=\\frac{mN}{2\\pi \\overline{b}}%\n{\\it \\Lambda },\\qquad {\\it \\Pi }_{{\\it 1}}=\\frac{N}{2\\pi }{\\it \\Lambda }%\n,\\quad {\\it \\Pi }_{\\,{\\it 2}}=\\frac{N^{2}}{4\\pi m}{\\it \\Lambda },\\qquad {\\it %\n\\Lambda }=\\left[ 1-\\frac{2\\beta \\overline{b}}{m}e^{-\\beta \\overline{b}%\n\/2m}\\right] \\eqnum{3.8}\n\\end{equation}\nand at high temperatures $\\left( T\\gg \\omega _{c}\\right) $ are\n\n\\begin{equation}\n{\\it \\Pi }_{{\\it 0}}=\\frac{m}{2\\pi }\\left[ \\tanh \\frac{\\beta \\mu }{2}%\n+1\\right] ,\\qquad {\\it \\Pi }_{{\\it 0}}\\,^{\\prime }=-\\frac{\\beta }{48\\pi }%\n\\mathop{\\rm sech}%\n\\!^{2}\\!\\,\\left( \\frac{\\beta \\mu }{2}\\right) ,\\qquad {\\it \\Pi }_{{\\it 1}}=%\n\\frac{\\overline{b}}{m}{\\it \\Pi }_{{\\it 0}}\\,^{\\prime },\\qquad {\\it \\Pi }_{\\,%\n{\\it 2}}=\\frac{1}{12m^{2}}{\\it \\Pi }_{{\\it 0}} \\eqnum{3.9}\n\\end{equation}\nIn these expressions $\\mu $ is the chemical potential and $m=2m_{e}$ ($m_{e}$\nis the electron mass). These results are in agreement with those found in\nrefs.\\cite{8},\\cite{14}.\n\n\\subsection{MCS Linear Equations}\n\nTo find the response of the anyon fluid to an externally applied magnetic\nfield, one needs to use the extremum equations derived from the effective\naction (3.1). This formulation is known in the literature as the\nself-consistent field approximation\\cite{11}. In solving these equations we\nconfine our analysis to gauge field configurations which are static and\nuniform in the y-direction. 
Within this restriction we take a gauge in which \n$A_{1}=a_{1}=0$.\n\nThe Maxwell and Chern-Simons extremum equations are respectively,\n\n\\begin{equation}\n\\partial _{\\nu }F^{\\nu \\mu }=eJ_{ind}^{\\mu } \\eqnum{3.10a}\n\\end{equation}\n\n\\begin{equation}\n-\\frac{N}{4\\pi }\\varepsilon ^{\\mu \\nu \\rho }f_{\\nu \\rho }=J_{ind}^{\\mu } \n\\eqnum{3.10b}\n\\end{equation}\nHere, $f_{\\mu \\nu }$ is the Chern-Simons gauge field strength tensor,\ndefined as $f_{\\mu \\nu }=\\partial _{\\mu }a_{\\nu }-\\partial _{\\nu }a_{\\mu }$,\nand $J_{ind}^{\\mu }$ is the current density induced by the anyon system at\nfinite temperature and density. Their different components are given by\n\n\\begin{equation}\nJ_{ind}^{0}\\left( x\\right) ={\\it \\Pi }_{{\\it 0}}\\left[ a_{0}\\left( x\\right)\n+eA_{0}\\left( x\\right) \\right] +{\\it \\Pi }_{{\\it 0}}\\,^{\\prime }\\partial\n_{x}\\left( {\\cal E}+eE\\right) +{\\it \\Pi }_{{\\it 1}}\\left( b+eB\\right) \n\\eqnum{3.11a}\n\\end{equation}\n\n\\begin{equation}\nJ_{ind}^{1}\\left( x\\right) =0,\\qquad J_{ind}^{2}\\left( x\\right) ={\\it \\Pi }_{%\n{\\it 1}}\\left( {\\cal E}+eE\\right) +{\\it \\Pi }_{\\,{\\it 2}}\\partial _{x}\\left(\nb+eB\\right) \\eqnum{3.11b}\n\\end{equation}\nin the above expressions we used the following notation: ${\\cal E}=f_{01}$, $%\nE=F_{01}$, $b=f_{12}$ and $B=F_{12}$. Eqs. (3.11) play the role in the anyon\nfluid of the London equations in BCS superconductivity. When the induced\ncurrents (3.11) are substituted in eqs. (3.10) we find, after some\nmanipulation, the set of independent differential equations,\n\n\\begin{equation}\n\\omega \\partial _{x}^{2}B+\\alpha B=\\gamma \\left[ \\partial _{x}E-\\sigma\nA_{0}\\right] +\\tau \\,a_{0}, \\eqnum{3.12}\n\\end{equation}\n\n\\begin{equation}\n\\partial _{x}B=\\kappa \\partial _{x}^{2}E+\\eta E, \\eqnum{3.13}\n\\end{equation}\n\n\\begin{equation}\n\\partial _{x}a_{0}=\\chi \\partial _{x}B \\eqnum{3.14}\n\\end{equation}\nThe coefficients appearing in these differential equations depend on the\ncomponents of the polarization operators through the relations\n\n\\[\n\\omega =\\frac{2\\pi }{N}{\\it \\Pi }_{{\\it 0}}\\,^{\\prime },\\quad \\alpha =-e^{2}%\n{\\it \\Pi }_{{\\it 1}},\\quad \\tau =e{\\it \\Pi }_{{\\it 0}},\\quad \\chi =\\frac{%\n2\\pi }{eN},\\quad \\sigma =-\\frac{e^{2}}{\\gamma }{\\it \\Pi }_{{\\it 0}},\\quad\n\\eta =-\\frac{e^{2}}{\\delta }{\\it \\Pi }_{{\\it 1}}, \n\\]\n\n\\begin{equation}\n\\gamma =1+e^{2}{\\it \\Pi }_{{\\it 0}}\\,^{\\prime }-\\frac{2\\pi }{N}{\\it \\Pi }_{%\n{\\it 1}},\\quad \\delta =1+e^{2}{\\it \\Pi }_{\\,{\\it 2}}-\\frac{2\\pi }{N}{\\it \\Pi \n}_{{\\it 1}},\\quad \\kappa =\\frac{2\\pi }{N\\delta }{\\it \\Pi }_{\\,{\\it 2}}. \n\\eqnum{3.15}\n\\end{equation}\n\nDistinctive of eq. (3.12) is the presence of the nonzero coefficients $%\n\\sigma $ and $\\tau $. They are related to the Debye screening in the two\ndimensional anyon thermal ensemble. A characteristic of this 2+1 dimensional\nmodel is that the Debye screening disappears at $T=0$, even if the chemical\npotential is different from zero. Note that $\\sigma $ and $\\tau $ link the\nmagnetic field to the zero components of the gauge potentials, $A_{0}$ and $%\na_{0}$. 
As a consequence, these gauge potentials will play a nontrivial role\nin finding the magnetic field solution of the system.\n\n\\subsection{Field Solutions and Boundary Conditions}\n\nUsing eqs.(3.12)-(3.14) one can obtain a higher order differential equation\nthat involves only the electric field,\n\n\\begin{equation}\na\\partial _{x}^{4}E+d\\partial _{x}^{2}E+cE=0, \\eqnum{3.16}\n\\end{equation}\nHere, $a=\\omega \\kappa $, $d=\\omega \\eta +\\alpha \\kappa -\\gamma -\\tau \\kappa\n\\chi $, and $c=\\alpha \\eta -\\sigma \\gamma -\\tau \\eta \\chi $.\n\nSolving (3.16) we find\n\n\\begin{equation}\nE\\left( x\\right) =C_{1}e^{-x\\xi _{1}}+C_{2}e^{x\\xi _{1}}+C_{3}e^{-x\\xi\n_{2}}+C_{4}e^{x\\xi _{2}}, \\eqnum{3.17}\n\\end{equation}\nwhere\n\n\\begin{equation}\n\\xi _{1,2}=\\left[ -d\\pm \\sqrt{d^{2}-4ac}\\right] ^{\\frac{1}{2}}\/\\sqrt{2a} \n\\eqnum{3.18}\n\\end{equation}\nWhen the low density approximation $n_{e}\\ll m^{2}$ is considered (this\napproximation is in agreement with the typical values $n_{e}=2\\times\n10^{14}cm^{-2}$, $m_{e}=2.6\\times 10^{10}cm^{-1}$ found in high-T$_{C}$\nsuperconductivity), we find that, for $N=2$ and at temperatures lower than\nthe energy gap $\\left( T\\ll \\omega _{c}\\right) $, the inverse length scales\n(3.18) are given by the following real functions \n\\begin{equation}\n\\xi _{1}\\simeq e\\sqrt{\\frac{m}{\\pi }}\\left[ 1+\\left( \\frac{\\pi ^{2}n_{e}^{2}%\n}{m^{3}}\\right) \\beta \\exp -\\left( \\frac{\\pi n_{e}\\beta }{2m}\\right) \\right]\n\\eqnum{3.19}\n\\end{equation}\n\\begin{equation}\n\\xi _{2}\\simeq e\\sqrt{\\frac{n_{e}}{m}}\\left[ 1-\\left( \\frac{\\pi n_{e}}{m}%\n\\right) \\beta \\exp -\\left( \\frac{\\pi n_{e}\\beta }{2m}\\right) \\right] \n\\eqnum{3.20}\n\\end{equation}\nThese two inverse length scales correspond to two short-range\nelectromagnetic modes of propagation. These results are consistent with\nthose obtained in ref. \\cite{8} using a different approach. If the masses of\nthe two massive modes, obtained in ref. \\cite{8} by using the\nelectromagnetic thermal Green's function for static and slowly varying\nconfigurations, are evaluated in the range of parameters considered above,\nit can be shown that they reduce to eqs. (319), (3.20).\n\nThe solutions for $B$, $a_{0}$ and $A_{0}$, can be obtained using eqs.\n(3.13), (3.14), (3.17) and the definition of $E$ in terms of $A_{0,}$\n\n\\begin{equation}\nB\\left( x\\right) =\\gamma _{1}\\left( C_{2}e^{x\\xi _{1}}-C_{1}e^{-x\\xi\n_{1}}\\right) +\\gamma _{2}\\left( C_{4}e^{x\\xi _{2}}-C_{3}e^{-x\\xi\n_{2}}\\right) +C_{5} \\eqnum{3.21}\n\\end{equation}\n\n\\begin{equation}\na_{0}\\left( x\\right) =\\chi \\gamma _{1}\\left( C_{2}e^{x\\xi\n_{1}}-C_{1}e^{-x\\xi _{1}}\\right) +\\chi \\gamma _{2}\\left( C_{4}e^{x\\xi\n_{2}}-C_{3}e^{-x\\xi _{2}}\\right) +C_{6} \\eqnum{3.22}\n\\end{equation}\n\n\\begin{equation}\nA_{0}\\left( x\\right) =\\frac{1}{\\xi _{1}}\\left( C_{1}e^{-x\\xi\n_{1}}-C_{2}e^{x\\xi _{1}}\\right) +\\frac{1}{\\xi _{2}}\\left( C_{3}e^{-x\\xi\n_{2}}-C_{4}e^{x\\xi _{2}}\\right) +C_{7} \\eqnum{3.23}\n\\end{equation}\nIn the above formulas we introduced the notation $\\gamma _{1}=\\left( \\xi\n_{1}^{2}\\kappa +\\eta \\right) \/\\xi _{1}$, $\\gamma _{2}=\\left( \\xi\n_{2}^{2}\\kappa +\\eta \\right) \/\\xi _{2}$.\n\nIn obtaining eq. (3.16) we have taken the derivative of eq. (3.12).\nTherefore, the solution of eq. (3.16) belongs to a wider class than the one\ncorresponding to eqs. (3.12)-(3.14). To exclude redundant solutions we must\nrequire that they satisfy eq. 
(3.12) as a supplementary condition. In this\nway the number of independent unknown coefficients is reduced to six, which\nis the number corresponding to the original system (3.12)-(3.14). The extra\nunknown coefficient is eliminated substituting the solutions (3.17), (3.21),\n(3.22) and (3.23) into eq. (3.12) to obtain the relation\n\n\\begin{equation}\ne{\\it \\Pi }_{{\\it 1}}C_{5}=-{\\it \\Pi }_{{\\it 0}}\\left( C_{6}+eC_{7}\\right) \n\\eqnum{3.24}\n\\end{equation}\n\nEq. (3.24) has an important meaning, it establishes a connection between the\ncoefficients of the long-range modes of the zero components of the gauge\npotentials $(C_{6}+eC_{7})$ and the coefficient of the long-range mode of\nthe magnetic field $C_{5}$. Note that if the induced Chern-Simons\ncoefficient ${\\it \\Pi }_{{\\it 1}}$, or the Debye screening coefficient ${\\it %\n\\Pi }_{{\\it 0}}$ were zero, there would be no link between $C_{5}$ and $%\n(C_{6}+eC_{7})$. This relation between the long-range modes of $B$, $A_{0}$\nand $a_{0}$ can be interpreted as a sort of Aharonov-Bohm effect, which\noccurs in this system at finite temperature. At $T=0$, we have ${\\it \\Pi }_{%\n{\\it 0}}=0$, and the effect disappears.\n\nUp to this point no boundary has been taken into account. Therefore, it is\neasy to understand that the magnetic long-range mode associated with the\ncoefficient $C_{5}$, must be identified with the one found in ref. \\cite{8}\nfor the infinite bulk using a different approach. However, as it is shown\nbelow, when a constant and uniform magnetic field is perpendicularly applied\nat the boundaries of a two-dimensional sample, this mode cannot propagate\n(i.e. $C_{5}\\equiv 0$) within the sample. This result is crucial for the\nexistence of Meissner effect in this system.\n\nIn order to determine the unknown coefficients we need to use the boundary\nconditions. Henceforth we consider that the anyon fluid is confined to the\nstrip $-\\infty 0$, it can be shown that the\neffective penetration length $\\overline{\\lambda }$ (defined as the distance $%\nx$ where the magnetic field falls down to a value $B\\left( \\overline{\\lambda \n}\\right) \/\\overline{B}=e^{-1}$) increases with the temperature as\n\n\\begin{equation}\n\\overline{\\lambda }\\simeq \\overline{\\lambda }_{0}\\left( 1+\\overline{\\kappa }%\n\\beta \\exp -\\frac{1}{2}\\overline{\\kappa }\\beta \\right) \\eqnum{4.8}\n\\end{equation}\nwhere $\\overline{\\lambda }_{0}=\\sqrt{m\/n_{e}e^{2}}$ and $\\overline{\\kappa }%\n=\\pi n_{e}\/m$. At $T=200K$ the effective penetration length is $\\overline{%\n\\lambda }\\sim 10^{-5}cm$.\n\nIt is timely to note that the presence of explicit (proportional to $N$) and\ninduced (proportional to ${\\it \\Pi }_{{\\it 1}}$) Chern-Simons terms in the\nanyon effective action (3.1) is crucial to obtain the Meissner solution\n(4.4). If the Chern-Simons interaction is disconnected ($N\\rightarrow \\infty \n$ and ${\\it \\Pi }_{{\\it 1}}=0$), then $a=0,$ $d=1+e^{2}{\\it \\Pi }_{{\\it 0}%\n}{}^{\\prime }\\neq 0$ and $c=e^{2}{\\it \\Pi }_{{\\it 0}}\\,\\neq 0$ in eq.\n(3.16). In that case the solution of the field equations within the sample\nare $E=0$, $B=\\overline{B}$. That is, we regain the QED in (2+1)-dimensions,\nwhich does not exhibit any superconducting behavior.\n\n\\section{High Temperature Non-Superconducting Phase}\n\nWe have just found that the charged anyon fluid confined to an infinite\nstrip exhibits Meissner effect at temperatures lower than the energy gap $%\n\\omega _{c}$. 
It is natural to expect that at temperatures larger than the\nenergy gap this superconducting behavior should not exist. At those\ntemperatures the electron thermal fluctuations should make available the\nfree states existing beyond the energy gap. As a consequence, the charged\nanyon fluid should not be a perfect conductor at $T\\gg \\omega _{c}$. A\nsignal of such a transition can be found by studying the magnetic response of\nthe system at those temperatures.\n\nAs can be seen from the magnetic field solution (4.4), the real character of\nthe inverse length scales (3.18) is crucial for the realization of the\nMeissner effect. At temperatures much lower than the energy gap this is\nindeed the case, as can be seen from eqs. (3.19) and (3.20).\n\nIn the high temperature $\\left( T\\gg \\omega _{c}\\right) $ region the\npolarization operator coefficients are given by eq. (3.9). Using this\napproximation together with the assumption $n_{e}\\ll m^{2}$, we can\ncalculate the coefficients $a$, $c$ and $d$ that define the behavior of the\ninverse length scales,\n\n\\begin{equation}\na\\simeq \\pi ^{2}{\\it \\Pi }_{{\\it 0}}{}^{\\prime }{\\it \\Pi }_{\\,{\\it 2}} \n\\eqnum{5.1}\n\\end{equation}\n\n\\begin{equation}\nc\\simeq e^{2}{\\it \\Pi }_{{\\it 0}}{} \\eqnum{5.2}\n\\end{equation}\n\n\\begin{equation}\nd\\simeq -1 \\eqnum{5.3}\n\\end{equation}\n\nSubstituting (5.1)-(5.3) into eq. (3.18) we obtain that the inverse\nlength scales in the high-temperature limit are given by\n\n\\begin{equation}\n\\xi _{1}\\simeq e\\sqrt{m\/2\\pi }\\left( \\tanh \\frac{\\beta \\mu }{2}+1\\right) ^{%\n\\frac{1}{2}} \\eqnum{5.4}\n\\end{equation}\n\n\\begin{equation}\n\\xi _{2}\\simeq i\\left[ 24\\sqrt{\\frac{2m}{\\beta }}\\cosh \\frac{\\beta \\mu }{2}%\n\\left( \\tanh \\frac{\\beta \\mu }{2}+1\\right) ^{-\\frac{1}{2}}\\right] \n\\eqnum{5.5}\n\\end{equation}\n\nThe fact that $\\xi _{2}$ becomes imaginary at temperatures larger than the\nenergy gap, $\\omega _{c}$, implies that the term $\\gamma _{2}\\left(\nC_{4}e^{x\\xi _{2}}-C_{3}e^{-x\\xi _{2}}\\right) $ in the magnetic field\nsolution (3.21) ceases to have a damping behavior, giving rise to a periodic\ninhomogeneous penetration. Therefore, the fluid does not exhibit a Meissner\neffect at those temperatures since the magnetic field will not be totally\nscreened. This corroborates our initial hypothesis that at $T\\gg \\omega _{c}$\nthe anyon fluid is in a new phase in which the magnetic field can penetrate\nthe sample.\n\nWe expect that a critical temperature of the order of the energy gap ($T\\sim\n\\omega _{c}$) separates the superconducting phase $\\left( T\\ll \\omega\n_{c}\\right) $ from the non-superconducting one $\\left( T\\gg \\omega\n_{c}\\right) $. Nevertheless, the temperature approximations (3.8) and (3.9)\nare not suitable to perform the calculation needed to find the phase\ntransition temperature. The field solutions in this new non-superconducting\nphase are currently under investigation. The results will be published\nelsewhere.\n\n\\section{Concluding Remarks}\n\nIn this paper we have investigated the magnetic response at finite\ntemperature of a charged anyon fluid confined to an infinite strip. The\ncharged anyon fluid was modeled by a (2+1)-dimensional MCS theory in a\nmany-particle ($\\mu \\neq 0$, $\\overline{b}\\neq 0$) ground state. 
The\nparticle energy spectrum of the theory exhibits a band structure given by\ndifferent Landau levels separated by an energy gap $\\omega _{c}$, which is\nproportional to the background Chern-Simons magnetic field $\\overline{b}$.\nWe found that the energy gap $\\omega _{c}$ defines a scale that separates\ntwo phases: a superconducting phase at $T\\ll \\omega _{c}$, and a\nnon-superconducting one at $T\\gg \\omega _{c}$.\n\nThe total magnetic screening in the superconducting phase is characterized\nby two penetration lengths corresponding to two short-range eigenmodes of\npropagation of the electromagnetic field within the anyon fluid. The\nexistence of a Meissner effect at finite temperature is the consequence of\nthe fact that a third electromagnetic mode, of a long-range nature, which is\npresent at finite temperature in the infinite bulk \\cite{8}, does not\npropagate within the infinite strip when a uniform and constant magnetic\nfield is applied at the boundaries. This is a significant property since the\nsamples used to test the Meissner effect in high-$T_{c}$ superconductors are\nbounded.\n\nIt is noteworthy that the existence at finite temperature of a Debye\nscreening (${\\it \\Pi }_{{\\it 0}}\\,\\neq 0$) gives rise to a sort of\nAharonov-Bohm effect in this system with Chern-Simons interaction ($N$\nfinite, ${\\it \\Pi }_{{\\it 1}}\\neq 0$). When ${\\it \\Pi }_{{\\it 0}}\\,\\neq 0$,\nthe field combination $a_{0}+eA_{0}$ becomes physical because it enters\nthe field equations on the same footing as the electric and magnetic fields\n(see eq. (3.12)). A direct consequence of this fact is that the coefficient $%\nC_{5}$, associated with the long-range mode of the magnetic field, is linked\nto the coefficients $C_{6}$ and $C_{7}$ of the zero components of the\npotentials (see eq. (3.24)).\n\nWhen $T=0$, since ${\\it \\Pi }_{{\\it 0}}\\,=0$ and ${\\it \\Pi }_{{\\it 1}}\\neq 0$%\n, eq. (3.24) implies $C_{5}=0$. That is, at zero temperature the long-range\nmode is absent. This is the well-known Meissner effect of the anyon fluid at \n$T=0$. When $T\\neq 0$, eq. (3.24) alone is not enough to determine the value\nof $C_{5}$, since it is given in terms of $C_{6}$ and $C_{7}$ which are\nunknown. However, when eq. (3.24) is taken together with the field\nconfigurations that satisfy the boundary conditions for the infinite-strip\nsample (eqs. (3.17), (3.21)-(3.23) and (3.25)), and with the sample\nstability condition (4.3), we obtain that $C_{5}=0$. Thus, the combined\naction of the boundary conditions and the Aharonov-Bohm effect expressed by\neq. (3.24) accounts for the total screening of the magnetic field in the\nanyon fluid at finite temperature.\n\nFinally, at temperatures large enough ($T\\gg \\omega _{c}$) to excite the\nelectrons beyond the energy gap, we found that the superconducting behavior\nof the anyon fluid is lost. This result was achieved by studying the nature of\nthe characteristic lengths (3.18) in this high temperature approximation. We\nshowed that in this temperature region the characteristic length $\\xi _{2}$\nbecomes imaginary (eq. (5.5)), which means that a total damping solution for\nthe magnetic field does not exist any more, and hence the magnetic field\npenetrates the sample.\n\n\\begin{quote}\nAcknowledgments\n\\end{quote}\n\nThe authors are very grateful for stimulating discussions with Profs. G.\nBaskaran, A. Cabo, E.S. Fradkin, Y. Hosotani and J. Strathdee. We would also\nlike to thank Prof. S. Randjbar-Daemi for kindly bringing the publication of\nref. 
\\cite{b} to our attention. Finally, it is a pleasure for us to thank\nProf. Yu Lu for his hospitality during our stay at the ICTP, where part of\nthis work was done. This research has been supported in part by the National\nScience Foundation under Grant No. PHY-9414509.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Log-linear models with user groups} \\label{sec:augmentation}\n\n\nAs discussed above, a user-agnostic model such as \\eqref{eq:basicModel} does not do justice to the variability of language comprehension and production across different speakers and listeners. We will therefore extend it to a model which distinguishes different \\emph{user groups}. We will not try to model why\\footnote{E.g., in the sense of explicitly modeling sociolects or the difference between novice system users vs.\\ experts.} users behave differently. Instead our model sorts users into groups simply based on the way in which they respond to stimuli, in the sense of Section~\\ref{sec:basicModel}, and implements this by giving each group $g$ its own parameter vector $\\parAdapt{g}$. As a theoretical example, Group 1 might contain users who reliably comprehend REs which use colors (``the green button''), whereas Group 2 might contain users who more easily understand relational REs (``the button next to the lamp''). These groups are then discovered at training time.\n\n\nWhen our trained NLG system starts interacting with an unseen user $u$, it will infer the group to which $u$ belongs based on $u$'s observed responses to previous stimuli. Thus as the dialogue with $u$ unfolds, the system will have an increasingly precise estimate of the group to which $u$ belongs, and will thus be able to generate language which is increasingly well-tailored to this particular user.\n\n\\subsection{Generative story}\n\n\\begin{figure} \n\\begin{center}\n\t\\input{plate_diagram.tex}\n\\end{center}\n\\caption{Plate diagram for the user group model.} \\label{fig:plates_drawn}\n\\end{figure}\n\nWe assume training data $D = \\{(b_i,s_i,u_i)\\}_i$ which contains stimuli $s_i$ together with the behaviors $b_i$ which the users $u_i$ exhibited in response to $s_i$. We write $D\\up{u} = \\{(b^u_1, s^u_1), \\ldots (b^u_N, s^u_N)\\}$ for the data points for each user $u$.\n\nThe generative story we use is illustrated in Fig.~\\ref{fig:plates_drawn}; observable variables are shaded gray, unobserved variables and parameters to be set in training are shaded white and externally set hyperparameters have no circle around them. Arrows indicate which variables and parameters influence the probability distribution of other variables.\n\nWe assume that each user belongs to a group $g \\in \\{1,\\ldots,K\\}$, where the number $K$ of groups is fixed beforehand based on, e.g., held out data. 
A group $g$ is assigned to $u$ at random from the distribution\n\\begin{eqnarray} \\label{eq:prior}\n\\condProb{g}{\\ensuremath{\\pi}} = \\displaystyle\n\\frac{\\exp(\\ensuremath{\\pi}_g)}{\\sum_{g'=1}^K \\exp(\\ensuremath{\\pi}_{g'})}\n\\end{eqnarray}\nHere $\\ensuremath{\\pi} \\in \\mathbb{R}^K$ is a vector of weights, which defines how probable each group is a-priori.\n\n\\begin{figure*}\n\t\\begin{eqnarray}\n\t\tP(D;\\theta) = & \\displaystyle \\left(\\prod_{u \\in U} \\sum_{g =1}^K P(g|\\ensuremath{\\pi}) \\cdot \\prod_{d \\in D\\up{u}} \\condProb{b_d}{s_d;\\parGroup{g}}\\right) \\cdot \\left(\\mathcal{N}\\left(\\ensuremath{\\pi}|0,\\sigma\\up{\\ensuremath{\\pi}}\\right) \\cdot \\prod_{g=1}^K\\mathcal{N}\\left(\\parAdapt{g}|0,\\sigma\\up{\\rho}\\right) \\right) \\label{eqn::probability} \\\\\n\t\t\\mathcal{L}(\\theta) = & \\displaystyle \\sum_{u \\in U} \\log \\sum_{g = 1}^K P(g|\\ensuremath{\\pi}) \\cdot \\prod_{d \\in D\\up{u}} \\condProb{b_d}{s_d;\\parGroup{g}} \\label{eqn::loglike} \\\\\n\t\t\\mathcal{AL}(\\theta) = & \\displaystyle \\sum_{u \\in U} \\sum_{g = 1}^K \\left( \\condProb{g}{D\\up{u};\\theta\\down{i-1}} \\cdot \\left( \\log P(g|\\ensuremath{\\pi}) + \\sum_{d \\in D_u} \\log \\condProb{b_d}{s_d;\\parGroup{g}}\\right)\\right) \\label{eqn::grad}\n\t\\end{eqnarray}\n\\end{figure*}\n\nWe replace the single parameter vector $\\rho$ of \\eqref{eq:basicModel} with group-specific parameters vectors $\\parAdapt{g}$, thus obtaining a potentially different log-linear model $\\condProb{b}{s; \\parGroup{g}}$ for each group. After assigning a group, our model generates responses $b^u_1,\\ldots,b^u_N$ at random from $\\condProb{b}{s; \\parAdapt{g}}$, based on the group specific parameter vector and the stimuli $s^u_1,\\ldots,s^u_N$. This accounts for the generation of the data.\n\nWe model the parameter vectors $\\ensuremath{\\pi} \\in \\mathbb{R}^K$, and $\\parAdapt{g} \\in \\mathbb{R}^n$ for every $1 \\leq g \\leq K$ as drawn from normal distributions $\\mathcal{N}(0,\\sigma\\up{\\pi})$, and $\\mathcal{N}(0,\\sigma\\up{\\rho})$, which are centered at $0$ with externally given variances and no covariance between parameters. This has the effect of making parameter choices close to zero more probable. Consequently, our models are unlikely to contain large weights for features that only occurred a few times or which are only helpful for a few examples. This should reduce the risk of overfitting the training set.\n\nThe equation for the full probability of the data and a specific parameter setting is given in \\eqref{eqn::probability}. The left bracket contains the likelihood of the data, while the right bracket contains the prior probability of the parameters.\n\n\\subsection{Predicting user behavior}\n\\label{sec:group-posterior}\n\nOnce we have set values $\\theta = (\\ensuremath{\\pi}, \\parAdapt{1}, \\ldots, \\parAdapt{K})$ for all the parameters, we want to predict what behavior $b$ a user $u$ will exhibit in response to a stimulus $s$. If we encounter a completely new user $u$, the prior user group distribution from \\eqref{eq:prior} gives the probability that this user belongs to each group. 
We combine this with the group-specific log-linear behavior models to obtain the distribution:\n\\begin{eqnarray} \\label{eq:group-loglin}\n\\condProb{b}{s; \\theta} = \n \\sum_{g = 1}^K \n \\condProb{b}{s; \\parGroup{g}} \\cdot \\condProb{g}{\\ensuremath{\\pi}}\n \\label{expr::findREUser} \\end{eqnarray}\n\n\\noindent\nThus, we have a group-aware replacement for \\eqref{eq:basicModel}.\n\nFurthermore, in the interactive setting of a dialogue system, we may have multiple opportunities to interact with the same user $u$. We can then develop a more precise estimate of $u$'s group based on their responses to previous stimuli. Say that we have made the previous observations $D\\up{u} = \\{\\tuple{s_1,b_1},\\dots,\\tuple{s_N,b_N}\\}$ for user $u$. Then we can use Bayes' theorem to calculate a \\emph{posterior} estimate for $u$'s group membership:\n\\begin{eqnarray} \\label{eq:posterior}\n\\condProb{g}{D\\up{u}; \\theta}\n\\propto\n\\condProb{D\\up{u}}{\\parGroup{g}} \\cdot \\condProb{g}{\\ensuremath{\\pi}}\n\\end{eqnarray}\n\nThis posterior balances whether a group is likely in general against whether members of that group behave as $u$ does. We can use $P_u(g) = \\condProb{g}{D\\up{u}; \\theta}$ as our new estimate for the group membership probabilities for $u$ and replace \\eqref{eq:group-loglin} with:\n\\begin{eqnarray} \\label{eq:posterior-loglin}\n\\condProb{b}{s, D\\up{u}; \\theta} = \n \\sum_{g = 1}^K \\condProb{b}{s; \\parGroup{g}} \\cdot P_u(g)\n \\end{eqnarray}\n\n\\noindent for the next interaction with $u$.\n\nAn NLG system can therefore adapt to each new user over time. Before the first interaction with $u$, it has no specific information about $u$ and models $u$'s behavior based on \\eqref{eq:group-loglin}. As the system interacts with $u$ repeatedly, it collects observations $D\\up{u}$ about $u$'s behavior. This allows it to calculate an increasingly accurate posterior $P_u(g) = \\condProb{g}{D\\up{u}; \\theta}$ of $u$'s group membership, and thus generate utterances which are more and more suitable to $u$ using \\eqref{eq:posterior-loglin}.\n\n\\section{Log-linear models for NLG in dialog} \\label{sec:basicModel}\n\nWe start with a basic model of the way in which people produce and comprehend language. In order to generalize over production and comprehension, we will simply say that a human language user exhibits a certain \\emph{behavior} $b$ among a range of possible behaviors, in response to a \\emph{stimulus} $s$. The behavior of a speaker is the utterance $b$ they produce in order to achieve a communicative goal $s$; the behavior of a listener is the meaning $b$ which they assign to the utterance $s$ they hear.\n\nGiven this terminology, we define a basic log-linear model~\\cite{BergerPP96} of language use as follows:\n\\begin{eqnarray} \\label{eq:basicModel}\n\t\\condProb{b}{s;\\rho} = \\displaystyle \\frac{\\exp(\\rho \\cdot \\phi(b,s))}{\\sum_{b'}{\\exp(\\rho \\cdot \\phi(b', s))}}\n\\end{eqnarray}\n\n\\noindent\nwhere $\\rho$ is a real-valued parameter vector of length $n$\nand $\\phi(b,s)$ is a vector of real-valued \\textit{feature functions} $f_1,...,f_n$ over behaviors and stimuli. The parameters can be trained by maximum-likelihood estimation from a corpus of observations $(b,s)$. In addition to maximum-likelihood training it is possible to include some prior probability distribution, which expresses our belief about the probability of any parameter vector and which is generally used for regularization. 
The latter case is referred to as \\emph{a posteriori} training, which selects the value of $\\rho$ that maximizes the product of the parameter probability and the probability of the data. \n\n\nIn this paper, we focus on the use of such models in the context of the NLG module of a dialogue system, and more specifically on the generation of referring expressions (REs). Using \\eqref{eq:basicModel} as a \\emph{comprehension} model, \\newcite{EngonopoulosVTK13} developed an RE generation model in which the stimulus $s=(r,c)$ consists of an RE $r$ and a visual context $c$ of the GIVE Challenge \\cite{StrDenGarGarKolThe11}, as illustrated in Fig.~\\ref{fig:GIVE}. The behavior is the object $b$ in the visual scene to which the user will resolve the RE. Thus for instance, when we consider the RE $r=$``the blue button'' in the context of Fig.~\\ref{fig:GIVE}, the log-linear model may assign a higher probability to the button on the right than to the one in the background. \\newcite{EngonopoulosK14} develop an algorithm for generating the RE $r$ which maximizes $\\condProb{b^*}{s;\\rho}$, where $b^*$ is the intended\nreferent in this setting.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.75\\columnwidth]{screen1_wt}\n\\caption{A visual scene and a system-generated instruction from the GIVE challenge.} \n\\label{fig:GIVE}\n\\end{figure}\n\nConversely, log-linear models can also be used to directly capture how a human speaker would refer to an object in a given scene. In this case, the stimulus $s = (a,c)$ consists of the target object $a$ and the visual context $c$, and the behavior $b$ is the RE. We follow \\newcite{ferreira14:_refer} in training individual models for the different attributes which can be used in the RE (e.g., that $a$ is a button; that it is blue; that the RE contains a binary relation such as ``to the right of''), such that we can simply represent $b$ as a binary choice $b \\in \\{1,-1\\}$ between whether a particular attribute should be used in the RE or not. We can then implement an analog of Ferreira's model in terms of \\eqref{eq:basicModel} by using feature functions $\\phi(b,a,c) = b \\cdot \\phi'(a,c)$, where $\\phi'(a,c)$ corresponds to their \\textit{context} features, which do not capture any speaker-specific information. \n\n\n\n\n\\section{Conclusion}\n\nWe have presented a probabilistic model for NLG which predicts\nthe behavior of individual users of a dialog system by dynamically\nassigning them to user groups, which were discovered\nduring training\\footnote{Our code and data is available in \\url{https:\/\/bit.ly\/2jIu1Vm}}. \nWe showed for two separate NLG-related tasks,\nRE production and RE comprehension, how our model, \nafter being trained with data that is not annotated with user groups,\ncan quickly adapt to unseen users as it gets more observations \nfrom them in the course of a dialog and makes increasingly \naccurate predictions about their behavior.\n\nAlthough in this work we apply our model to tasks related to NLG,\nnothing hinges on this choice; it can also be applied to any other \ndialog-related prediction task where user variation plays a role. \nIn the future, we will also try to apply the basic principles of our \nuser group approach to more sophisticated underlying models, such \nas neural networks. \n\\section{Evaluation} \\label{sec:evaluation}\n\nOur model can be used in any component of a dialog system for which a prediction\nof the user's behavior is needed. 
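To make this concrete, the sketch below shows one way in which the prediction and adaptation loop of Section~\\ref{sec:augmentation} could be implemented. The feature function \\texttt{phi} and the candidate-behavior generator \\texttt{candidates} are placeholders for the task-specific feature sets and behavior spaces described below, not the exact code used in our experiments.

\\begin{verbatim}
import numpy as np

def softmax(x):
    x = x - np.max(x)
    e = np.exp(x)
    return e / np.sum(e)

class GroupModel:
    # pi: group weight vector (length K); rhos: one weight vector per group
    # phi(b, s): feature vector; candidates(s): finite list of possible behaviors
    def __init__(self, pi, rhos, phi, candidates):
        self.pi = np.asarray(pi, dtype=float)
        self.rhos = np.asarray(rhos, dtype=float)
        self.phi = phi
        self.candidates = candidates

    def behavior_probs(self, s, rho):
        # log-linear model P(b | s; rho), one entry per candidate behavior
        feats = np.array([self.phi(b, s) for b in self.candidates(s)])
        return softmax(feats @ rho)

    def group_posterior(self, history):
        # P(g | D_u) proportional to P(g | pi) * prod_d P(b_d | s_d; rho_g)
        log_post = np.log(softmax(self.pi))
        for g, rho in enumerate(self.rhos):
            for b, s in history:
                cands = self.candidates(s)
                log_post[g] += np.log(self.behavior_probs(s, rho)[cands.index(b)])
        return softmax(log_post)

    def predict(self, s, history=()):
        # mixture over groups, weighted by prior or posterior group membership
        p_g = self.group_posterior(list(history))
        probs = sum(p_g[g] * self.behavior_probs(s, rho)
                    for g, rho in enumerate(self.rhos))
        return dict(zip(self.candidates(s), probs))
\\end{verbatim}

For a previously unseen user the history is empty and \\texttt{predict} reduces to the prior mixture of eq.~\\eqref{eq:group-loglin}; after every system-user exchange the observed pair $(b,s)$ is appended to the history, so the posterior computed in \\texttt{group\\_posterior} sharpens and the prediction approaches the group-specific model, as in eq.~\\eqref{eq:posterior-loglin}.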
In this work, we evaluate it in two\nNLG-related prediction tasks: RE production and RE comprehension. In both cases\nwe evaluate the ability of our model to predict the user's behavior given a\nstimulus. We expect our user-group model to gradually improve its prediction accuracy\ncompared to a generic baseline without user groups as it sees more observations\nfrom a given user.\n\nIn all experiments described below we set the prior variances \n$\\sigma_{\\gamma}=1.0$ and $\\sigma_{\\pi}=0.3$ after trying out values between\n0.1 and 10 on the training data of the comprehension experiment.\n\n\\subsection{RE production}\\label{sub:production}\n\n\\paragraph{Task} The task of RE generation can be split in two\nsteps: \\emph{attribute selection}, the selection of the visual attributes\nto be used in the RE such as color, size, relation to other objects and\n\\emph{surface realization}, the generation of a full natural language expression. \nWe focus here on attribute selection: given a visual scene and a target object, \nwe want to predict the set of attributes of the target object that a human\nspeaker would use in order to describe it. Here we treat attribute selection in\nterms of individual classification decisions on whether to use each attribute,\nas described in Section~\\ref{sec:basicModel}.\nMore specifically, we focus on predicting whether the speaker will use a \\emph{spatial\nrelation} to another object (``landmark''). Our motivation for choosing this attribute \nstems from the fact that previous authors ~\\cite{viethen2008use,ferreira14:_refer}\nhave found substantial variation between different users with respect to their preference\ntowards using spatial relations.\n\n\n\\paragraph{Data} We use the GRE3D3 dataset of human-produced\nREs~\\cite{viethen2010speaker}, which contains 630 descriptions for 10 scenes\ncollected from 63 users, each describing the same target object in each scene.\n$35\\%$ of the descriptions in this corpus use a spatial relation. An example of\nsuch a scene can be seen in Fig.~\\ref{fig:gre3d}.\n\n\\begin{figure} \n\\centering\n\\includegraphics[width=0.6\\columnwidth]{gre3d} \n\\caption{A sample scene with a human-produced RE from the GRE3D3 dataset.} \n\\label{fig:gre3d} \n\\end{figure}\n\n\\paragraph{Models}\n\nWe use two baselines for comparison: \n\n\\begin{itemize}[leftmargin=0pt]\n\\item[] \\emph{Basic}: The state-of-the-art model on this task with this dataset, \nunder the assumption that users are seen in training, is\npresented in \\newcite{ferreira14:_refer}. They define context features such as\ntype of relation between the target object and its landmark, number of object of\nthe same color or size, etc., then train an SVM classifier to predict the use of each\nattribute. We recast their model in terms of a log-linear model with the same features,\nto make it fit with the setup of Section~\\ref{sec:basicModel}.\n\n\\item[] \\emph{Basic++}: \\newcite{ferreira14:_refer} also take speaker features into account.\nWe do not use speaker identity and the speaker's attribute frequency vector, \nbecause we only evaluate on unseen users. \nWe do use their other speaker features (age, gender), \ntogether with \\emph{Basic}'s context features; \nthis gives us a strong baseline which is aware of manually\nannotated user group characteristics.\n\\end{itemize}\n\nWe compare these baselines to our \\emph{Group} model \nfor values of $K$ between 1 and 10, using the exact same features as \n\\emph{Basic}. 
We do not use the speaker features of \\emph{Basic++}, \nbecause we do not want to rely on manually annotated groups. \nNote that our results are not directly comparable with those \nof \\newcite{ferreira14:_refer}, because of a different training-test split:\ntheir model requires having seen speakers in training, \nwhile we explicitly want to test our model's ability to generalize to unseen users.\n\n\\paragraph{Experimental setup} We evaluate using cross-validation, splitting the\nfolds so that all speakers we see in testing are previously unseen\nin training. We use 9 folds in order to have folds of the same size (each\ncontaining 70 descriptions coming from 7 speakers). At each iteration we train\non 8 folds and test on the 9th. At test time, we process each test instance\niteratively: we first predict for each instance whether the user $u$ would use a\nspatial relation or not and test our prediction; we then add the actual\nobservation from the corpus to the set $D\\up{u}$ of observations for this particular\nuser, in order to update our estimate about their group membership.\n\n\\paragraph{Results}\n\nFigure~\\ref{fig:gre3d3-f1} shows the test F1-score (micro-averaged over all\nfolds) as we increase the number of groups, compared to the baselines. \nFor our \\emph{Group} models, these are averaged over all interactions with the user.\nOur model gets F1-scores between $0.69$ and $0.76$ for all values of $K>1$, \noutperforming both \\emph{Basic} ($0.22$) and \\emph{Basic++} ($0.23$).\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1.0\\columnwidth]{gre3d3-f1}\n\\vspace{-1cm}\n\\caption{F1 scores on test data for values of $K$ between $1$ and $10$ in the production experiment.}\n\\label{fig:gre3d3-f1} \\end{figure}\n\n\nIn order to take a closer look at our model's behavior, we also show the\naccuracy of our model as it observes more instances at test time. We compare the\nmodel with $K=3$ groups against the two baselines.\nFigure~\\ref{fig:gre3d3-time} shows that the group model's F1-score increases dramatically after\nthe first two observations and then stays high throughout the test phase, \nalways outperforming both baselines by at least 0.37 F1-score points after the \nfirst observation. The baseline models of course are not expected to improve \nwith time; fluctuations are due to differences between the visual scenes.\nIn the same figure, we plot the evolution of the entropy of the group model's posterior\ndistribution over the groups (see (\\ref{eq:posterior})). As expected, the model is \nhighly uncertain at the beginning of the test phase about which group the user \nbelongs to, then gets more and more certain as the set $D\\up{u}$ of observations \nfrom that user grows. \n\n\n\\begin{figure}\n\n\\centering\n\n\\includegraphics[width=1.0\\columnwidth]{gre3d3-time} \n\\vspace{-1cm}\n\\caption{F1-score evolution with increasing number of observations from the user in the production experiment.} \n\\label{fig:gre3d3-time} \n\\end{figure}\n\n\n\\subsection{RE comprehension}\\label{sub:comprehension}\n \n\\paragraph{Task}\nOur next task is to predict the referent to which a user will resolve an RE in\nthe context of a visual scene. Our model is given a stimulus $s=(r,\nc)$ consisting of an instruction containing an RE $r$ and a visual context\n$c$ and outputs a probability distribution over all possible referents $b$. 
Such\na model can be used by a probabilistic RE generator to select an RE which is\nhighly likely to be correctly understood by the user or predict potential\nmisunderstandings (see Section~\\ref{sec:basicModel}). \n\n\\paragraph{Data}\nWe use the GIVE-2.5 corpus for training and the GIVE-2 corpus for \ntesting our model (the same used by \\newcite{EngonopoulosVTK13}). These contain recorded\nobservations of dialog systems giving instructions to users who play a game in a\n3D environment. Each instruction contains an RE $r$, which is recorded in the data\ntogether with the visual context $c$ at the time the instruction was given. The object\n$b$ which the user understood as the referent of the RE is inferred by the immediately subsequent\naction of the user. In total, we extracted 2927 observations by 403 users from GIVE-2.5 and \n5074 observations by 563 users from GIVE-2. \n\n\\paragraph{Experimental setup} \nWe follow the training method described in\nSection~\\ref{sec:basicModel}. \nAt test time, we present the observations from each user in the order they occur \nin the test data; for each stimulus, we ask our models to predict the object $b$ which the user \nunderstood to be the referent of the RE, and compare with the recorded observation.\nWe subsequently add the recorded observation to the dataset for the user and continue.\n\n\\paragraph{Models}\n\nAs a baseline, we use the \\emph{Basic} model described in Section\n\\ref{sec:basicModel}, with the features of the ``semantic'' model \nof \\newcite{EngonopoulosVTK13}. Those features capture\ninformation about the objects in the visual scene (e.g. salience) \nand some basic semantic properties of the RE (e.g. color, position). \nWe use those features for our \\emph{Group} model as well, and evaluate for \n$K$ between 1 and 10. \n\n\n\\paragraph{Results on GIVE data}\n\n\\emph{Basic} had a test accuracy of 72.70\\%, which was almost identical\nto the accuracy of our best \\emph{Group} model for $K=6$ (72.78\\%).\nThis indicates that our group model does not differentiate between users. \nIndeed, after training, the 6-group model assigns\n$81\\%$ prior probability to one of the groups, and effectively gets stuck with this\nassignment while testing; the mean entropy of the posterior group distribution only falls from \nan initial 1.1 to 0.7 after 10 observations.\n\nWe speculate that the reason behind this is that the\nfeatures we use are not sensitive enough to capture the differences between the users in\nthis data. Since our model relies completely on observable behavior, \nit also relies on the ability of the features to make relevant distinctions between users.\n\n\\paragraph{Results on synthetic data}\n\nIn order to test this hypothesis, we made a synthetic dataset based on the GIVE datasets \nwith 1000 instances from 100 users, in the following way:\nfor each user, we randomly selected 10 scenes from GIVE-2, and replaced the target\nthe user selected, so that half of the users always select the target \nwith the highest visual salience, and the other half always select the one with the lowest. \nOur aim was to test whether our model is capable of identifying groups when they are\nclearly present in the data and exhibit differences which our\nfeatures are able to capture.\n\nWe evaluated the same models in a 2-fold cross-validation. \nFigure~\\ref{fig:synth-acc} shows\nthe prediction accuracy for \\emph{Basic} and the \\emph{Group} models for $K$ from 1 to 10. 
\nAll models for $K>1$ clearly outperform the baseline model: the 2-group model gets\n$62.3\\%$ vs $28.6\\%$ averaged over\nall test examples, while adding more \nthan two groups does not further improve the accuracy. \nWe also show in Figure~\\ref{fig:synth-time} the evolution of the accuracy as \n$D\\up{u}$ grows: the \\emph{Group} model with $K=2$ reaches a 64\\% testing accuracy \nafter seeing two observations from the same user. In the same figure,\nthe entropy of the posterior distribution over groups (see production experiment)\nfalls towards zero as $D\\up{u}$ grows.\nThese results show that our model is capable of correctly assigning a user to the group \nthey belong to, once the features are adequate for distinguishing between different \nuser behaviors.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\columnwidth]{synth-acc} \n\\vspace{-1cm}\n\\caption{Prediction accuracies in the comprehension experiment with synthetic data.} \n\\label{fig:synth-acc} \n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\columnwidth]{synth-time} \n\\vspace{-1cm}\n\\caption{Accuracy evolution with increasing number of observations from the user in the comprehension experiment with synthetic data.} \n\\label{fig:synth-time} \n\\end{figure}\n\n\n\n\\subsection{Discussion}\n\n\nOur model was shown to be successful in discovering groups \nof users with respect to their behavior, within datasets which present\ndiscernible user variation. In particular, if all listeners are influenced in a similar way\n by e.g. the visual salience of an object, then the group model cannot \nlearn different weights for the visual salience feature; if this happens \nfor all available features, there are effectively no groups for our model to discover.\n\nOnce the groups have been discovered, \nour model can then very quickly distinguish between them at test time. \nThis is reflected in the steep performance improvement even after the first\nuser observation in both the real data experiment in~\\ref{sub:production}\nand the synthetic data experiment in~\\ref{sub:comprehension}.\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction} \\label{sec:introduction}\n\nPeople vary widely both in their linguistic preferences when producing\nlanguage and in their ability to understand specific\nnatural-language expressions, depending on what they know about the\ndomain, their age and cognitive capacity, and many other factors. It\nhas long been recognized that effective NLG systems should \ntherefore \\emph{adapt} to the current user, in\norder to generate language which works well for them. \nThis adaptation needs to address all levels of the NLG pipeline,\nincluding discourse planning \\cite{paris88:_tailor}, sentence planning\n\\cite{walker07:_indiv_domain_adapt_senten_plann_dialog}, and RE generation\n\\cite{janarthanam14:_adapt_gener_dialog_system_using}, and depends on\nmany features of the user, including level of expertise and language\nproficiency, age, and gender.\n\nExisting techniques for adapting the output of an NLG system have shortcomings \nwhich limit their practical usefulness. Some systems need user-specific information \nin training \\cite{ferreira14:_refer} and therefore cannot generalize to unseen \nusers. 
Other systems assume that each user in the training data \nis annotated with their group, which allows them to learn a model \nfrom the data of each group.\nHowever, hand-designed user groups may not reflect the true variability \nof the data, and may therefore inhibit the system's ability to flexibly adapt to new users.\n\nIn this paper, we present a user adaptation model for NLG systems\nwhich induces user groups from training data in which these groups\nwere not annotated. At training time, we probabilistically assign users to groups and learn the language preferences for each group. At evaluation time, we assume that \nour system has a chance to interact with each new user repeatedly \n-- e.g., in the context of a dialogue system. It will then calculate \nan increasingly accurate estimate of the user's group membership\n based on observable behavior, and use it to generate \nutterances that are suitable for the user's true group.\n\nWe evaluate our model on two tasks involving the generation of referring \nexpressions (REs). First, we predict the use of spatial relations in\n humanlike REs in the GRE3D domain~\\cite{viethen2010speaker}\n using a log-linear production model in the spirit of \\newcite{ferreira14:_refer}. \nSecond, we predict the comprehension of generated REs, in a synthetic dataset\nbased on data from the GIVE Challenge domain \\cite{StrDenGarGarKolThe11} \nwith the log-linear comprehension model of \\newcite{EngonopoulosVTK13}.\nIn both cases, we show that our model discovers user groups in the training \ndata and infers the group of unseen users with high confidence after only a few \ninteractions during testing. In the GRE3D domain, our system outperformed a strong baseline \nwhich used demographic information for the users. \n\n\n\n\n\n\n\n\\section{Related Work} \\label{sec:related}\n\nDifferences between individual users have a\nsubstantial impact on language comprehension. Factors that play a role\ninclude level of expertise and spatial ability \\cite{benyon1993developing}; \nage \\cite{häuser17:_age}; gender \\cite{navi-12}; or\nlanguage proficiency \\cite{KolStrGarByrCasDalMooObe10}.\n\nIndividual differences are also reflected in the way people \nproduce language. \\newcite{viethen2008use} present a corpus study \nof human-produced REs (GRE3D3) for simple visual scenes, where they \nnote two clearly distinguishable groups of speakers, one that always uses\na spatial relation and one that never does.\n\\newcite{ferreira14:_refer} show that \na model using speaker-specific information\noutperforms a generic model in predicting the attributes used by a speaker\nwhen producing an RE. However, \ntheir system needs to have seen the particular speaker in training,\nwhile our system can dynamically adapt to unseen users.\n\\newcite{ferreira2017improving} also demonstrate\nthat splitting speakers into predefined groups and training each group separately\nimproves the human likeness of REs compared to training individual user models.\n\n\nThe ability to adapt to the comprehension and production preferences of a user\nis especially important in the context of a dialog system, where \nthere are multiple chances of interacting with the same user. 
\nSome methods adapt to dialog system users \nby explicitly modeling the users' knowledge state.\nAn early example is \\newcite{paris88:_tailor}; she selects a discourse plan for\na user, depending on their level of domain knowledge ranging between novice\nand expert, but provides no mechanism for inferring the group to\nwhich the user belongs. \\newcite{rosenblum93:_partic_instr_dialog} try to\ninfer what knowledge a user possesses during dialogue, based on the\nquestions they ask. \\newcite{janarthanam14:_adapt_gener_dialog_system_using}\n adapt to unseen users by using reinforcement learning with simulated users\nto make a system able to adjust to the level of the user's knowledge. \nThey use five predefined groups from which they generate the simulated users' behavior, \nbut do not assign real users to these groups. \nOur system makes no assumptions about the user's knowledge and does not need to \ntrain with simulated users, or use any kind of information-seeking moves;\nwe instead rely on the groups that\nare discovered in training and dynamically assign new, unseen users,\nbased only on their observable behavior in the dialog. \n\n\nAnother example of a user-adapting dialog component is SPaRKy\n \\cite{walker07:_indiv_domain_adapt_senten_plann_dialog},\na trainable sentence planner that can tailor sentence plans to\nindividual users' preferences. This requires training on separate data\n for each user; in contrast to this, we leverage the similarities between users \nand can take advantage of the full training data.\n\n\n\n\n\n\n\n\n\n\\section{Task} \\label{sec:task}\n\n\n\\section{Training} \\label{sec::training}\n\nSo far we have not discussed how to find settings for the parameters $\\theta = \\ensuremath{\\pi},\\parAdapt{1},\\dots,\\parAdapt{K}$, which define our probability model. The key challenge for training is the fact that we want to be able to train while treating the assignment of users to groups as unobserved.\n\nWe will use a maximum \\emph{a posteriori} estimate for $\\theta$, i.e., the setting which maximizes \\eqref{eqn::probability} when $D$ is our training set. We will first discuss how to pick parameters to maximize only the left part of \\eqref{eqn::probability}, i.e., the data likelihood, since this is the part that involves unobserved variables. We will then discuss handling the parameter prior in section \\ref{sec::prior}.\n\n\\subsection{Expectation Maximization}\n\nGradient descent based methods \\cite{NocedalW06} exist for finding the parameter settings which maximize the likelihood for log-linear models, under the conditions that all relevant variables are observed in the training data. If group assignments were given, gradient computations, and therefore gradient based maximization, would be straightforward for our model. One algorithm specifically designed to solve maximization problems with unknown variables by reducing them to the case where all variables are observed, is the expectation maximization (EM) algorithm \\cite{NealH99}. Instead of maximizing the data likelihood from \\eqref{eqn::probability} directly, EM equivalently maximizes the log-likelihood, given in \\eqref{eqn::loglike}. It helps us deal with unobserved variables by introducing ``pseudo-observations'' based on the expected frequency of the unobserved variables.\n\n\nEM is an iterative algorithm which produces a sequence of parameter settings $\\theta\\down{1},\\dots,\\theta\\down{n}$. Each will achieve a larger value for (\\ref{eqn::loglike}). 
Each new setting is generated in two steps: (1) a lower bound on the log-likelihood is generated and (2) the new parameter setting is found by optimizing this lower bound. To find the lower bound we compute the probability for every possible value the unobserved variables could have had, based on the observed variables and the parameter setting $\\theta\\down{i-1}$ from the last iteration step. Then the lower bound essentially assumes that each assignment was seen with a frequency equal to these probabilities; these are the ``pseudo-observations''.\n\n\nIn our model the unobserved variables are the assignments of users to groups. The probability of seeing each user $u$ assigned to a group, given all the data $D\\up{u}$ and the model parameters from the last iteration $\\theta\\down{i-1}$, is simply the posterior group membership probability $\\condProb{g}{D\\up{u};\\theta\\down{i-1}}$. The lower bound is then given by \\eqref{eqn::grad}. This is the sum of the log probabilities of the data points under each group model, weighted by $\\condProb{g}{D\\up{u};\\theta\\down{i-1}}$. We can now use gradient descent techniques to optimize this lower bound.\n\n\\subsubsection{Maximizing the Lower Bound}\n\nTo fully implement EM we need a way to maximize \\eqref{eqn::grad}. This can be achieved with gradient based methods such as L-BFGS \\cite{NocedalW06}. Here the gradient refers to the vector of all partial derivatives of the function with respect to each dimension of $\\theta$. We therefore need to calculate these partial derivatives.\n\nThere are existing implementations of the gradient computations for our base model, such as in \\newcite{EngonopoulosVTK13}. The gradient of \\eqref{eqn::grad} with respect to each $\\parAdapt{g}$ is simply the gradient for the base model on each datapoint $d$ weighted by $\\condProb{g}{D\\up{u};\\theta\\down{i-1}}$ if $d \\in D\\up{u}$, i.e., by the probability that the user $u$ from which the datapoint originates belongs to group $g$. We can therefore compute the gradients needed for each $\\parAdapt{g}$ by using implementations developed for the base model.\n\nWe also need gradients for the parameters in $\\ensuremath{\\pi}$, which are only used in our extended model. We can use the rules for computing derivatives to find, for each dimension $g$:\n\n\\begin{equation*}\n\\grad{\\mathcal{UL}(\\theta)}{\\ensuremath{\\pi}_g} = \\displaystyle \\sum_{u \\in U} \\left( P_u(g) - \\frac{\\exp\\left({\\ensuremath{\\pi}}_g\\right)}{\\sum_{g' = 1}^K \\exp\\left({\\ensuremath{\\pi}}_{g'}\\right)} \\right)\n\\end{equation*}\n\nwhere $P_u(g) = \\condProb{g}{D\\up{u};\\theta\\down{i-1}}$. With these gradients we can use L-BFGS to maximize the lower bound and implement the EM iteration.\n\n\\subsection{Handling the Parameter Prior} \\label{sec::prior}\n\nSo far we have discussed maximization only for the likelihood without accounting for the prior probabilities for every parameter. To obtain our full training objective we add the log of the right hand side of \\eqref{eqn::probability}:\n\n\\begin{equation}\n\\log \\left(\\mathcal{N}\\left(\\ensuremath{\\pi}|0,\\sigma\\up{\\ensuremath{\\pi}}\\right) \\cdot \\prod_{g=1}^K\\mathcal{N}\\left(\\parAdapt{g}|0,\\sigma\\up{\\rho}\\right) \\right) \\nonumber\n\\end{equation}\n\ni.e., the parameter prior, to \\eqref{eqn::loglike} and \\eqref{eqn::grad}. 
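\n\nTo make this concrete, the following is a minimal numerical sketch of the E-step and of the $\\ensuremath{\\pi}$-gradient including its Gaussian prior. It is illustrative only: it assumes plain dense NumPy feature vectors, and all function and variable names are placeholders, not taken from the implementation described here. The same posterior computation is also what is needed at test time to infer the group of a new user from $D\\up{u}$.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef logsumexp(x):\n    m = x.max()\n    return m + np.log(np.exp(x - m).sum())\n\ndef softmax(x):\n    return np.exp(x - logsumexp(x))\n\ndef log_p_datapoint(rho_g, feats):\n    # log P(d | g) for one observation: log-linear choice over the\n    # candidate objects; feats[i] is the feature vector of candidate i,\n    # candidate 0 is the object the user actually chose\n    scores = feats @ rho_g\n    return scores[0] - logsumexp(scores)\n\ndef e_step(pi, rho, users):\n    # users[u] is the list of candidate-feature matrices observed for user u;\n    # returns the responsibilities P(g | D_u; theta_{i-1}) for every user\n    K = len(pi)\n    resp = np.zeros((len(users), K))\n    for u, data in enumerate(users):\n        log_post = np.log(softmax(pi)) + np.array(\n            [sum(log_p_datapoint(rho[g], d) for d in data) for g in range(K)])\n        resp[u] = softmax(log_post)\n    return resp\n\ndef grad_pi(pi, resp, sigma_pi):\n    # gradient of the lower bound plus Gaussian prior with respect to pi;\n    # the rho-gradients are the responsibility-weighted base-model gradients\n    n_users = resp.shape[0]\n    return resp.sum(axis=0) - n_users * softmax(pi) - pi / sigma_pi**2\n\\end{verbatim}\n\nIn the M-step these gradients can be handed to an off-the-shelf L-BFGS routine (e.g., scipy.optimize.minimize with method=\"L-BFGS-B\") to update $\\theta$.\n\n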
The gradient contribution from these priors can be computed with standard techniques.\n\n\\subsection{Training Iteration}\n\nWe can now implement an EM loop, which maximizes \\eqref{eqn::probability} as follows: we randomly pick an initial value $\\theta\\down{0}$ for all parameters. Then we repeatedly compute the $\\condProb{g}{D\\up{u};\\theta\\down{i-1}}$ values and maximize the lower bound using L-BFGS to find $\\theta\\down{i}$. This EM iteration is guaranteed to eventually converge towards a local optimum of our objective function. Once the change in the objective falls below a pre-defined threshold, we keep the final $\\theta$ setting.\n\nFor our implementation we make a small improvement to the approach: L-BFGS is itself an iterative algorithm and instead of running it until convergence every time we need to find a new $\\theta\\down{i}$, we only let it take a few steps. Even if we just took a single L-BFGS step in each iteration, we would still obtain a correct algorithm \\cite{NealH99} and this has the advantage that we do not spend time trying to find a $\\theta\\down{i}$ which is a good fit for the likely poor group assignments $\\condProb{g}{D\\up{u};\\theta\\down{i-1}}$ we obtain from early parameter estimates.\n\n\\section*{Conflict of interest}\nThe authors declare that they have no conflict of interest.\n\n\n\\bibliographystyle{spbasic_FS} \n\n\n\\section{Introduction}\\label{intro}\nOur Sun is an active star and as such undergoes cyclic variations, which are related to more or less frequently occurring activity phenomena observed at the solar surface. Highly energetic activity phenomena, produced due to changes in the Sun's magnetic field, propagate through our solar system where they interact with planetary atmospheres. At Earth, these interactions are well documented and known to cause geomagnetic disturbances having consequences for modern society. The influence of the Sun on our solar system is termed \\textit{Space Weather}. Therefore, solar activity needs to be permanently monitored from space and ground in order to assess times of increased influence. International space agencies created programs, such as ESA \\textit{Space Situational Awareness} (SSA) or NASA \\textit{Living with a star} (LWS) (cf. Figure~\\ref{fig:esa}), to enhance Space Weather awareness and with that to support and fund, on a long-term basis, fundamental research and the development of Space Weather forecasting tools. \n\nThis review article focuses on the following Space Weather phenomena: \n\\begin{enumerate}\n \\item Coronal mass ejections\n \\item Flares\n \\item Solar Energetic Particles \n \\item Solar wind stream interaction regions \n\\end{enumerate}\n\nTo properly describe these phenomena from the solar perspective, a number of processes need to be understood, such as active region and magnetic field evolution, energy build-up and release, as well as the global structuring of inner heliospheric space. Space Weather is a topic of broad interest and sustains an exciting and thriving interdisciplinary research community\\footnote{For example, the SCOSTEP effort that resulted in excellent publications via CAWSES \\url{http:\/\/www.terrapub.co.jp\/onlineproceedings\/ste\/CAWSES2007\/index.html}, the VarSITI programs \\cite[e.g., ISEST][see \\url{http:\/\/www.varsiti.org}]{zhang18} or the international Space Weather Action Teams, iSWAT, where interdisciplinary groups gather together under \\url{https:\/\/www.iswat-cospar.org}.}. 
With that it fosters information and knowledge exchange between international research groups on solar-, heliospheric- and geo-space (Sun-to-impact disciplines) in order to enhance the scientific knowledge needed to improve existing and develop new models for Space Weather forecasting. \n\n\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{intro1.pdf}\n\\caption{Solar activity phenomena (depicted here as a CME) affect Earth and near-Earth space and therefore need to be permanently monitored. Space Weather forecasting is of global interest and funded by international agencies. In the near future, satellites will observe the Sun and its dynamic phenomena from different viewpoints, such as a combined L1 and L5 position. Image courtesy: ESA}\n\\label{fig:esa} \n\\end{figure}\n\n\nCoronal mass ejections (CMEs) are a rather recent phenomenon, discovered just about 50 years ago, but are by now known as the main drivers of the most severe Space Weather disturbances \\citep[see e.g.,][]{howard06,Gopalswamy16_CME_history}. They are huge structures that manifest themselves within some tens of minutes as clouds of magnetized plasma impulsively expelled from the Sun and subsequently propagating into interplanetary space \\citep[see e.g.,][]{forbes00}. CMEs arise from usually complex and closed magnetic field structures in equilibrium that are disrupted due to some instability causing their eruption \\cite[e.g., emerging magnetic flux, remote reconfiguration of large scale magnetic field, or field rotation; see e.g.,][]{torok13,schmieder15,green18}. Instabilities in the solar magnetic field and their occurrence frequency are modulated by the 11-year activity cycle of the Sun. The strongest CME events may propagate over the 1AU distance within a day \\citep[e.g.,][]{cliver90,gopalswamy05_extreme,liu14_nature}. Weaker events, on average, propagate the same distance in up to 4 days \\citep[see e.g.,][]{shanmugaraju14}. CMEs may be linked to large geomagnetic disturbances, due to shock compression and reconnection with the Earth's magnetic field. They may lead to ionospheric and geomagnetically-induced currents \\citep[see e.g.,][]{pirjola05}. Usually the most severe geomagnetic storms are caused by fast and massive CMEs, erupting from the central region of the visible solar disk and carrying a strong southward magnetic field component that reconnects with the Earth's magnetic field \\citep[see e.g.,][]{pulkkinen07}. Consequently, CMEs are a major topic of solar and Space Weather research. \n\n\nThe power for making a CME energetic (i.e., being fast and wide) undoubtedly stems from the free magnetic energy which is released as a consequence of magnetic reconnection processes. Magnetic reconnection impulsively drives plasma and accelerates particles to high energies, causing on the one hand flare emission, which is observed in the solar atmosphere, and on the other hand solar energetic particles (SEPs), which are measured in interplanetary space. Energetic particles from strong SEP events may reach almost the speed of light and travel the 1AU distance within about 10 minutes. High energy SEP events (about 1 GeV) may lead to enhanced proton fluxes even at ground level. Hence, the most intense events can endanger life and technology on Earth and in space. Further consequences of CMEs and SEPs are disruptions of satellite operations, radio communications and ground power systems \\citep[e.g.,][]{bothmer07}. 
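\n\nThe very different warning times involved here follow from simple kinematics. As a rough, purely illustrative back-of-the-envelope sketch (straight-line paths are assumed; in reality SEPs travel along the longer Parker-spiral field line):\n\n\\begin{verbatim}\nimport math\n\nAU_KM  = 1.496e8      # 1 AU [km]\nC_KM_S = 2.998e5      # speed of light [km/s]\nMP_GEV = 0.938        # proton rest energy [GeV]\n\n# electromagnetic flare emission travels at c\nt_light = AU_KM / C_KM_S / 60.0                  # ~8.3 min\n\n# proton with ~1 GeV kinetic energy\ngamma = 1.0 + 1.0 / MP_GEV\nbeta  = math.sqrt(1.0 - 1.0 / gamma**2)          # ~0.87\nt_proton = AU_KM / (beta * C_KM_S) / 60.0        # ~10 min\n\n# fast CME travelling at a constant 1000 km/s\nt_cme = AU_KM / 1000.0 / 3600.0                  # ~42 h\n\nprint(t_light, t_proton, t_cme)\n\\end{verbatim}\n\n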
Unlike CMEs, which have lead times of some tens of hours between their first observational signatures and their impact at Earth, flares and SEP events occur and impact almost simultaneously \\citep[see e.g.,][]{lugaz17,cairns18,Malandraki18_book}. Accordingly, to predict the occurrence of flares and SEPs one needs to predict the instabilities leading to the onset of magnetic reconnection processes, one of the big challenges in solar physics. \n\n\n\nThe continuous solar wind flow in a quiet state (usually termed background solar wind) is represented by an alternation of slow and fast solar wind streams that interact and form stream interaction regions (SIRs). If steady in their existence and persisting over more than one solar rotation, they are called co-rotating interaction regions (CIRs). During times of low solar activity, Space Weather is dominated by CIR-induced storms \\citep{tsurutani06}. Different flow speeds of the background solar wind also change the propagation behavior of CMEs in interplanetary space. This has consequences for the CME transit time and impact speed at planetary atmospheres \\citep[drag force; see][]{gopalswamy00_accelCME,vrsnak01_solph,Cargill2004OnEjections,vrsnak06}. Moreover, CMEs disrupt the continuous outflow of the solar wind and reconfigure the magnetic field on large spatial and short temporal scales, altering the background solar wind. For Space Weather and CME modeling\/forecasting purposes, these ever-changing conditions in interplanetary space are very challenging to tackle. \n\n \\begin{figure}\n \\includegraphics[width=\\textwidth]{intro2.pdf}\n\\caption{Current and past space missions carrying instruments for gathering remote sensing image data and in-situ plasma and magnetic field measurements. The majority of spacecraft is located in the ecliptic plane orbiting planets or at the Lagrangian point L1. The coronagraph field of view of the SoHO\/LASCO instrument C3 covers 30 solar radii. The background white-light image is taken from STEREO\/HI1+2 data covering about 90 degrees in the ecliptic. Not to scale.\n}\n\\label{fig:insitu} \n\\end{figure}\n\nFor comprehensive investigations a rich source of observational data is currently available from many different instruments located at multiple viewpoints and different radial distances (see Figure~\\ref{fig:insitu}). In Earth orbit, current operational missions are, e.g., GOES (Geostationary Operational Environmental Satellite), SDO \\citep[SDO:][]{Pesnell2012TheSDO}, and Proba-2 \\citep{santandrea13}. Located at L1, 1.5 million km upstream of Earth, there are the Solar and Heliospheric Observatory \\citep[SoHO:][]{Domingo95_SOHO}, the Advanced Composition Explorer \\cite[ACE:][]{Stone98_ACE}, the WIND spacecraft \\cite{ogilvie95}, and DSCOVR \\cite[][]{burt12}. At $\\sim$1AU with variable longitudinal angles from Earth, there is the Solar TErrestrial RElations Observatory \\cite[STEREO:][]{kaiser08_STEREO} consisting of two identical spacecraft named STEREO-Ahead and STEREO-Behind (signal lost at the end of 2014). The combination of remote sensing image data and in-situ measurements is found to be optimal for enhancing our knowledge about the physics of Space Weather phenomena. For better understanding large eruptive activity phenomena, multi-viewpoint and multi-wavelength data are exploited (e.g., combined L1, STEREO as well as ground-based instruments). 
The various available data from spacecraft orbiting around planets (e.g., VEX (2006--2014), MESSENGER (2011--2015), MAVEN (2014--), BepiColombo (2018--)) also enable us to analyze the evolution of Space Weather phenomena as a function of distance and longitude. \n\n\n \n\\begin{figure}\n \\includegraphics[width=\\textwidth]{intro3.pdf}\n\\caption{Each image shown here is a snapshot of the Sun taken every spring with the SOHO Extreme ultraviolet Imaging Telescope (EIT) in the 284\\AA\\ wavelength range. It shows the variations of solar activity in terms of the increasing and decreasing number of bright active regions visible in the corona. Courtesy: NASA\/ESA}\n\\label{fig:soho22} \n\\end{figure}\n\n \n \nA flagship of international collaboration and a boost for Space Weather research is SoHO, which has now achieved 25 years in space. Figure~\\ref{fig:soho22} shows SOHO\/EIT \\citep{delaboudiniere95} EUV image data covering the variations of the solar corona over a full magnetic solar cycle (Hale cycle). Long-term observations are of utmost importance for monitoring and learning about the interaction processes of solar activity phenomena with Earth and other planets as well as for improving our capabilities in Space Weather forecasting. The most recent and unprecedented missions are Parker Solar Probe, launched in August 2018 \\citep{fox16}, and Solar Orbiter, launched in February 2020 \\citep{mueller20}, both having on-board imaging and in-situ facilities with the goal of approaching the Sun closer than ever before ($\\sim$0.05AU and $\\sim$0.3AU, respectively) and of investigating the Sun out of the ecliptic ($\\sim$30 degrees). To support space missions and to provide valuable complementary data, we must not forget the importance of ground-based observatories that observe the Sun over broad wavelength and energy ranges, allied in international networks such as the Global high-resolution H$\\alpha$ network\\footnote{\\url{http:\/\/bbso.njit.edu\/Research\/Halpha\/}}, the Global Oscillation Network Group\\footnote{\\url{https:\/\/gong.nso.edu}}, the database for high-resolution Neutron Monitor measurements\\footnote{E.g., \\url{http:\/\/www01.nmdb.eu\/nest\/search.php}}, muon telescope networks, or the Worldwide Interplanetary Scintillation Stations Network \\footnote{\\url{http:\/\/helios.mexart.unam.mx\/pruebas\/wipss\/index.html}}. \n\n\nBased on the research results derived from these observational data, a plethora of models for predicting Space Weather and its geomagnetic effects has been developed over the past years. The permanent monitoring of the Sun and the provision of data in almost real-time made it possible to apply those results and even to install operational services that produce forecasts mostly in an automatic manner (e.g., facilitated by ESA\/SSA\\footnote{\\url{http:\/\/swe.ssa.esa.int}}; NASA\/CCMC\\footnote{\\url{https:\/\/ccmc.gsfc.nasa.gov}}; NOAA\/SWPC\\footnote{\\url{https:\/\/www.swpc.noaa.gov}}). However, the operational services also clearly demonstrated the limitations in the forecasting accuracy as on average the errors are large and get worse with increasing solar activity. This is mainly due to the large uncertainties coming from the model input, namely observational parameters at or close to the Sun. It also reveals the complexity of the interplay between the different driving agents of Space Weather, which makes it difficult to fully capture the underlying physics and to improve the models. Reliable Space Weather forecasting is still in its infancy. 
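\n\nTo give a feeling for the kind of model behind such services, the drag-based description of interplanetary CME propagation referred to above can be reduced to a few lines. The sketch below is deliberately simplified (constant drag parameter and ambient wind speed, purely radial motion, illustrative input values) and is not one of the operational codes:\n\n\\begin{verbatim}\nRS_KM, AU_KM = 6.96e5, 1.496e8\n\ndef dbm_transit(r0_rs=20.0, v0=800.0, w=400.0, gamma=0.2e-7):\n    # drag-based model: dv/dt = -gamma * (v - w) * |v - w|\n    # gamma [1/km], speeds [km/s]; Euler integration from 20 Rs to 1 AU\n    r, v, t, dt = r0_rs * RS_KM, v0, 0.0, 60.0\n    while r < AU_KM:\n        a = -gamma * (v - w) * abs(v - w)\n        v += a * dt\n        r += v * dt\n        t += dt\n    return t / 3600.0, v     # transit time [h], arrival speed [km/s]\n\nprint(dbm_transit())         # roughly 2.5 to 3 days, arriving at ~550 km/s\n\\end{verbatim}\n\nEven such a two-parameter model reproduces the typical 1--4 day transit times quoted above; the practical difficulty lies in constraining its inputs (initial speed and direction, drag parameter, ambient solar wind) from near-Sun observations, which is where much of the forecast uncertainty originates.\n\n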
\n\n\n\\section{Space Weather}\n\nFrom the historical perspective, the so-called ``Carrington-event'' from September 1, 1859 is the reference event for referring to extreme Space Weather and with that the beginning of Space Weather research \\citep[see also][]{Schwenn06}. At that time only optical observations of the solar surface were performed and the observed emitted radiation in white-light for that event showed impressively the vast amount of energy that was distributed to the dense lower atmospheric layers of the Sun where it heated the photosphere. At Earth, the associated geomagnetic effects were observed in terms of aurora occurring from high to low latitudes (e.g., Honolulu at 20 degrees northern latitude) and ground-induced currents in telegraph wires \\citep[see][]{Eastwood17}. The associated SEP event is thought to be about twice as large as the huge SEP events from July 1959, November 1960, or August 1972 \\citep{cliver13}. Only several years after the Carrington event, the usage of spectroscopes enabled to regularly observe prominence eruptions revealing the dynamic changes of the solar corona and material ejections with speeds exceeding hundreds of km\/s \\citep{tandberg95}. The continuous monitoring of the Sun was intensified in the 1940's, when solar observations in radio, white-light and in the H$\\alpha$ wavelength range were performed. At that time also galactic cosmic rays were studied and found that they are anti-correlated with solar activity \\citep[so-called Forbush decrease, measured as sudden drop in the cosmic ray flux due to interplanetary disturbances; see also][]{cane00}. In the early 1960's magnetic structures driving shocks were inferred from observations in the metric radio observations and geomagnetic storm sudden commencements \\citep{gold62,fokker63}. The \\textit{transient events with mass moving through the solar corona and actually leaving the Sun}, i.e., CMEs, that were associated with the prominence\/filament eruptions were discovered only in the early 1970's with the advent of the space era \\citep[see][]{Tousey71,macqueen74}. Recent reviews on the history of prominences and their role in Space Weather can be found in \\cite[][and references therein]{vial15,Gopalswamy16_CME_history}. While most of the extreme space weather events happen during the solar cycle maximum phase, occasionally strong geoeffective events may occur close to the solar cycle minima and also during weak solar activity, provided there are appropriate source regions on the Sun \\citep[see also e.g.,][]{Vennerstrom16,hayakawa20}. For more details about the solar cycle see the Living Reviews by \\cite{hathaway10}.\n\n\nNowadays, a wealth of space and ground-based instruments are available, delivering valuable observational data, as well as modeling facilities. This enables to study in rich detail the manifold processes related to Space Weather events and to better understand the physics behind. To forecast the geomagnetic effects of an impacting disturbance at Earth (e.g., by the Dst\\footnote{The disturbance storm time (Dst) index monitors variations in the Earth's equatorial ring current.} or Kp index\\footnote{The planetary K index (Kp) monitors variations in the horizontal component of the Earth's magnetic field.}), the most common parameters we need to know in advance - and various combinations of these - are the amplitude\/orientation and variation of the north-south component of the interplanetary magnetic field ($B_{\\rm z}$), speed ($v$), and density ($n$). 
Especially, the electric field $vB_{\\rm s}$ ($B_{\\rm s}$ = $B_z<0$) is found to show a high correlation with the Dst storm index \\citep[see e.g.,][]{baker81,wu02,gopalswamy08}. For details on the geomagnetic effects of Space Weather phenomena as described here, see the Living Reviews by \\cite{pulkkinen07}.\n\nThe Space Weather ``chain of action'' from the solar perspective is described best by the recent example of the multiple Space Weather events that occurred in September 2017 (see Section~\\ref{sec:active}). But before that, we elaborate the physical basis. \n\n\n\\section{Magnetic reconnection: common ground}\nThe commonality that unites everything and yet produces such different dynamic phenomena is \\textit{magnetic reconnection} and the release of free magnetic energy. This leads to particle acceleration, heating, waves, etc. and to a restructuring of the (local) magnetic field in the corona by newly connecting different magnetic regimes and with that changing magnetic pressure gradients. Especially the latter shows to affect the solar corona globally.\n\n\n\\begin{figure*}\n \\includegraphics[width=1.8\\textwidth,angle=90]{intro4.pdf}\n\\caption{Left: Stochastic acceleration model for solar flares. Magnetic field lines (green) and turbulent plasma or plasma waves (red circles) generated during magnetic reconnection. Blue arrows and areas mark accelerated particles impinging on the lower denser chromosphere where they produce Bremsstrahlung and on the upside may escape to interplanetary space where they are detected as SEPs \\citep[adapted from][]{Petrosian04,Vlahos19_SEP}. Middle: CME-flux rope configuration in the classical scenario (CSHKP) covering also the post eruptive arcade usually observed in SXR and EUV wavelength range \\citep[adapted from][]{Lin00}; Right: CME flux rope acting as driver of a bow shock (black arc) may accelerate SEPs (black dots) in the corona or heliosphere via diffusive shock acceleration \\citep[adapted from][]{Mikic06}. }\n\\label{fig:flare_CME_SEP} \n\\end{figure*}\n\nIn order to derive a complete picture about Space Weather, we first need to understand the interrelation between these many individual processes starting at the Sun. This covers a cascade of small and large scale phenomena varying over different time scales. The primary source of Space Weather producing phenomena, i.e., CMEs-flares-SEPs (note that in the following eruptive phenomena are considered and not stealth CMEs), are active regions representing the centers of strong magnetic field and energy \\citep[more details on the evolution of active regions, see the Living Reviews by][]{van_driel15,toriumi19}. However, in detail the energy build-up and release processes are not well understood. The key-driver certainly is the magnetic field configuration below the visible surface (photosphere), that cannot be directly observed and characterized for giving reliable predictions of its status and further development. The lack of magnetic field information is also given in the upper atmospheric layers. There are currently no instruments enabling measurements of the magnetic field in the corona, hence, we need to rely on models simulating the coronal and, further out, interplanetary magnetic field \\citep[see, e.g., the Living Reviews by][on coronal and solar wind MHD modeling ]{gombosi18}. While active regions are characterized by closed magnetic field, coronal holes cover mainly open magnetic field from which high speed solar wind streams emerge. 
They structure interplanetary space and set the coupling processes between continuous solar wind flow and transient events. To better understand the propagation behavior of transient events, we also need to study the evolution and characteristics of the solar wind flow, and hence, the interplay between open and closed magnetic field. \n\n\nFigure~\\ref{fig:flare_CME_SEP} sketches three different time steps in the evolution of an eruptive flare event, causing a CME and SEPs, as a consequence of magnetic reconnection \\citep[see][]{Petrosian04,Lin00,Mikic06}. The left panel of Figure~\\ref{fig:flare_CME_SEP} focuses on the early evolution stage of the eruptive event, introducing stochastic acceleration processes causing high energetic particles to precipitate along magnetic field lines towards and away from the Sun. Flare emission is observed on the solar surface due to the acceleration of particles towards the Sun. Particles that escape into interplanetary space along the newly opened magnetic field, produce SEPs. The middle panel of Figure~\\ref{fig:flare_CME_SEP} shows the creation of the CME body, i.e., the production of a closed magnetic field structure (flux rope), as well as the related post-eruptive arcade which is formed below. The exact acceleration mechanism(s) of SEPs is still an open issue, hence, cartoons as shown here usually present both possible driving agents, the flare and the CME shock. To complete the picture for a flare-CME-SEP event, the right panel of Figure~\\ref{fig:flare_CME_SEP} depicts the interplanetary magnetic field and its behavior which differs from the typical Parker spiral orientation due to the propagating CME shock component causing SEP acceleration in interplanetary space. The deviation of the interplanetary magnetic field from the nominal Parker spiral is an important issue when dealing with magnetic connectivity for studying SEPs and propagation behavior of CMEs. \n\nIn the following, we will discuss in more detail the characteristics of the different manifestations occurring in an eruptive flare event.\\\\\n\\section{Solar Flares} \\label{sec:flares}\n\n\\subsection{Eruptive capability of an active region}\n\nActive regions may be classified either by the morphology of an active region using the McIntosh classification \\citep{mcintosh90} or the magnetic structure using Hale's\/K\\\"unzel's classification \\citep{kunzel60}. Due to the emergence of magnetic flux the degree of complexity in the magnetic field of an active region grows, which increases the likeliness to create strong flares and CMEs \\citep[e.g.,][]{sammis00,toriumi17}. The probability that an X-class flare is related to a CME is found to be larger than 80\\% \\citep{yashiro06}, however, there are well observed exceptions reported. So-called confined flares are neither accompanied by a CME nor a filament eruption \\citep[e.g.,][]{moore01}. Their special magnetic field configuration allows particle acceleration (observed as flare), but they do not escape into interplanetary space and, hence, do not produce SEPs \\citep{gopalswamy09}. Therefore, confined flares may produce strong X-ray emission but, presumably due to a strong bipolar overlying coronal magnetic field configuration, are not related to the opening of the large-scale magnetic field \\citep[e.g.,][]{wang07,sun15,Thalmann15}. 
The electromagnetic radiation of confined flares can still instantaneously cause sudden changes in the ionospheric electron density profile (disturbing radio wave communication or navigation), also known as \\textit{solar flare effect} or geomagnetic crochets \\citep{campbell03} but occurring rather rarely. However, confined flares are also potential candidates for false Space Weather alerts in terms of an erroneous forecast of geomagnetic effects due to the magnetic ejecta that would have arrived tens of hours later at Earth. \n\n\nTherefore, the manifestation of the eruptive capability of an active region is one of the prime targets for prospective forecasting of SEPs and CMEs. For example, the length of the main polarity inversion line of an active region or the magnetic shear and its sigmoidal morphology, is obtained to be highly indicative of the potential to open large scale magnetic field and to produce CMEs and SEPs \\citep[e.g.,][]{Canfield99}. Studies also showed that active regions, for which the polarity inversion line quickly changes with height into a potential field configuration, are more favorable for producing non-eruptive events \\citep{Baumgartner18}. Likewise, the decay index of the horizontal magnetic field (ratio of the magnetic flux in the lower corona to that in the higher corona) is found to be lower for failed eruptions compared to that for full eruptions \\citep[cf.,][]{torok05,fan07,guo10,olmedo10}. \n\n\nFor more details on the issue of flare-productive active regions I refer to the Living reviews by \\cite{toriumi19}. See also \\cite{forbes00}, \\cite{webb12}, \\cite{parenti14}, or \\cite{chen17} for a more theoretical approach on that issue.\n\n\\subsection{Eruptive solar flares: general characteristics}\\label{sec:flare}\n\n\\begin{figure*}\n \\includegraphics[width=0.6\\textwidth]{flare1.pdf}\n\\caption{Flare-CME-SEP relation in time. The onset of the solar flare is indicated by the vertical red line. The grey\nshaded area marks the time difference between flare start and SEP flux increase for MeV energies. Taken from \\cite{Anastasiadis19} who adapted it from \\cite{miro03}.}\n\\label{fig:timing} \n\\end{figure*}\n\n\nFlares are observed to release a huge amount of energy from 10$^{19}$ up to 10$^{32}$ erg over a timescale of hours\\footnote{An automatically updated list of flares is available under \\url{http:\/\/www.lmsal.com\/solarsoft\/latest_events\/} or \\url{https:\/\/www.solarmonitor.org}}. With the advent of modern ground-based and space-borne instruments, our small optical window was massively enlarged and it is now well-known that this energy is radiated over the entire electromagnetic spectrum from decameter radio waves to gamma-rays beyond 1 GeV. Figure~\\ref{fig:timing} depicts the temporal relation between flare emission, observed in different wavelength ranges, CME kinematics and SEP flux profiles in the GeV and MeV energy range. The flare activity profile consists of a so-called pre-flare phase, showing thermal emission in SXR and EUV, as well as H$\\alpha$ kernel brightenings. If related to a filament eruption, this phase partly coincides with the slow rise phase of the filament\\footnote{Filament detection and eruption catalogues can be found e.g., under \\url{http:\/\/cesar.kso.ac.at\/sn_iv\/filaments.php} or \\url{http:\/\/aia.cfa.harvard.edu\/filament\/}}. 
This is followed by the impulsive flare phase during which most of the energy is released and non-thermal emission in terms of hard X-ray (HXR) footpoints appears due to particles accelerating out of the localized reconnection area and bombarding the denser chromosphere where they emit Bremsstrahlung \\citep[for a review on solar flare observations see e.g.,][]{fletcher11}. At this point also the CME body forms as consequence of the closing of the magnetic field lines in the upper part of the reconnection area revealing a flux rope structure (note that the most compelling argument for an already existing flux rope is actually a filament). As the flare emission increases also the SEP flux in the GeV energy range starts to rise. After the flare reaches a maximum in intensity, the decay phase is observed during which the intensity level goes back to the background level from before the flare start. The exact timing of the rise and decay phase is dependent on the energy release and the energy range in which the flare is observed which is known as the so-called Neupert effect \\citep[the HXR flux rise phase time profile corresponds to the derivative of the SXR flux time profile; see][]{neupert68}. The last phase may have a duration of several hours or longer. During that phase also post-eruptive arcades (or loops) start to form, that may still grow over 2--20 hours. The growth of the post-eruptive arcade is hinting towards an ongoing reconnecting process, which is not energetic enough to produce a significant emission in EUV or SXR \\citep[see e.g.,][]{tripathi04}. For more details on the global properties of solar flares I refer to the review by \\cite{hudson11}.\n\nFigure~\\ref{fig:moestl08} shows the temporal evolution of a flare and erupting filament observed in H$\\alpha$ and the associated CME observed in white-light coronagraph image data. The event is classified in the emitted SXR flux as GOES M3.9 flare (corresponding to the measured power of $3.6\\times10^{-5}$ W\/m$^{2}$) which occurred on November 18, 2003 in a magnetically complex $\\beta\\gamma$ active region. The associated CME caused two days later one of the strongest geomagnetic storms of solar cycle 23 having a minimum Dst value of $-$472~nT \\citep{gopalswamy05_nov}. Inspecting the time stamps on the image data of that event, about one hour after the appearance of the flare signatures, the CME became visible in the coronagraph. The filament started to rise some tens of minutes before the flare emission occurred.\n\n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{flare2.pdf}\n\\caption{Global flare evolution and relation to CME from the November 18, 2003 event. Left panels: H$\\alpha$ filtergrams from the Kanzelh{\\\"o}he Solar Observatory (Austria). The associated erupting filament is indicated by arrows. Right panels: Temporal evolution of the CME in coronagraph images from SOHO\/LASCO. Taken from \\cite{mostle08}.}\n\\label{fig:moestl08} \n\\end{figure*}\n\nThe orientation of the magnetic structure, especially of the $B{\\rm z}$ component, of an ICME is key to forecast its geoeffectiveness and poses the Holy Grail of Space Weather research. Knowing the flux rope orientation already at the Sun could provide information on the impact of CMEs early in advance, hence, as soon as they erupt or even before. While the handedness of flux ropes can be well observed from in-situ measurements \\citep{Bothmer98,mulligan98}, on the Sun observational proxies need to be used. 
Figure~\\ref{fig:palmerio17} shows several surface signatures from which the magnetic helicity (sense of twist of the flux rope: right-handed or left-handed) can be inferred. Typically, in EUV observations these are sigmoidal structures (S- or reverse S-shaped) or post-eruptive arcades (skewness of EUV loops and polarity of the underlying magnetic field); in H$\\alpha$, the fine structures of filaments (orientations of barbs) are used, or statistical relations like the \\textit{hemispheric helicity rule} \\cite[see][]{wang13}. However, strong coronal channeling, latitudinal and also longitudinal deflection and\/or rotation, which the magnetic component of the CME undergoes as it evolves through the low solar corona, may change those parameters as shown in various studies by e.g., \\citet{shen11,Gui11,bosman12,Panasenco13,WangY14,kay15,moestl15}, or \\citet{Heinemann19}. Recent approaches in ICME $B_{\\rm z}$ forecasting can be found in, e.g., \\cite{savani15}, \\cite{palmerio17}, or \\cite{kay17}.\n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{flare3.pdf}\n\\caption{(a) SDO\/HMI data from June 14, 2012 showing the magnetic tongues of the erupting active region revealing a positive chirality. (b) Forward-S sigmoidal structure from the coronal loops observed by SDO\/AIA 131\\AA, indicating a right-handed flux rope. (c) SDO\/HMI magnetogram showing the approximated polarity inversion line (red line). (d) Base-difference SDO\/AIA 131\\AA~image overlaid with the HMI magnetogram contours saturated at $\\pm$200~G (blue = negative polarity; red = positive polarity). The dimming regions indicating the flux rope footpoints are marked by green circles. Panels (a--d) are adapted from \\cite{palmerio18}. (e) The cartoon shows the handedness inferred from the magnetic field and sigmoidal structures or orientation of post-eruptive loops.}\n\\label{fig:palmerio17} \n\\end{figure*}\n\n\n\n\nFor more details about the energetics and dynamics of solar flares I refer to the Living Reviews by \\cite{benz17} and for the magnetohydrodynamic processes in active regions responsible for producing a flare to the Living Reviews by \\cite{Shibata11}. \n\n\n\n\\section{Coronal mass ejections (CMEs)}\\label{subsec:cme}\n\n\\subsection{General characteristics}\n \nCMEs are optically thin large-scale objects that quickly expand and are traditionally observed in white light as enhanced intensity structures. The intensity increase is due to photospheric light that is Thomson scattered off the electrons forming the CME body and integrated over the line-of-sight \\citep{Hundhausen93}. Due to strong projection effects, their apparent morphology greatly depends on the viewpoint, which makes CMEs rather tricky objects to measure \\citep[see e.g.,][]{burkepile04,cremades04}.\n\nIn coronagraph observations, CMEs appear with teardrop-like shapes that are characterized by multiple structures. Figure~\\ref{fig:CME_struct} shows SOHO\/LASCO \\citep{Brueckner1995TheLASCO} coronagraph white-light images of two CMEs having different propagation directions. For the CME that leaves the Sun at a rather perpendicular angle to the observer (left panel of Figure~\\ref{fig:CME_struct}), the various CME structures are well visible. In general, we distinguish between the shock (yellow arrow) and CME body (green arrow) that are followed by some cavity created by the expanding magnetic flux rope (red arrow) and an increased brightness structure (orange arrow). 
Partly these structures are detected also from in-situ measurements for the interplanetary counterparts of CMEs (ICMEs; see Section~\\ref{sec:sub3-5}). The increased brightness structure consists of prominence material \\citep{Vourlidas13} or is suggested to appear due to a brightness increase of the two overlapping CME flanks \\citep{howard17}. The sheath region behind the shock has less clear signatures in coronagraph images taken close to the Sun as it is generated later when the solar wind plasma gets piled-up in interplanetary space \\citep[see e.g.,][]{kilpua17,salman20}. For CMEs propagating in the line-of-sight towards or away from the observer (right panel of Figure~\\ref{fig:CME_struct}), the different structures are less well visible. As these CMEs are launched close to the central meridian of the observed disk, they most severely suffer from projection effects. Energetic ones are frequently observed as so-called \\textit{halo} CMEs, revealing extensive white-light signatures made of compressed plasma material surrounding the occulting disk of a coronagraph. For halo CMEs, evidence that the CME is actually moving towards the observer is given from the associated activities observed on the solar disk (such as filament eruptions, flare emission, dimming regions, or coronal wave signatures). Highly relevant for Space Weather, halo CMEs are of special interest and are diversely studied mostly by using single spacecraft data from the coronagraphs aboard SoHO.\n\n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{cme1.pdf}\n\\caption{LASCO CME excess mass images showing the expanding shock wave front (yellow arrow) and the CME leading edge density enhancement (green arrow) for two different events. For the CME propagating rather in the plane of sky (left panel), typical structures such as the cavity due to the expanding magnetic ejecta} (red arrow) followed by some intensity enhancement (orange arrow) can be observed, that is less well visible for the halo CME (right panel). The projected LASCO CME speeds are given in the legend \\citep[adapted from][]{Vourlidas13}.\n\\label{fig:CME_struct} \n\\end{figure*}\n\n\nUp to the distance of about 30 solar radii (LASCO\/C3 field of view), statistical studies showed that CMEs undergo several phases in their dynamics. Before the actual launch a slow rising phase occurs (initiation phase), continued by the acceleration phase over which a rapid increase in speed is observed in the inner corona, that is followed by a rather smooth propagation phase as the CME leaves the Sun \\citep{Zhang2006AEjections}. On average, over the coronagraphic field of view, CME fronts reveal radial speeds in the range of 300--500~km\/s with maximum values observed up to 3000~km\/s, accelerations of the order of 0.1--10~km\/s$^2$, angular widths of about 30--65 degrees and masses of $\\sim$10$^{14}$--10$^{16}$\\,g \\citep[e.g.,][]{vourlidas10,Lamy19}. The ratio in density between the CME body and surrounding solar wind decreases from $\\sim$11 at a distance of 15 solar radii to $\\sim$6 at 30 solar radii \\citep{temmer21}. However, CMEs vary in their occurrence rate as well as in their characteristics over different solar cycles. While flare rates and their properties have not changed much over the past solar cycles, the CME properties for solar cycle 24 are significantly different as given in recent statistics \\citep{Lamy19,dagnew20Cycle}. CMEs were found to be more numerous and wide compared to solar cycle 23. 
Close to the Sun, the CME expansion is driven by the increased magnetic pressure inside the flux rope, while further out they most probably expand due to the decrease of the solar-wind dynamic pressure over distance \\citep{lugaz20}. Therefore, the increased width for CMEs of cycle 24 may be explained by the severe drop ($\\sim$50\\% ) in the total (magnetic and plasma) heliospheric pressure \\citep[see e.g.,][]{mccomas13,gopalswamy14,gopalswamy15,dagnew20}. Interestingly, also the maximum sunspot relative number in cycle 24 reached only 65\\% of that from cycle 23\\footnote{\\url{http:\/\/sidc.be\/silso\/cyclesminmax}}. The different expansion behaviors have consequences also for Space Weather effects in terms of their abilities in driving shocks \\citep[see e.g.,][]{lugaz17_radial}. \n \n\n\n\n\n\\subsection{CME early evolution}\\label{sec:sub2-5}\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{cme2.pdf}\n\\caption{STEREO-B observations of the CME from August 24, 2014. The images show combined EUVI (304\\AA) and COR1 image data. Filament plasma material is ejected into space forming the bright CME core following the cavity. Plasma that is lacking sufficient kinetic energy to escape from the Sun's gravity, falls back onto the solar surface. Credit: STEREO\/NASA. The movie is online available.\n}\n\\label{fig:euv_cor1} \n\\end{figure*}\n\n\nBesides the traditional observations in white-light images, also EUV or SXR imagery reveal CME signatures, presumably due to compression and heating that makes it visible in filtergrams sensible for high temperatures \\citep[see e.g.,][]{glesener13}. Satellite missions that carry EUV instruments having large field of views can be effectively used with combined white-light coronagraph data to track CME structures for deriving the kinematical profile over their early evolution covering the CME main acceleration phase. The SECCHI instrument suite \\citep{Howard08_secchi} aboard STEREO provides EUV and white-light data that seamlessly overlap\\footnote{SoHO EIT and C1 also provided that possibility but C1 was lost in June 1998 due to spacecraft failure. For a couple of events the usage of combined EIT-C1 data could be shown \\citep[see][]{Gopalswamy00_earlyCME,Zhang01,cliver04b}.}. as shown in Figure~\\ref{fig:euv_cor1}. For such studies one needs to keep in mind that the observational data image different physical quantities (density and temperature in EUV, and density in white-light), hence, dark and bright features in both image data do not necessarily match. \n\nFrom combined high temporal resolution EUV and white-light data a more detailed understanding about the energy budget (see also Section~\\ref{sec:budget}) and relation between flares, filaments and CMEs is revealed providing relevant information for SEP acceleration and generation of radio type II bursts. It is found that the thermal flare emission observed in SXR and the CME speed profile show similar behavior in timing \\citep{Zhang01,Zhang04,chen03,Maricic07}. For strong eruptive events an almost synchronized behavior between flare HXR emission and CME acceleration is obtained through a feedback relation \\citep{Temmer08,temmer10}. The CME acceleration is found to peak already as low as about 0.5 solar radii above the solar surface \\citep[for statistics see][]{Bein2011ImpulsiveCharacteristics}. Figure~\\ref{fig:temmer16} gives the schematic profiles and distances over time between non-thermal (HXR) and thermal (SXR) flare energy release and CME kinematics (acceleration, speed). 
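\n\nBoth sides of such comparisons are, in practice, obtained by numerically differentiating noisy time series: the CME speed and acceleration from tracked height-time points, and the HXR proxy from the time derivative of the SXR light curve (Neupert effect). A minimal sketch with toy input values (real measurements require smoothing or regularized differentiation before such derivatives are meaningful):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef cme_kinematics(t, h):\n    # speed and acceleration from height-time points via finite differences\n    v = np.gradient(h, t)      # [km/s]\n    a = np.gradient(v, t)      # [km/s^2]\n    return v, a\n\ndef hxr_proxy(t, sxr_flux):\n    # Neupert-effect proxy: positive part of the GOES SXR time derivative\n    return np.clip(np.gradient(sxr_flux, t), 0.0, None)\n\n# toy example: constant 1 km/s^2 acceleration over ten minutes\nt = np.arange(0.0, 600.0, 30.0)       # [s]\nh = 7.0e5 + 0.5 * 1.0 * t**2          # height [km]\nv, a = cme_kinematics(t, h)\nprint(v[-1], a[len(a) // 2])          # speed grows to several hundred km/s, a ~ 1 km/s^2\n\\end{verbatim}\n\n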
The flare-CME feedback loop can be well explained by the CSHKP standard model \\citep{Carmichael64,Sturrock66,Hirayama74,Kopp76} through the magnetic reconnection process underlying both activity phenomena. In a simplistic scenario, we may summarize that magnetic reconnection drives particle acceleration (neglecting details on the actual acceleration process) leading to flare emission and closes magnetic field increasing the magnetic pressure inside the presumable CME flux rope (neglecting details on the actual magnetic configuration of the active region and surrounding). For strong flares that are related to CMEs of high acceleration values, the available free magnetic energy might be larger. This occurs preferably for CMEs initiated at lower heights where the magnetic field is stronger. With that, particles get accelerated to larger energies, hence, producing stronger flares, and more poloidal flux can be added per unit time, hence, generating a stronger expansion of the flux rope and a faster CME eruption. This is supported by theoretical investigations on the feedback process, covering magnetic reconnection with the ambient coronal magnetic field \\citep[reconnective instability][]{welsch18}. More details are found in, e.g., \\cite{chen03,Vrsnak2007AccelerationScales,vrsnak08,jang17}. \n\n\nAssociated to the erupting CME, we frequently observe coronal dimming regions that evolve over a few tens of minutes \\citep{1996hudson,webb00}. Core dimming regions are assumed to be located at the anchoring footpoints of the associated magnetic flux rope and reveal the loss of plasma from the corona into the CME structure adding mass to the CME body \\citep[see][]{Temmer17_flar-cme}. Secondary dimming regions most probably refer to mass depletion in the wake of the large-scale magnetic field opening as the CME fully erupts \\citep[for more details on core and secondary dimming regions, see][]{mandrini07}. Recent studies discovered a strong relation between dimming intensity and flare reconnected flux as well CME speed \\citep[e.g.,][]{dissauer18,dissauer19}. Also the final width of the CME can be estimated from the amount of magnetic flux covered by the CME associated post-eruptive flare arcade as the surrounding magnetic field prevents the CME flux rope from further expansion \\citep{moore07}. On the contrary, the CME surrounding shock as well as associated coronal waves on the solar surface, that are ignited by the lateral CME expansion, are freely propagating and are not limited in their spatial extend \\citep[for more details on globally propagating coronal waves I refer to the Living Reviews by][]{Warmuth15}.\n\n\\begin{figure*}\n \\includegraphics[width=1.6\\textwidth, angle=90]{cme3.pdf}\n\\caption{CME-flare relation. a) schematics of the thermal (SXR) and non-thermal (HXR) flare energy release in comparison to the CME kinematical evolution close to the Sun. It is found that CME acceleration and HXR emission as well as the CME speed and SXR emission, respectively, are closely related. Taken from \\cite{Temmer16}. b) observational results for the December 22, 2009 CME event revealing the early evolution from combined EUV and coronagraph data (STEREO-B spacecraft) and GOES SXR flux profile for the related flare and derivative (proxy for HXR emission). 
Taken from \\cite{Bein2012ImpulsiveEruptions}.}\n\\label{fig:temmer16} \n\\end{figure*}\n\n\nTo derive in more detail the temporal linking of flare-CME-SEP events, image data covering large field of views for observing the lower and middle corona is of utmost importance. Figure~\\ref{fig:mid-corona} shows the different field of views of currently available and future EUV instruments to observe and study the middle corona (distance up to about 4 solar radii). The Extreme EUV Imager suite aboard Solar Orbiter works at the 174\\AA~and 304\\AA~EUV passbands \\citep[EUI:][]{Rochus20_SoloEUI}. The EUVI-LGR instrument aboard ESA's \\textit{Lagrange} L5 mission (launch planned for 2027) has an extended field of view to the West limb of the Sun, that is perfectly suited to track the early evolution of Earth directed CMEs from L5 view (60 degrees separation with Earth). We must not forget the capabilities of ground-based coronagraph instruments such as the COSMO K-Cor at the Mauna Loa Solar Observatory in Hawaii (replaced in 2013 the aging MLSO Mk4 K-coronameter\\footnote{Details can be found under: \\url{https:\/\/www2.hao.ucar.edu\/cosmo\/documentation}}) observing a field of view starting as low as 1.15 solar radii, however, quite restricted in observational time compared to satellite data. \n\n\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{cme4.pdf}\n\\caption{EUV image from Proba-2\/SWAP combined with a LASCO\/C2 coronagraph image covering in total a field of view up to $\\sim$4 solar radii. The colored boxes mark the relative nominal field of views of different EUV observing instruments. FSI (Full Sun Imager is part of the EUI suite aboard Solar Orbiter), EUVI-LGR (aboard the planned L5 \\textit{Lagrange} mission), and SoHO\/EIT. STEREO\/EUVI, Proba-2\/SWAP, GOES\/SUVI are instruments with the largest field of view of about 1.7 solar radii. Taken from \\url{http:\/\/middlecorona.com}.\n}\n\\label{fig:mid-corona} \n\\end{figure*}\n\n\nFor more details on CME trigger mechanisms I refer to recent review articles by \\cite{schmieder15} or \\cite{green18}. For a more specific background on CME initiation models, see, e.g., the Living Reviews by \\cite{webb12}.\n\n\n\n\\subsubsection{Shock formation, radio bursts, and relation to SEPs}\\label{shock-form}\nClosely related to studies of the CME early evolution and acceleration profiles, are shock formation processes. To generate a shock wave, a short-duration pulse of pressure is needed. Besides the CME, acting as piston, there is also the possibility that a strong flare energy release initiates a blast wave or simple-wave shock \\citep[e.g.,][]{vrsnak08b}. At which height shocks are formed by an erupting disturbance is important for understanding particle acceleration processes. The acceleration profile derived from tracking the CME frontal part suggests its formation at rather low coronal heights $<$1.5 solar radii. The shock formation height itself is also strongly depending on the plasma environment. From model calculations a local minimum of the Alfv{\\'e}n speed is derived for a distance of about 1.2--1.8 solar radii and a local maximum around 3.8 solar radii from the Sun \\citep{mann99,Gopalswamy01_CME+longwave_typeII,vrsnak02}. Hence, the statistical maximum of CME acceleration profiles is also in accordance with the local minimum of the Alfv{\\'e}n speed. 
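\n\nFor orientation, the Alfv{\\'e}n speed profile follows directly from a coronal magnetic field and density model. The sketch below combines two simple empirical one-dimensional models (the active-region field of Dulk \\& McLean and a one-fold Newkirk density profile); this crude combination gives realistic magnitudes but only a monotonically decreasing profile, and all numbers are purely illustrative:\n\n\\begin{verbatim}\nimport numpy as np\n\nR = np.linspace(1.05, 6.0, 200)      # heliocentric distance [R_sun]\nB = 0.5 * (R - 1.0) ** -1.5          # empirical field above active regions [G]\nn = 4.2e4 * 10.0 ** (4.32 / R)       # one-fold Newkirk density [cm^-3]\nv_a = 2.18e6 * B / np.sqrt(n)        # Alfven speed, hydrogen plasma [km/s]\n\nfor r in (1.2, 2.0, 4.0):\n    i = np.argmin(np.abs(R - r))\n    print(r, round(v_a[i]))          # a few hundred to ~1000 km/s\n\\end{verbatim}\n\nThe local minimum around 1.2--1.8 and the maximum around 3.8 solar radii only emerge in more complete field models that combine active-region and large-scale global contributions, as in the works cited above.\n\n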
The occurrence of such local extrema has major consequences for the formation and development of shock waves in the corona and the near-Sun interplanetary space as well as their ability to accelerate particles. \n\n\nThe most compelling argument for shock formation is the observation of a radio type II burst. In the case of being driven by a CME, they are reported not only to occur at the apex of a CME shock front, but also to originate from the lateral expansion of the CME as observed with LOFAR\\footnote{Low Frequency Array \\citep{haarlem13}. Recent attempts to use LOFAR for Space Weather purposes are reported under \\url{http:\/\/lofar4sw.eu}.} \\citep[e.g.,][]{zucca18}. Due to the large density in the lower coronal heights a large compression appears with a quasi-perpendicular geometry, favoring the shock formation process. In that respect, moving type IV radio bursts might actually represent shock signatures due to CME flank expansion, that can be used as additional diagnostics for studying the lateral evolution of a CME \\citep{morosan20_flank}. The SEP intensity is found to be correlated with the width of a CME, and as such identifies the CME flank region to be an efficient accelerator of particles \\citep[see][]{holman83,mann05,richardson15}. Comparing the CME apex and flanks, the field lines are disturbed at different heights that may lead to different onset times for the acceleration of SEPs (cf., Figure~\\ref{fig:reames09}). The time needed for shock formation also leads to a temporal delay of the onset of SEP events with respect to both, the initial energy release (flare) and the onset of the solar type II radio burst (evidence of shock formation). Hence, the timing is an important factor and has to be taken into account when relating these phenomena to each other.\n\n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{cme5.pdf}\n\\caption{Two CME events and associated coronal waves (June 12 and 13, 2010) are investigated with high cadence SDO EUV 211\\AA~data. Manually tracked positions of the wavefronts are marked by dashed black lines. The connection between coronal surface wave and CME front is nicely observed. The right panels give radio data (Learmonth observatory, Australia) revealing type III and type II bursts (upper and lower frequency band) and measurements from particle detectors at L1 (bottom panel; vertical dashed lines show AIA waves onsets). Adapted from \\cite{Kozarev11}.}\n\\label{fig:kozarev11} \n\\end{figure*}\n\n\nShocks may also be formed at larger distances from the Sun (several tens of solar radii), depending on the acceleration phase duration, the maximum expansion velocity and the width of the CME \\citep{Zic2008CylindricalShocks}. Due to the declining magnetic field with distance \\citep[well defined band-splits in type II bursts can be used to estimate the magnetic field in the corona; see e.g.,][]{vrsnak02}, shocks forming at larger heights are related to softer SEP spectra \\citep[e.g.,][]{gopalswamy17_SEP}. As can be seen, CME acceleration, shock formation height and hardness of SEP spectra is closely connected. Compared to SEPs, which strongly depend on the magnetic connectivity with the observer, type II bursts can be observed without connectivity issues and thus, give additional information about particle acceleration processes driven by CME shocks. In that respect, type II radio bursts may be used for predicting SEPs as well as shock arrival times \\citep[e.g.,][]{gopalswamy08_typeII,cremades15}. 
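Since type II emission occurs near the local electron plasma frequency (or its harmonic), the observed frequency drift can be converted into the density and, with an assumed coronal density model, into the height of the shock. The following minimal Python sketch illustrates this conversion; the one-fold Newkirk-type density model and the example frequency are assumptions for illustration only.\n\\begin{verbatim}\nimport numpy as np\n\ndef density_from_frequency(f_MHz, harmonic=1):\n    # electron density [cm^-3] from f_p[kHz] ~ 8.98 * sqrt(n_e[cm^-3])\n    f_kHz = f_MHz * 1e3 / harmonic\n    return (f_kHz / 8.98) ** 2\n\ndef height_newkirk(n_e_cm3, fold=1.0):\n    # invert n(r) = fold * 4.2e4 * 10**(4.32/r), r in solar radii\n    return 4.32 / np.log10(n_e_cm3 / (fold * 4.2e4))\n\nn = density_from_frequency(100.0)   # e.g., a 100 MHz fundamental band\nprint(n, height_newkirk(n))         # ~1.2e8 cm^-3 at ~1.25 solar radii\n\\end{verbatim}\nWith the measured frequency drift rate, the same conversion yields an estimate of the shock speed.\n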
In combination, these parameters have strong implications for Space Weather impact, revealing the importance of monitoring and studying the early evolution phase of solar eruptive events. More details on SEPs are given in Section \\ref{sec:4}.\n\n\n\nFigure~\\ref{fig:kozarev11} presents a case study about the evolution of a CME front close to the Sun by using high cadence EUV images from SDO for June 12 and 13, 2010. The derived kinematics of the CME reveals a fast acceleration of its frontal part with about 1~km\/s$^2$ over the distance range 1.1--2.0 solar radii. The almost vertical traces in the radio spectra are type III bursts, identified with streams of electrons (radio emission due to particles moving along open magnetic field lines), followed by diagonal structures of a moving type II bursts, identified with shock waves. The onset of the type II burst appears together with the CME shock front, as observed in EUV, with a bit of a delay with respect to the shock formation that occurs close to the maximum CME acceleration. By the time the CME occurs in the LASCO field of view, the CME speed decreased to the sub-Alfv{\\'e}nic regime. The event produced an enhanced proton flux at 1AU. However, the complex magnetic topology related to the active region prevents from making strong conclusions about the possible sites of particle acceleration \\citep[see also][]{Kozarev11,suli11,gopalswamy12}. Definitely, more such detailed case studies combined with improved modeling of the magnetic environment is needed for advancing our understanding in the processes that accelerate particles. \n\n\n\\subsubsection{Stealth CMEs}\nIn contrast to fast and massive CMEs and their related cascade of solar surface signatures, there exist so-called \\textit{stealth CME} events that are most probably caused by some simple (low-energetic) magnetic field reconfiguration in the upper corona releasing magnetic flux ropes of low density that usually do not exceed the solar wind flow speed. Actually, they were recognized already in the mid 1980's and were identified as \\textit{spontaneous CMEs} or \\textit{unassociated CMEs} (meaning no surface signatures) by \\cite{wagner84}. Later studies showed, that they start at very large heights in the corona without noticeable signatures, such as flare emission, filament eruptions, coronal waves, or coronal dimmings \\citep{Robbrecht09}. Stealth CMEs are potential candidates to cause \\textit{problem} storms and missed Space Weather events, as they are hardly recognized in white-light data and due to the lack of observational imprints on the solar disk. In the recent years several studies have been published on this issue discussing those events \\citep[see e.g.,][]{DHuys2014ObservationalSignatures,nitta17,vourlidas18}. \n\n\n\n\n\\subsection{Advantages due to multi-viewpoint observations}\\label{sec:sub1-5}\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{cme6.pdf}\n\\caption{Earth-directed CME from December 12, 2008 as observed from multiple perspectives. STEREO-A (left) and STEREO-B (right) are separated from Earth by an angle of about 45 degrees. The running difference image from LASCO\/C2 (middle panel) observes the CME as weak partial halo event. The inlay in the middle panel gives the spacecraft location (STEREO-A red filled circle; STEREO-B blue filled circle) with respect to Earth (green filled circle) and the CME propagation direction (yellow arrow). 
White arrows in each panel point to roughly similar parts of the CME observed with the different instruments. The closer the CME propagates to the plane-of-sky of the instrument, the higher the intensity in white light. Adapted from \\cite{Byrne10}.}\n\\label{fig:multi_byrne} \n\\end{figure}\n\n\nIn contrast to a flare, which is a rather localized phenomenon, the analyses of CMEs and related coronal waves, which propagate over large areas of the solar surface, as well as of SEPs profit enormously from at least two viewpoints. The twin-spacecraft mission STEREO has provided, since its launch at the end of 2006, unprecedented image data in EUV and white light from multiple perspectives. STEREO consists of two identical spacecraft, STEREO Ahead (A) and Behind (B; signal lost in October 2014), orbiting the Sun at a distance close to that of Earth, with STEREO-A being slightly closer to and STEREO-B slightly further away from the Sun. The separation angle between the two spacecraft increases by about 45 degrees per year\\footnote{Current positions of STEREO and other spacecraft can be found under \\url{https:\/\/stereo-ssc.nascom.nasa.gov\/where.shtml}}. There are four instrument packages mounted on each of the two STEREO spacecraft: SECCHI, comprising EUV and white-light coronagraphs and imagers \\citep{Howard08_secchi}; IMPACT, sampling the 3D distribution of solar wind plasma and magnetic field \\citep{Luhmann08_IMPACT,Acuna08_IMPACT_MAG}; SWAVES, tracking interplanetary radio bursts \\citep{Bougeret08_SWAVES}; and PLASTIC, measuring the properties of the solar wind plasma \\citep{galvin08_PLASTIC}. Conjoined with instruments from the Earth perspective, such as SoHO (1995--), Hinode (2006--) and SDO (2010--), as well as ground based observatories (covering the radio and visual wavelength range, e.g., chromospheric H$\\alpha$ and Ca II lines), the evolution of active regions together with flares, CMEs, and SEPs could be stereoscopically observed for the first time. Unfortunately, a big drawback for multi-viewpoint magnetic field investigations was the lack of magnetographs onboard STEREO \\citep[this might be overcome by the ESA \\textit{Lagrange} L5 mission planned to be launched in 2027; see also e.g.,][]{gopal11_L5,Lavraud2016AScience}. \n\n \nBesides providing more than one vantage point, STEREO carries the heliospheric imager (HI) instruments, enabling seamless observations of the entire Sun-Earth line in white light. They provide a unique long-term, synoptic data-set to be exploited for Space Weather applications. Wide-angle image data allow CMEs to be unambiguously linked to their interplanetary counterparts (ICMEs) as measured in-situ, and the in-situ signatures caused by the different CME structures and orientations to be investigated in detail. More details on ICMEs are given in Section~\\ref{sec:sub3-5}. \n\nFigure~\\ref{fig:multi_byrne} shows an Earth-directed CME observed from multiple perspectives and over a large distance range using STEREO data. From the Earth perspective (shown in the middle panel), the CME is observed as a weak halo event, which makes it almost impossible to reliably determine a propagation direction and its radial speed. From the STEREO perspective, the CME is observed close to the plane of sky of the instruments, lowering the projection effects for deriving its radial kinematics. 
Hence, the multiple viewpoints and homogeneous dataset of STEREO, enable to do 3D reconstructions of solar structures and to investigate projection effects with the attempt of correcting them, or at least limit and assess the uncertainties of the projected measurements. For SEPs, the identical instruments aboard the two spacecraft bring the advantage of having the same energy threshold, allowing systematic studies of SEPs coming from the same active region but related to a different magnetic connectivity and to probe the longitudinal dependencies.\n\n\n\n\n\n\\begin{figure}\n \\includegraphics[width=1.\\textwidth]{cme7.pdf}\n\\caption{CME from March 7, 2011: (a) Excess brightness image from STEREO-A COR2. 3D shock front (green mesh in panel (b)) projected on the image plane is shown with the dotted line. The diamond marks the geometric center of the ellipsoid model projected onto the same plane. (b) Excess brightness in panel (a) with the 3D shock front (green mesh) modeled with the ellipsoid model described in \\cite{kwon17}. (c) Geometric relation among the Sun, shell-like sheath, and line-of-sight. A partial circle around the origin O is the solar disk. A shell-like sheath is represented in gray color. Arrows in black, blue and red are the line-of-sight, the projected shock normal on the image plane, and the actual shock normal in 3D, respectively. Taken from \\cite{kwon18}.}\n\\label{fig:kwon} \n\\end{figure}\n\nMulti-spacecraft views enable to study especially the CME geometry and its substructures in more detail. With that, the different manifestations of shock and driver could be well confirmed and it is now well acknowledged that the outer envelope of the observed CME presents the expanding shock or compressed shell that encompasses the driver \\citep[e.g.,][]{Ciaravella06,Ontiveros09,Vourlidas13}. As the different parts have a different impact on Earth, for forecasting purposes, measurements of the CME's outer front should be clearly specified (e.g., shock front versus magnetic structure). In addition, the long-standing question whether halo CMEs would be different compared to limb events \\citep[see e.g., the Living Reviews by][]{chen11} could be solved. The shock shell of the CME can be presented as sphere-like structure expanding over 360 degrees (see Figure~\\ref{fig:kwon}). It was found that especially strong events (having a large compression) can be observed as halo CME independent of the viewpoint \\citep{kwon15,kwon18}. In that respect STEREO data also showed that the outermost shock component of the CME matches well the solar surface structure of coronal EUV waves \\citep{Kienreich09,Patsourakos09,Veronig2010FirstWave,kwon17}. Therefore, observations of the surface signatures of CME related coronal waves give supportive information about the CME expansion and propagation direction and should be closely monitored for early Space Weather forecasting.\n\n\n\nThe CME speed is actually a mixture of lateral and radial expansion dynamics making it tricky to derive the ``true'' propagation behavior. Multiple viewpoints enable to separately study projected versus deprojected speeds and radial versus lateral expansion behaviors of CMEs. Comparing single and multiple spacecraft data revealed that single viewpoint measurements are definitely valid. However, especially measurements of the CME width (or lateral expansion) and speed for slow CMEs (deprojected speeds below 900km\/s) reveal high uncertainties depending on the perspective \\citep{Shen13,Balmaceda18}. 
Models taking into account projection effects showed to significantly decrease the uncertainties in forecasting the arrival times of CMEs \\citep[e.g.,][]{colaninno13,Mishra13,shi15,makela16,rollett16}. A well known empirical relation exists between the radial and the lateral expansion speed, $V_{\\rm rad} = 0.88 V_{\\rm exp}$, as described by \\cite{dalLago03} and \\cite{schwenn05}. Follow-on studies showed that this relation can also be described by the CME half-width, $w$ (assuming a cone model), given by $f(w)=1\/2(1+\\cot{w})$ and that the kinematics of extremely fast CMEs is better estimated by $V_{\\rm rad} \\approx V_{\\rm exp}$ \\citep{michalek09}. Moreover, statistical studies revealed that the relationship between the radial and lateral expansion speed is a linear function, hinting towards the self-similar expansion behavior of CMEs already close to the Sun \\citep{Vourlidas17,balmaceda20}. However, in the low corona, for some events a strong overexpansion is observed \\citep[e.g.,][]{Patsourakos10}. The assumption of a rather self similar expansion is found to be valid for most of CME events when propagating in interplanetary space \\citep[e.g.,][]{Bothmer98,Leitner07,demoulin08,Gulisano12,vrsnak19}. \n\n\n\\begin{figure}\n \\includegraphics[width=1.\\textwidth]{cme8.pdf}\n\\caption{March 7--11, 2012: GCS fitting (green mesh) of two CMEs (CME1: top panels. CME2: bottom panels - note that CME1 is visible as extended bright structure in these images) using white-light data from the three spacecraft, STEREO-A, SOHO and STEREO-B. The first, second, and third columns contain coronagraph images from COR2 aboard STEREO-B, C3 aboard SOHO, and COR2 aboard STEREO-A, respectively. Taken from \\cite{patsourakos16}.}\n\\label{fig:patsourakos16} \n\\end{figure}\n\n\n\nSince we cannot gather the full complexity of the CME structure, idealized geometries assuming self-similar expansion, act as basis of many CME models and 3D reconstruction techniques that were developed over the past years. Basic models make use of a simple cone-type geometry \\citep[e.g.,][]{St.Cyr00,schwenn05,Michalek06,Xie04}. With the availability of image data from multiple views, those tools were refined and full 3D reconstructions were enabled from which estimates of the deprojected kinematics, geometry, and propagation direction are derived. Methods comprise, e.g., inter-image tie points and triangulation in various wavelength ranges \\citep[see e.g.,][]{Harrison08,Howard08,maloney09,Reiner09,Temmer09,Liewer10}, forward models related to white-light data \\citep[e.g.,][]{wood09}, or center of mass calculations \\citep{Colaninno09}. Also online tools were made available, such as e.g., the CCMC tools StereoCat\\footnote{StereoCat \\url{https:\/\/ccmc.gsfc.nasa.gov\/analysis\/stereo\/}}. A well known and widely used technique is the graduated cylindrical shell (GCS) forward model developed by \\cite{Thernisien06,Thernisien09}. Coronagraph image data showing the CME from at least two different vantage points are required, on that an idealized flux rope structure in the form of a croissant is fitted. The GCS model depends on a number of free parameters, such as the flux-rope height and angular width as well as the aspect ratio which determines the rate of self-similar expansion. Figure~\\ref{fig:patsourakos16} gives the 3D reconstruction of two CME events using GCS applied on STEREO and LASCO data in a study by \\cite{patsourakos16}. 
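As a simple illustration of the empirical expansion-speed corrections quoted above, the following minimal Python sketch converts a measured lateral expansion speed into a radial speed, either with the fixed factor of 0.88 or with the cone-model half-width dependence $f(w)=1\/2(1+\\cot{w})$; the example numbers are assumed.\n\\begin{verbatim}\nimport numpy as np\n\ndef v_radial(v_exp_kms, half_width_deg=None):\n    # V_rad = 0.88 V_exp, or V_rad = f(w) V_exp with f(w) = 0.5*(1 + cot w)\n    if half_width_deg is None:\n        return 0.88 * v_exp_kms\n    w = np.radians(half_width_deg)\n    return 0.5 * (1.0 + 1.0 / np.tan(w)) * v_exp_kms\n\nprint(v_radial(800.0))                       # fixed-factor estimate\nprint(v_radial(800.0, half_width_deg=40.0))  # assumed 40 deg half-width\n\\end{verbatim}\n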
Especially for multiple events, determining the cause of geoeffectiveness is rather challenging, as the processes happening on the Sun are complex. In that respect, geometrical fitting methods help to derive the propagation direction of a particular solar event in order to reliably link it to a geoeffective event at Earth. In a similar way, stereoscopy can also be applied to radio data. Figure~\\ref{fig:gono_radio} shows results from so-called goniopolarimetric observations using WIND and STEREO spacecraft data studying the location of radio type II bursts. That method is used to derive the direction of arrival of an incoming electromagnetic radio wave, its flux, and its polarization.\n\n\n\n\\begin{figure}\n \\includegraphics[width=1.\\textwidth]{cme9.pdf}\n\\caption{The CME flux rope obtained from GCS 3D reconstruction (black grid croissant) and the 3D reconstruction of the radio type II burst (dark and light blue spheres) using the goniopolarimetric technique. The yellow sphere represents the Sun (left panel). The right panel gives the view of the flux rope and radio sources as seen from Earth, together with a SOHO\/LASCO C3 image showing the CME for comparison. Adapted from \\cite{magdalenic14}.}\n\\label{fig:gono_radio} \n\\end{figure}\n\n\nBesides geometry-related information, STEREO coronagraph data can also be used to derive the deprojected CME mass close to the Sun \\citep{Colaninno09,Bein2013TheObservations} which, together with the early acceleration phase, is used to better estimate the energy budget between flares and CMEs (see also Section~\\ref{sec:budget}). STEREO and its wide-angle HI instruments also enable the derivation of the 3D geometry of compressed density structures like CIRs \\citep[see][]{Rouillard08,wood10}. Sometimes the disentanglement between CMEs and CIRs in HI is tricky and needs careful inspection of the data when tracking specific features \\citep[see e.g.,][]{Davis10}. \n\nUsing multi-viewpoint data and applying different reconstruction techniques, we have gained important insight into CME characteristics. Moreover, the results stemming from STEREO observations clearly challenged existing CME and SEP models. However, using idealized geometries, the real 3D structure of a CME or the paths of SEPs can only be approximated, and we need to keep in mind that there are strong deviations from these idealizations. Especially in interplanetary space, the geometry of the CME front clearly changes, as flanks and nose interact differently with the non-uniform solar wind \\citep[pancaking effect; see e.g.,][]{riley04,Nieves-Chinchilla12}. In addition, the CME shape might vary due to intensity changes as the relative position to the Thomson sphere changes, which makes the tracking of specific white-light structures complicated \\citep[e.g.,][]{vourlidas06}. Therefore, the derived deprojected values and forecasts based on them still need to be treated with caution \\citep[see also][]{Mierla10,riley18}. For more details on the complex interactions of CMEs with their surroundings I refer to the review by \\cite{manchester17}.\n\n\n\n\nFor more information on solar stereoscopy and tomography techniques, applied to various large-scale structures in the solar corona, I refer to the Living Reviews by \\cite{aschwanden11}. 
For comprehensive investigations, the EU funded project HELCATS\\footnote{Heliospheric Cataloguing, Analysis and Techniques Service; \\url{https:\/\/www.helcats-fp7.eu}} established databases ready to use for analyzing STEREO 3D CME characteristics and HI CME tracks on a statistical basis \\citep[see e.g.,][]{Murray18,Barnes19}. Out of that an extensive ICME catalogue\\footnote{\\url{https:\/\/helioforecast.space\/icmecat}} was compiled by \\cite{moestl17} and \\cite{palmerio18}. A conjunction catalogue covering CME in-situ measurements by two or more radially aligned spacecraft (MESSENGER, Venus Express, STEREO, Wind\/ACE) is given by \\cite{salman20}.\n\n\n\\section{Interplanetary counterparts of CMEs: ICMEs}\\label{sec:sub3-5}\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{icme1.pdf}\n\\caption{From running difference STEREO-A HI1+2 image data the central rows are extracted (right) at each time step and rotated by 90 degrees (middle). From this a time-elongation plot (so-called Jmap) is constructed (left). The CME front is marked by a yellow arrow in the direct image as well as in the Jmap. Adapted from \\cite{davies09}.}\n\\label{fig:jmap} \n\\end{figure*}\n\n\nNewly developed imaging capabilities clearly enhanced our understanding about the relation between solar eruptions, CMEs, and their counterparts in interplanetary space (ICMEs). SMEI, the Solar Mass Ejection Imager \\citep[SMEI:][]{Eyles03} on the Earth-orbiting Coriolis spacecraft, was the first heliospheric white-light imaging instrument covering the Sun-Earth space \\citep[for more details see the review by][]{howard13}. The successor of SMEI are the heliospheric imagers \\citep[HI:][]{Eyles09} aboard STEREO \\citep{kaiser08_STEREO}. The WISPR instrument \\citep{vourlidas19} aboard the Parker Solar Probe mission and the SoloHI instrument \\citep{howard20} aboard Solar Orbiter build upon the STEREO\/HI heritage and make similar observations of the inner heliosphere. The observational principle is like a coronagraph, but as these are wide-angle instruments, they observe much larger distances from the Sun enabling the tracking of CMEs throughout interplanetary space. The unprecedented image data facilitated the tracking of CMEs through interplanetary space and with that could unambiguously relate the CME white-light structure to in-situ measurements \\citep[see e.g.,][]{moestl09_APJL,moestl14} and moreover, to get better insight on how CMEs interact with the ambient solar wind structures. Figure~\\ref{fig:jmap} shows a so-called Jmap which is constructed from running difference white-light HI data covering the Sun-Earth distance range. By extracting the central part of the HI images in the horizontal direction, the ICME front can be rather easily followed as function of the elongation angle. Before further analysis, the elongation-time measurements need to be converted into radial distance. For that, methods assume either a certain CME geometry and apply the propagation direction of the CME \\cite[see e.g.,][]{lugaz10} or use fitting functions \\cite{Rouillard08}. These procedures cover rather high uncertainties in the derived kinematical profiles, that needs to be taken into account when interpreting CME propagation profiles for interplanetary space \\citep[e.g.,][]{rollett12,liu13}.\n\nIt is well known that CMEs during their propagation phase tend to get adjusted to the ambient solar wind flow owing to the drag force exerted by the ambient solar wind \\citep{Gopalswamy01,wang04}. 
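A widely used description of this adjustment is a drag acceleration of the quadratic form $a=-\\gamma (v-w)|v-w|$, as employed in drag-based models. The following minimal Python sketch integrates such a model numerically from an assumed starting distance to 1~AU; the drag parameter $\\gamma$ and the ambient wind speed $w$ are assumed illustrative values, not results from the cited studies.\n\\begin{verbatim}\nAU_KM = 1.496e8\nRSUN_KM = 6.96e5\n\ndef dbm_transit(r0_km, v0_kms, w_kms=400.0, gamma_per_km=0.2e-7, dt=60.0):\n    # integrate a = -gamma*(v - w)*|v - w| from r0 to 1 AU;\n    # returns transit time [h] and arrival speed [km/s]\n    r, v, t = r0_km, v0_kms, 0.0\n    while r < AU_KM:\n        a = -gamma_per_km * (v - w_kms) * abs(v - w_kms)\n        v += a * dt\n        r += v * dt\n        t += dt\n    return t / 3600.0, v\n\nprint(dbm_transit(20 * RSUN_KM, 1000.0))  # fast CME in slow wind: decelerates\nprint(dbm_transit(20 * RSUN_KM, 300.0))   # slow CME: gets accelerated\n\\end{verbatim}\n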
As consequence, CMEs which are faster than the ambient flow speed get decelerated while those which are slower get accelerated. This alters their speed, hence, travel time and with that has impact on Space Weather forecasting. The adjustment to the ambient flow speed happens most probably in interplanetary space \\citep[e.g.,][]{Sachdeva15}. At which distance exactly depends on the competing forces acting on a CME, Lorentz versus drag force \\citep[e.g.,][]{vrsnak08}. The longer the CME is driven, hence, the longer the magnetic reconnection process is ongoing (which might be inferred from flare emission and growing post eruptive arcades), the farther away from the Sun the adjustment may occur. Empirical relations found between CME kinematics and flare properties (flare ribbons, coronal dimmings, or post-eruptive arcade regions) actually may be used to estimate the reconnected flux that empowers the CME \\citep[e.g.,][]{gopalswamy17,Temmer17_flar-cme,dissauer18,Tschernitz18}\\footnote{A database of more than 3000 solar flare ribbon events observed by SDO and reconnection flux is given in \\cite{kazachenko17}.}. The amount of drag from the solar wind depends on the relative speed and density between the solar wind and the CME as well as the CME width\/size. It is found that wide CMEs of low mass tend to adjust rather quickly to the solar wind speed and, hence, their transit time (i.e., how long a CME needs to traverse a certain distance) is determined primarily by the flow speed in interplanetary space. Narrow and massive CMEs propagating in a fast solar wind have the shortest transit times \\citep[see e.g.,][]{vrsnak10}. \n\n\n\n\n\nFigure~\\ref{fig:ICME} shows typical in-situ signatures of a well-defined ICME at 1~AU, revealing a simultaneous jump of all measured components (shock) with a subsequent sheath structure (compressed plasma) of increased density, speed and turbulence, that is followed by signatures of a smooth and enhanced magnetic field together with a rotation as observed in the vector components (changing from plus to minus or vice versa). The ICME magnetic structure is usually identified by that smooth field rotation (flux rope), a plasma-beta lower than 1 (referring to a dominant magnetic component), a low temperature, and a linearly decreasing proton speed \\citep[see also the Living Reviews by][]{kilpua17}. Sometimes that flux rope can be associated with twisted structures observed already in white-light image data. \nHaving a long-lasting southward directed magnetic field (measured in the $B_{\\rm z}$ component), flux ropes are the main contribution of strong geoeffectiveness. The passage of rather isolated magnetic ejecta at 1~AU typically takes about 1 day \\citep[cf.][]{RC10}\\footnote{A near-Earth ICME catalogue is given under: \\url{http:\/\/www.srl.caltech.edu\/ACE\/ASC\/DATA\/level3\/icmetable2.htm}}. Hence, geomagnetic disturbances may last for many hours. Flank hits, interacting CMEs and complex ejecta, can have passage durations of about 3 days at 1~AU \\citep[see][]{Burlaga02,Xie06,Marubashi2007Long-durationModels,Mostl2010STEREO2010}, affecting the Earth's atmospheric layers over a much longer time range, and, hence, causing stronger geomagnetic effects (see also Section~\\ref{sec:preconditioning}).\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{icme2.pdf}\n\\caption{STEREO-A in-situ measurements and identification of a CME together with its closed magnetic structure. 
From top to bottom: pitch-angle distribution data of suprathermal electrons, total magnetic field intensity, magnetic field vectors (in RTN coordinates), solar wind proton bulk speed, proton number density, proton temperature \\citep[in red the expected proton temperature is given calculated from an empirical relation to the solar wind speed as given by][]{Richardson95}, plasma-beta, total pressure, distribution of the iron charge state. Vertical dashed lines mark the shock-sheath, and the boundaries of the magnetic structure. Taken from \\cite{Jian18}.}\n\\label{fig:ICME} \n\\end{figure*}\n\n\nBy combining remote sensing and in-situ data using multi-spacecraft reconstruction methods, it is revealed that from in-situ measurements we observe localized variations of the magnetic field behavior that may not be representative of the global structure \\citep[see][]{mostle08,moestl09_APJL,rouillard10,farrugia11,DeForest2013TrackingEjection}. Studies using multi-spacecraft encounters separated in radial distance and longitude give insight on the magnetic coherence of ICMEs on various scales and with that raise questions on the inner structure of CMEs as well as their interaction processes with the interplanetary magnetic field \\citep[see e.g.,][]{good18,lugaz18}. Using flux rope reconstructions methods applied on in-situ measurements \\citep[see e.g.,][]{al-haddad13} a comparison between the physical parameters derived close at the Sun with those measured in-situ can be performed enabling to interpret changes in the mass, flux, etc. due to the interaction with the interplanetary solar wind \\citep[see e.g.,][]{bisi10,Temmer17_flar-cme,temmer21}. Especially the reconnection of the magnetic flux rope with the interplanetary magnetic field is found to lead to either a loss of magnetic flux (so-called erosion) or adding of magnetic flux \\citep[see e.g., ][]{Dasso07,manchester14_CMEerosion,Ruffenach15}. Removing or adding magnetic flux may lead to a change in the ICME propagation behavior. Filament material is found less often from in-situ measurements (identified by low charge state species) despite the fact that most CMEs are accompanied by filament eruptions. Heating mechanisms or simply missing the cold filament material due to the localized in-situ measurements might be a reason for that \\citep{filippov02}. This is supported by findings that magnetic ejecta are only partly filled with hot plasma related to heating by the flare \\citep[e.g.,][]{gopalswamy13_filament}.\n\n\n\n\n\nMore details on the relation between white light remote sensing image data and in-situ measurements, including proper nomenclature, is given by \\cite{rouillard11}. A review on multi-point ICME encounters before and during the early years of STEREO is given by \\cite{kilpua11}. The recent review by \\cite{luhmann20} comprises a thorough overview on the ICME propagation in the inner heliosphere.\n\n\n\n\\section{Solar Energetic Particles (SEPs)}\\label{sec:4}\n\\subsection{General characteristics}\nSEP events are observed in-situ as enhanced electron, proton and heavy ion flux (and as increased level of cosmic rays on ground) largely exceeding the thermal energy levels, ranging from keV to GeV. Strong fluxes of energetic protons (so-called proton events) cause strongest geoeffective phenomena. 
High-energy SEPs in the range of GeV reach the Earth within less than 10 minutes and may produce ground level enhancements (GLEs; measurable by neutron monitors at the Earth's surface), which are of special interest as they have major effects on crewed spaceflight and aircraft due to the increased radiation exposure \\citep[e.g.,][]{Malandraki18_book}. \n\nIn general, there are two populations of SEP events, gradual and impulsive ones \\citep[e.g.,][cf. Figure~\\ref{fig:reames99}]{Reames99}. It is the different temporal scaling which disentangles these two populations. Driving agents acting on longer time scales are related to CME shock acceleration mechanisms. However, gradual events seem to be accompanied as well by an impulsive part which is thought to be related to short-time magnetic reconnection processes, as observed in flares. Obviously, gradual events are caused by both driving agents, prolonging the acceleration process but on a less energetic level \\citep[see also][]{Anastasiadis19}. Impulsive events are also found to be related to those SEP events where the location of the particle accelerator is magnetically well connected to the observer. SEP\/GLE events are found to have the hardest spectra and the largest initial acceleration \\citep{Gopalswamy16_backsideSEP}. There are still many open issues about the processes leading to energetic particles as well as about their (suprathermal) seed populations in the corona and interplanetary space \\citep[e.g.,][]{Mason99,Desai06,Mewaldt12}. Clearly, the primary condition for the production of SEPs is the opening of magnetic field lines into interplanetary space (as for eruptive events) and that accelerated particles have access to those open field lines. It is confirmed that for confined flares no SEPs are observed \\citep[e.g.,][]{Trottet15}. As shocks play an important role in the acceleration of particles, coronal shock waves on the solar surface and in interplanetary space related to CMEs, as well as interacting CMEs, are investigated in relation to SEPs \\citep[see e.g.,][and related MHD modeling results, e.g., \\citeauthor{Pomoell08}, \\citeyear{Pomoell08}]{Park13,Lario14,Miteva14}. \n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{sep1.pdf}\n\\caption{Separation into gradual and impulsive SEP events and their suggested driving mechanisms. a) Gradual SEP events result from diffusive acceleration of particles by large-scale shocks produced by CMEs and populate interplanetary space over a wide range of longitudes. b) Impulsive SEP events result from acceleration during magnetic reconnection processes in solar flares and are well observed when magnetically connected to the flare site. Intensity-time profiles of electrons and protons in c) gradual and d) impulsive SEP events. Taken from \\cite{Desai16} after \\cite{Reames99}.\n}\n\\label{fig:reames99} \n\\end{figure*}\n\nSEP events that become Space Weather effective are controlled by many factors, such as the source region location of the eruption (longitude and latitude) and the width of the CME, the background solar wind, seed populations, multiple CMEs and their interaction, or the magnetic field configuration near the shock. For example, narrow CMEs (slow or fast) do not efficiently accelerate particles \\citep[see e.g.,][]{kahler19}, and CMEs originating from the eastern hemisphere are less likely to create a SEP event near the Earth because of the weak magnetic connection between Sun and Earth. 
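The notion of magnetic connection can be made concrete with the nominal Parker spiral: for a constant solar wind speed, the solar source longitude that is magnetically connected to an observer at 1~AU lies roughly $\\Omega_\\odot r / v_{\\rm sw}$ to the west of the observer's longitude. A minimal Python sketch (assuming a constant wind speed and a purely Archimedean spiral):\n\\begin{verbatim}\nimport numpy as np\n\nAU_M = 1.496e11\nOMEGA_SUN = 2.0 * np.pi / (27.27 * 86400.0)  # synodic rotation rate [rad/s]\n\ndef connected_longitude(v_sw_kms, r_m=AU_M):\n    # Parker-spiral footpoint longitude, in degrees west of the observer\n    return np.degrees(OMEGA_SUN * r_m / (v_sw_kms * 1e3))\n\nprint(connected_longitude(400.0))  # ~57 deg: sources near W60 are well connected\nprint(connected_longitude(700.0))  # faster wind: footpoint moves toward W30\n\\end{verbatim}\nThis simple estimate already indicates why western-hemisphere sources are, on average, better connected to Earth than eastern ones.\n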
The highest-energy particles are most likely accelerated close to the shock nose where the shock is strongest, while the lower-energy particles are accelerated in all regions \\citep[see e.g.,][]{bemporad11,gopalswamy18}. Hence, the ecliptic distance to the shock nose, i.e., the latitude of the event source region, is also an important parameter for SEP prediction \\citep[see e.g.,][]{gopalswamy13}. \n\nKnowledge of the magnetic connectivity is crucial in order to detect SEPs in-situ and to relate them to the proper driving agent \\citep{reames09}. Figure~\\ref{fig:reames09} depicts the propagating idealized circular-shaped CME shock front in relation to the radially oriented magnetic field lines near the Sun. The cartoon describes a scenario in which a narrow range of heights (2--4 solar radii) exists where the compression is sufficient for effective particle acceleration \\citep[e.g.,][]{Cliver04} and for having a good connection to the observer, with these heights increasing at the eastern and western flanks. Concluding, the connectivity changes with distance from the Sun. On the other hand, flare locations lying close to open structures like coronal holes have a different magnetic configuration and facilitate the acceleration of particles into the heliosphere \\citep{cane88,Reames96,shen06}. In that respect, the interplay between open and closed magnetic field is important to know and, due to the lack of observations, needs to be supported by reliable coronal modeling. \n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{sep2.pdf}\n\\caption{Cartoon showing a possible acceleration scenario for SEPs. The radial field lines (black lines) are hit by the CME shock front (blue) at different heights for the nose and the flank. The solar particle release (SPR) likely begins at about 2--4 solar radii (marked by the red region) for the apex, or higher up for the flanks. Taken from \\cite{reames09}.}\n\\label{fig:reames09} \n\\end{figure*}\n\nFor Space Weather forecasting purposes, it is desired to derive clear signatures showing that particles associated with eruptive events were able to escape to the high corona and interplanetary space. Therefore, the monitoring of possible radio emission (from decimetre and longer waves) is found to be of utmost importance \\citep[e.g.,][]{Klein10}. Observations of flares in the high-energy range provide additional information on the location, energy spectra, and composition of the flare-accelerated energetic particles at the Sun that can be compared to 1~AU SEP events \\citep{lin06}. RHESSI imaging capabilities could show that flare $\\gamma$-ray sources are not co-spatial with flare HXR sources \\citep{fletcher11}. \\cite{Laurenza09} developed a technique for short-term forecasting of SEPs based on flare coordinates and flare flux together with the time-integrated intensity of SXRs and type III radio emission ($\\sim$1 MHz). Similarly, the occurrence of SEP events can be forecast using the peak ratio of the flare fluxes measured in the (0.05--0.4~nm)\/(0.1--0.8~nm) channels, as described by \\cite{kahler18}. For improved SEP forecasting, it is suggested to take into account parameters from both driving agents, flares and CMEs \\citep[see][]{Klein17}. Figure~\\ref{fig:stcyr17} shows for an eruptive event the timing between flare SXR emission (GOES flux), radio type III burst (WIND), electron and proton spectra measured at 1~AU (SOHO), and combined white-light image data from the ground-based K-Cor coronagraph and LASCO\/C2 instrument. 
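Returning to the flux-ratio criterion of \\cite{kahler18} mentioned above, the following minimal Python sketch computes the peak ratio of the GOES short (0.05--0.4~nm) to long (0.1--0.8~nm) channel for a flare; the synthetic light curves and the decision threshold are placeholders, not the values derived in that study.\n\\begin{verbatim}\nimport numpy as np\n\ndef goes_peak_ratio(flux_short, flux_long):\n    # peak ratio of the 0.05-0.4 nm to the 0.1-0.8 nm channel\n    return np.max(flux_short) / np.max(flux_long)\n\n# placeholder light curves [W/m^2]\nt = np.linspace(0.0, 1.0, 200)\nlong_ch  = 1e-5 * np.exp(-((t - 0.50) / 0.10) ** 2)\nshort_ch = 2e-6 * np.exp(-((t - 0.48) / 0.08) ** 2)\n\nratio = goes_peak_ratio(short_ch, long_ch)\nprint(ratio, ratio > 0.1)   # 0.1 is an arbitrary illustrative threshold\n\\end{verbatim}\n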
Especially, observations of the early evolution of CMEs and derivation of shock formation heights (see also Sect.~\\ref{shock-form}) as well as distribution of Mach numbers along the shock surface \\citep{rouillard16} might give some lead time for SEP forecasting. Monitoring the generation of flare associated coronal surface waves \\citep[EIT\/EUV waves; see][]{Thompson98}, gives additional hints on shocks ignited by the CME lateral expansion (for confined events no coronal waves are observed).\n\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{sep3.pdf}\n\\caption{In the top row, the January 1, 2016, eruptive event appears in the sequence of images from SDO AIA (gold), MLSO K-Cor (blue), and SOHO\/LASCO (red). A fast CME associated with an SEP event detected near Earth, is seen appearing off the southwest limb of the Sun. The time profiles reveal that data from the ground-based K-Cor coronagraph could be used for a timely warning of particle events as described in that case study. Taken from \\cite{stcyr17}.\n}\n\\label{fig:stcyr17} \n\\end{figure*}\n\n \n\n\n\\subsection{SEPs observed from multiple viewpoints}\n \n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{sep4.pdf}\n\\caption{February 25, 2014 SEP event and detected proton intensities (red = STEREO-A\/HET, blue = STEREO-B\/HET, green = SOHO\/ERNE). Inset: relative locations of the STEREO spacecraft and the Earth during the event. The arrow pointing out from the Sun shows the location of the SEP producing active region (at longitude E82), and the asterisks mark the nominal Parker spiral magnetic field lines connecting each observer to the Sun. Taken from \\cite{Paassilta18}.}\n\\label{fig:sep-stereo} \n\\end{figure*}\n\n\nUsing STEREO data, the observation of wide-spread SEP events shed new light on the possible generation mechanisms together with the lateral expansion of CMEs and their interaction with other coronal structures \\citep[see][]{rouillard12}. Figure~\\ref{fig:sep-stereo} gives for the February 25, 2014 SEP event intensity profiles as measured by different spacecraft that are separated from Earth by 152 degrees (STEREO-A) and 160 degrees (STEREO-B). The SEP producing eruptive event is located at E82 (marked by the green arrow). As STEREO-B is closest to the SEP source, that flux profile reveals the highest intensity. The related CME was observed as halo event from Earth, having a projected speed of more than 2000~km\/s and before that, other CMEs were launched from that region. SEPs may be directed to wide-spread angles by field line draping around the closed magnetic field of the CME and\/or complex magnetic fields due to CME-CME conglomerates, or CME-CIR interaction \\citep[e.g.,][]{Dresing16,Dresing18,Gomez-Herrero11,Gomez-Herrero17,Xie17,Guo2018ModelingMeasurement}. For more details on wide-spread SEP events, including a comprehensive catalogue see \\cite{Paassilta18}. \n\n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{sep5.pdf}\n\\caption{Coronagraph images from SOHO\/LASCO (left panel: direct image; other panels to the right: difference images) showing the evolution of two CMEs from August 9, 2011 and from May 17, 2012 (GLE event). Inlay images are from SDO\/AIA 193\\AA~showing the solar sources. Red arrows point to the CME nose. The May 17, 2012 event remains bright over the LASCO field of view and its shock structure is located close to the CME ((g), (h)). 
The August 9, 2011 event reveals a smaller CME main body and a wider shock structure ((c), (d)), which is a sign of a weak shock. Taken from \\cite{gopalswamy13}.}\n\\label{fig:gopal_GLE} \n\\end{figure*}\n\n\nDuring solar cycle 24 strong SEP events produced only two GLEs, that could be related to CMEs launched from the Sun on May 17, 2012 and September 10, 2017. \\cite{gopalswamy13} did a comparative study between the GLE May 17, 2012 and a non-GLE CME event from August 9, 2011 using STEREO image data as given in Figure~\\ref{fig:gopal_GLE}. The study revealed for the GLE event a shock formation height very low in the corona (1.38 solar radii from solar center) and that the shock remained closer to the driver structure over the coronagraphic field of view. This is indicative of a stronger shock that is driven over a longer time and, hence, can produce very energetic particles. \\cite{rouillard16} modelled for the May 17, 2012 event the background topology of the magnetic field using multiple viewpoints to derive the geometry of the shock front and to find where particles get accelerated most efficiently \\citep[see also][]{Plotnikov17,Kouloumvakos19}. Particles that get magnetically trapped in between CME structures pose a particle reservoir that may play a key role in the late acceleration of gradual SEPs related to CME-CME interaction events \\citep[e.g.,][]{lugaz17}. Despite these tremendous enhancements in our knowledge gained from stereoscopic observational data, still a major drawback in unraveling the SEP nature is the unknown configuration of the interplanetary magnetic field along which SEPs propagate and internal distribution \\citep{Kahler13}. Therefore, the acceleration process of SEPs seems not to be spatially limited but happens over a wide range of longitudes including transport before being injected at distant longitudes \\citep[e.g.,][]{Vlahos19_SEP,Kozarev17,Malandraki18_book}. \n\n\n\nFor recent reviews on SEP events covering in detail the production and acceleration processes I refer to the Living Reviews by \\cite{Desai16} or the book by \\cite{Reames17_book}. The review by \\cite{lugaz17} is focusing on SEPs with respect to CME-CME interaction events and the Living Reviews by \\cite{kilpua17} on particle acceleration due to ICME shocks.\n\n\n\n\\section{Energy budget between flares, CMEs, and SEPs}\\label{sec:budget}\n\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{budget1.pdf}\n\\caption{Schematic diagram showing the different energy dissipation processes of the observed activity phenomena. The chart covers the energy input (light yellow shaded boxes) and energy dissipation via primary (colored boxes) and secondary processes (white boxes). Those are the major processes identified for studying their energy closure relationship. Taken from \\cite{Aschwanden17}.}\n\\label{fig:energy_budget} \n\\end{figure*}\n\n\nFigure~\\ref{fig:energy_budget} schematically shows the relevant components of energy build-up and dissipation processes. How much of the free energy is actually released and to which parts that released energy partitions into primary and secondary processes can only be answered by statistics. In cutting-edge studies performed by \\cite{emslie04,Emslie12}, a sample of eruptive events was investigated with respect to the energy release and its distribution into different components, concluding that about a third of the total available energy might be released in an eruptive event. 
Similar conclusions are drawn by \\cite{gopalswamy17_extreme} from calculating for extreme eruptive events the ratio of the total reconnected flux to the available active region flux. However, discrepancies were found from simulation studies \\citep[e.g.,][]{reeves10}. Recent extensive statistics by \\cite{Aschwanden17}, yield that $\\sim$87\\% of the magnetic energy is released. About 10\\% of the released energy drives the CME and $\\sim$80\\% goes into particle acceleration. In total about 10\\% percent of the free magnetic energy goes particularly into SEP acceleration. With respect to the CME, SEPs dissipate about 3\\% of the CME kinetic energy \\citep[similar as derived by][]{Emslie12}. The CME velocity shows strongest correlations with SEP characteristics and all that is consistent with CME-driven shock acceleration \\citep[see][]{mewaldt06,Papaioannou16}. \n\nThe discrepancies in the results show that there might be processes that cannot be disentangled from each other, cover energy conversion (e.g., non-thermal into thermal due to cooling), or are simply not well observed. Nevertheless, the conclusion is that the free magnetic energy of an active region is sufficient to generate flare-CME-SEP events and with that confirms their common magnetic origin. For the interested reader I refer to the book by \\cite{aschwanden19}.\n\n\n\n\n\n\n\n\n\\section{Structuring of interplanetary space: the solar wind}\\label{sec:SWstructure}\n\nThe solar wind is the major hub in interplanetary space dictating how fast disturbances may evolve and guiding the motion of accelerated particles. Knowledge about the prevailing structure of the solar wind in terms of plasma and magnetic field distribution is therefore of utmost importance in order to obtain reliable Space Weather forecasts. In turn, this also leads to a better understanding and interpretation of the propagation behavior of CMEs as well as the occurrence and energetics of SEPs. Interplanetary space is strongly shaped by the interplay between slow and fast solar wind flows, causing stream interaction and compression regions, transient disturbances such as shocks and closed magnetic structures (flux ropes) of evolving CMEs. These structures pose magnetic barriers that are able to change the propagation characteristics of a specific CME and affect SEP fluxes. In the following I will focus on solar wind structures relevant for CMEs and SEPs. For more details on the heliospheric magnetic field I refer to the Living Review by \\cite{owens13} and for solar wind stream interaction regions throughout the heliosphere to the Living Review by \\cite{Richardson18}.\n\n\n\n\\subsection{General characteristics}\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{wind1.pdf}\n\\caption{Solar wind variation over the solar cycle. Outward interplanetary magnetic field in blue, inward interplanetary magnetic field in red. The bottom panel shows the timeline and line graphs of the relative smoothed sunspot number. Note also the inverted interplanetary magnetic field lines between the first and third orbit due to the reversal of the magnetic field. Courtesy: ESA}\n\\label{fig:ulysses} \n\\end{figure*}\n\n\n\nThe solar wind is a continuous flow of charged particles propagating radially outward from the hot solar corona into interplanetary space. 
The characteristics of the solar wind are measured in-situ at specific locations such as the Lagrangian point L1 close to Earth (ACE, WIND, DSCOVR), from satellites orbiting planets (e.g., BepiColombo, VEX, MESSENGER, MAVEN), STEREO having varying longitudinal separation close to Earth's orbit, or PSP and Solar Orbiter with special mission trajectories in the inner heliosphere (cf. Figure~\\ref{fig:insitu}). Figure~\\ref{fig:ulysses} shows the solar wind speed measurements from spacecraft Ulysses which had the goal to examine the poles of the Sun \\citep{Wenzel92_ULYSSES}. In total, Ulysses performed three polar orbits, with each one taking six years to complete, over different phases of the solar cycle 22 and 23. The first one covers the solar minimum phase revealing slow solar wind streams over the equator and a fast wind over the poles where CHs are situated. The second orbit happened over solar maximum activity and shows the intermix of fast and slow winds at all latitudes. Three quarters of the third orbit were completed during the minimum of the next solar cycle. \n\n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{wind2.pdf}\n\\caption{Radial profiles of solar-wind parameters for the Sun-Earth distance. Median values are obtained from Helios and OMNI measurements and are extrapolated to the PSP orbit region as close as 10 solar radii. The lower edges of the shaded areas correspond to solar minimum and the upper edges to solar maximum. As comparison, overplotted are model results from \\cite{Banaszkiewicz98}, from \\cite{Sheeley97} and \\cite{WangY00} for the slow solar wind speed, from \\cite{Leblanc98} for the density and the range of temperature measurements given by \\cite{Billings59} and \\cite{Liebenberg75}. Taken from \\cite{Venzmer18}.}\n\\label{fig:sw_venzmer} \n\\end{figure*}\n\nBesides the observed changes over latitude, the solar wind characteristics differ over distance. Early missions like Helios 1 and 2 \\citep{Schwenn75,Rosenbauer77}, achieved a perihelion of 0.29~AU and gathered valuable information about the solar wind characteristics close to the Sun. Figure~\\ref{fig:sw_venzmer} shows the radial dependence of the solar wind parameters over the Sun-Earth distance derived from HELIOS (1974--1981) and OMNI (1963--2016) observations at 1~AU. The study by \\cite{Venzmer18} extrapolates that information to regions as close as 10 solar radii, based on an empirical solar-wind model for the inner heliosphere in dependence on the solar cycle. Solar wind measurements from PSP at a distance of about 35 solar radii basically confirm the results from these earlier missions and their derived radial scalings, but also obtain that the magnetic field is very strongly fluctuating \\citep{Bale19,kasper19}\\footnote{For recent PSP result see also special issues of the Astrophysical Journal Supplement series and Astronomy \\& Astrophysics under \\url{https:\/\/iopscience.iop.org\/journal\/0067-0049\/page\/Early_Results_from_Parker_Solar_Probe} and \\url{https:\/\/www.aanda.org\/component\/toc\/?task=topic&id=1326}.}.\n \n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{wind3.pdf}\n\\caption{Left: schematic drawing of the different large scale structures in interplanetary space \\citep[taken from][]{Yermolaev09}. 
Digits designate (1) heliospheric current sheet, (2) slow streams from coronal streamers, (3) fast streams from coronal holes, (4) compressed plasma (CIR on the front of fast and slow streams, and sheath region before the leading edge of a \"piston\"), (5) \"pistons\" (such as a magnetic cloud or ejecta), (6) rarefaction region. Right: Large-scale properties of the inner heliosphere (out to 1 AU) for Carrington Rotation 2068 from a global MHD solution. The two meridional slices in each panel show the radial velocity and radial magnetic-field strength, scaled to 1 AU. The slice in the equatorial plane shows the scaled number density. The sphere at 30 solar radii shows the scaled radial magnetic-field strength. Taken from \\cite{Riley10}.}\n\\label{fig:yermo_riley} \n\\end{figure*}\n\n\nThe left panel of Figure~\\ref{fig:yermo_riley} schematically depicts various large-scale structures in the interplanetary space. We differentiate between the heliospheric current sheet, separating opposite magnetic field, solar wind streams of different speeds, and closed magnetic fields of CMEs that, due to their rapid expansion, act as pistons creating shocks. The interaction between fast and slow streams (plasma volume with frozen-in magnetic field) leads to compression, forming so-called stream interaction regions (SIRs) to the West, and rarefaction regions to the East. On the large scale it is assumed that the interplanetary dynamics can be described by ideal MHD equations. The right panel of Figure~\\ref{fig:yermo_riley} gives the results from a simulation using a 3D MHD model \\citep[see more details on coronal and solar wind MHD modeling in the Living Reviews by][]{gombosi18}. The model results obtain a warping of the heliospheric current sheet and show the latitudinal dependence of the fast solar wind stream in the radial direction \\citep{wilcox80}. All that reveals the complexity and interplay between open and closed magnetic field structures, occurring with different dynamics. For more details on the multi-scale nature of the solar wind I refer to the Living Reviews by \\cite{verscharen19}.\n\n\n\nThe observed large-scale structures of the solar wind are intimately connected to the coronal magnetic field originating from the solar photosphere. The quasi-steady fast wind ($>$ 450 km\/s) emanates from coronal holes, locations of predominantly open magnetic field, while the variable slow component is believed to originate mostly from closed magnetic field configurations around the streamer belt \\citep{McComas00_firstOrbit}. Coronal holes are observed as low density and low temperature structures, and therefore appear as dark areas in the wavelength ranges of EUV and SXR, imaging coronal temperatures of a few million Kelvin. Figure~\\ref{fig:CH-CIR} depicts the interplay between slow and fast solar wind streams. After a coronal hole passed the central part of the solar disk, in-situ measurements reveal about 1--2 days later an increase in the density and magnetic field, and about 3--4 days later in the plasma speed \\citep[][]{vrsnak07_sw}. Since coronal holes, and with that the fast component of the solar wind, are long-lived structures, SIRs can often be observed for several solar rotations, and are correspondingly called co-rotating interaction regions (CIRs) when observed more than once. 
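The quoted delay of the speed maximum is, to first order, consistent with ballistic propagation of the stream over 1~AU; a minimal Python sketch with assumed constant flow speeds:\n\\begin{verbatim}\nAU_KM = 1.496e8\n\ndef ballistic_delay_days(v_kms):\n    # Sun-to-Earth travel time for a constant radial flow speed\n    return AU_KM / v_kms / 86400.0\n\nprint(ballistic_delay_days(450.0))  # ~3.8 days\nprint(ballistic_delay_days(600.0))  # ~2.9 days\n\\end{verbatim}\n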
The leading edge of a CIR represents a forward pressure wave and the trailing edge of a CIR a reverse pressure wave \\citep[cf.\\, right panel of Figure~\\ref{fig:CH-CIR}; for more details see the review by][]{cranmer17}. These waves may develop into shocks, and as such, large periodically recurrent coronal holes may cause geomagnetic storms roughly appearing with the frequency of the solar rotation, i.e., every 27 days \\citep[e.g.,][]{Rotter12}. During times of low solar activity, induced storms by recurrent CIRs may put equally much energy into the Earth's magnetosphere-ionosphere system as CMEs \\citep[e.g.,][]{Richardson01,tsurutani06}. On average, the strongest geomagnetic storms due to CIRs occur during the early declining phase of a solar cycle \\citep[e.g.,][]{Verbanac13,Grandin19}. Compared to CMEs, CIRs may drive prolonged geomagnetic activity and cause strong high energy particle enhancements in the Earth's radiation belts \\citep[e.g.,][]{Reeves03,Miyoshi13,Kilpua15}. As the solar wind parameters vary with the level of solar activity, so does their geoeffectiveness \\citep[see e.g.,][]{Jian11,Richardson12,Watari18}. There is substantial effort in the solar and heliospheric physics community to improve the understanding and modeling of the spatial and temporal distribution of solar wind plasma and magnetic field properties \\citep[see][]{Cranmer19}. \n\n\\begin{figure*}\n \\centering\n\\includegraphics[width=1.\\textwidth]{wind4.pdf}\n\\caption{Left: SDO\/AIA composite image of the wavelength channels 211-193-171\\AA~from June 30, 2012 showing the reduced density region of a coronal hole (shaded area). At the time t0, the coronal hole reaches a central position. From in-situ data at 1 AU about 1 day later the maximum in the density\/magnetic field is measured and about 4 days later the maximum in the speed\/temperature. Right: Fundamental processes involved in the 3D dynamics of stream evolution, adapted from \\cite{pizzo78}.}\n\\label{fig:CH-CIR} \n\\end{figure*}\n\nSince the source regions of slow and fast streams on the Sun, namely closed and open magnetic field, are different, their intermix affects detailed analyzes of the solar wind. Therefore, it is suggested that solar wind studies should be organized by the origin of the solar wind plasma \\citep{schwenn83,Zhao09,Borovsky19}. As minimum requirement, it is accepted to distinguish between the different solar wind structures by their in-situ measured plasma (density, speed, temperature) and magnetic field characteristics. A categorization scheme developed by \\cite{xu15} can be applied to separate the solar wind plasma into four types, namely, coronal-hole-origin (fast solar wind), streamer-belt-origin (slow solar wind), sector-reversal-region (plasma from top of helmet streamers), and ejecta (solar transients, such as CMEs). \n\n\n\\begin{figure*}\n \\centering\n\\includegraphics[width=0.7\\textwidth]{wind5.pdf}\n\\caption{STEREO-A in-situ solar wind measurements of a SIR and identification of the stream interface (SI; given by a magenta vertical line marking the peak of the total perpendicular pressure). As blue vertical line, the heliospheric current sheet (HCS) is marking the sector structure of different interplanetary magnetic field polarity. 
Top to bottom panels: pitch-angle distribution data of suprathermal electrons, solar wind proton bulk speed, proton number density, proton temperature, entropy, total magnetic field intensity, the ratios of the radial and transversal component of the magnetic field, total perpendicular pressure. Taken from \\cite[][]{jian09}. }\n\\label{fig:jian09} \n\\end{figure*}\n \n\nFigure~\\ref{fig:jian09} shows typical plasma and magnetic field characteristics for a well-defined SIR at 1~AU. The SIR measurements reveal an abrupt drop in density, a simultaneous rise in the proton temperature, and an East-West flow deflection \\citep[see e.g.,][]{Jian06,Jian19}. The stream interface (SI), given by a rather symmetric profile in the total perpendicular pressure peaking shortly after the density maximum, separates originally slow, dense plasma from originally fast, tenuous plasma back at the Sun \\citep[e.g.,][]{Wimmer-Schweingruber97}. The behaviour of the suprathermal electrons gives additional information about the topology, hence connectivity, of the interplanetary magnetic field lines between the Sun and the observer. This also allows one to investigate, e.g., the Parker spiral versus non-Parker spiral orientation of the magnetic field and the related open versus closed or disconnected nature of the magnetic flux \\citep[see][]{owens13}. SIR signatures differ clearly from those of ICMEs (cf.\\,Figure~\\ref{fig:ICME}) and as such, the specific characteristics can be applied to identify SIRs and CMEs from visual inspection of the in-situ solar wind plasma and magnetic field measurements. Together with information on transit times (which can be roughly derived from the average in-situ speed of the specific structure), we can link the in-situ signatures back onto the solar surface to study their relation. Table~\\ref{tab:1} gives average values and standard deviations for the different structures observed, such as slow and fast solar wind streams, CME sheath and magnetic ejecta. The time range used for these statistics covers the years 1976--2000 \\citep[see][]{Yermolaev09}. As can be seen, the values reveal large standard deviations reflecting the variations of the interplanetary conditions over the solar cycle. Just as the activity level changes over time with respect to the occurrence of sunspots, active regions, flares and CMEs (see also Section~\\ref{subsec:cme}), the occurrence of coronal holes also varies, with implications for the global structuring of interplanetary space \\cite[e.g.,][]{harvey02, Heinemann2019StatisticalCATCH}.\n\n\n\\begin{table}\n\\caption{Properties of the different solar wind types derived from OMNI data analyzed over the period 1976--2000. Average values and standard deviations are given for the flow speed ($v_{\\rm p}$), proton density ($n_{\\rm p}$), total magnetic field ($B$), and proton temperature ($T_{\\rm p}$). In addition, the geoeffectiveness of the different solar wind types is given as measured by the disturbance storm time (Dst). 
Taken from Table 3 in \\cite{Yermolaev09}.}\n\\label{tab:1} \n\\begin{tabular}{lllll}\n\\hline\\noalign{\\smallskip}\n& fast wind & slow wind & CMEs (shock-sheath) & CME (magnetic ejecta) \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$v_{\\rm p}$, km~s$^{-1}$ & $>$ 450--500 & $<$ 400--450 & $\\sim$450$\\pm$110 & $\\sim$410$\\pm$110 \\\\\n$n_{\\rm p}$, cm$^{-3}$ & 6.6$\\pm$5.1 & 10.8$\\pm$7.1 & 14.3$\\pm$10.6 & 10.1$\\pm$8.0 \\\\\n$B$, nT & 6.4$\\pm$3.5 & 5.9$\\pm$2.9 & 8.5$\\pm$4.5 & 12.0$\\pm$5.2 \\\\\n$T_{\\rm p}$ $\\times10^4$ K & 13.1$\\pm$11.8 & 4.4$\\pm$4.4 & 12.9$\\pm$17.6 & 4.5$\\pm$6.6 \\\\\nDst, nT & $-$28.7$\\pm$25.9 & $-$10.7$\\pm$18.2 & $-$21.5$\\pm$33.0 & $-$52.1$\\pm$45.8 \\\\\n\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Background solar wind}\nOpen magnetic flux from the solar surface structures interplanetary space and with that influences CME and SEP propagation behavior. We assume that the majority of open flux originates within coronal holes. This is supported by an empirical relation linking the size of coronal holes observed on the solar surface to the in-situ solar wind plasma and magnetic field measured at 1~AU a few days later \\citep[][]{vrsnak07_sw}. In detail, the width of a coronal hole, i.e., its longitudinal extension, is found to be strongly related to the in-situ measured peak speed \\citep{garton18}. Deriving the solar wind characteristics stemming from a specific coronal hole is found to be tricky, as each coronal hole evolves rather individually and the surrounding solar surface structures may play a major role in shaping the interplanetary solar wind \\citep{Heinemann20}. Still under discussion is the discrepancy between estimates of open solar magnetic flux from remote photospheric and in-situ spacecraft observations, which may differ by as much as a factor of two \\citep[e.g.,][]{Arden14,Linker17,wallace19}. This suggests a fundamental issue in our understanding of the topology of the coronal magnetic field and the energization of plasma, hence the acceleration of fast solar wind flows. Recent studies find that the open magnetic field within coronal holes is predominantly concentrated in unipolar magnetic flux tubes with high outflow velocities but covering only a fraction of about 10\\% of the coronal hole area \\citep[see][]{akiyama13,wiegelmann14,Hofmeister2017Characteristics24}. For more details on modeling the coronal magnetic field and open solar flux see e.g., the Living Reviews by \\cite{mackay12} and \\cite{lockwood13}. \n\n\nDue to the rather slow evolution of coronal holes, a legitimate assumption is that the solar wind parameters do not vary strongly over the duration of an entire solar rotation. Based on that, to forecast the occurrence of high speed solar wind streams at Earth, L1 spacecraft data may simply be forward-shifted by one Carrington rotation period (27.28 days). These so-called persistence models are found to work remarkably well \\citep[see e.g.,][]{Owens13_persistence,Reiss2016VerificationModels}; a minimal sketch of this procedure is given below. However, spacecraft used for forecasting may be located over different latitudes, and in-situ measurements gather the characteristics of rather localized structures within these large-scale three-dimensional objects. The spatial restrictions of such localized structures are demonstrated in Figure~\\ref{fig:gomez11}, showing how solar wind streams appear differently as measured by the STEREO-A and STEREO-B spacecraft that are separated in latitude by $\\sim$10 degrees. 
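To make the persistence approach concrete, the following minimal sketch (in Python) simply replays an L1 time series one Carrington rotation later; the synthetic input series and all numbers are illustrative assumptions only and do not correspond to any particular observation or operational tool.\n\\begin{verbatim}\n# Minimal persistence-forecast sketch: the solar wind speed observed at L1 is\n# assumed to recur one Carrington rotation (~27.28 days) later.\nimport numpy as np\n\nCARRINGTON_DAYS = 27.28\n\ndef persistence_forecast(t_obs_days, v_obs_kms, lead_days=CARRINGTON_DAYS):\n    # Forward-shift the observed series by lead_days; the speed values are\n    # reused unchanged as the forecast.\n    return np.asarray(t_obs_days) + lead_days, np.asarray(v_obs_kms)\n\n# Synthetic example: one Gaussian-shaped high-speed stream observed at L1\nt_obs = np.arange(0.0, CARRINGTON_DAYS, 1.0 / 24.0)   # hourly grid [days]\nv_obs = 400.0 + 300.0 * np.exp(-0.5 * ((t_obs - 10.0) / 1.5) ** 2)\nt_fc, v_fc = persistence_forecast(t_obs, v_obs)       # valid ~27 days later\n\\end{verbatim}\nWhen data from a vantage point east of the Sun--Earth line (e.g., near L5) are used instead, the shift is correspondingly reduced according to the longitudinal separation of the observing spacecraft.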
The data cover two Carrington rotations (2076 and 2077) revealing a solar wind stream well observed by STEREO-B, but clearly missed by STEREO-A. For comparison, the top panel of Figure~\\ref{fig:gomez11} gives the photospheric magnetic field configuration on the Sun using magnetic field extrapolations from GONG data marking open and closed field lines. With such inevitable differences in latitude between the measuring spacecraft, rather large uncertainties in the speed measured at 1~AU for a particular coronal hole are obtained \\citep[see][]{Hofmeister18,Owens19}. Recent studies suggest that solar wind forecasting based on persistence models may work best when using a combination of spacecraft located behind Earth (L5 position or STEREO data from time-varying spacecraft position) and empirical or numerical solar wind modeling \\citep[see e.g.,][]{Opitz10_Multispacecraft,Temmer18,Owens19,Bailey20}.\n\n \n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{wind6.pdf}\n\\caption{Top panel: synoptic map from GONG overlaid with magnetic field extrapolation results from the PFSS \\citep[potential field source surface; see e.g.,][]{Schrijver03} model showing the location of open field lines in the ecliptic plane. Bottom panel: solar wind bulk speed measured by the PLASTIC instruments aboard STEREO-A (red) and STEREO-B (blue). Colored bands on the top part of this panel represent the interplanetary magnetic field polarity. Black arrows mark the solar wind stream observed over two rotations (CR 2076 and 2077) by STEREO-B, but missed by STEREO-A due to the latitudinal separation of the spacecraft. Taken from \\cite{Gomez-Herrero11}.}\n\\label{fig:gomez11} \n\\end{figure*}\n\n\n\n\\subsection{Solar wind structures affecting CME and SEP evolution}\\label{sec:affect}\nCMEs are known to change shape and to accelerate or decelerate, depending on the background solar wind flow properties, and they may also change their propagation direction if encountering other magnetic structures (cf.\\,Section \\ref{subsec:cme}). From recent results it is found that many coronal fine structures and small-scale magnetic flux ropes are embedded in the solar wind \\citep[first PSP\/WISPR results; see][]{howard19}. Likewise, PSP in-situ measurements show that low-latitude coronal holes might be an additional source of slow solar wind, causing strong fluctuations in the solar wind flow close to the Sun \\citep[][]{Bale19}. Such local solar wind dynamics, comprising numerous voids and compact small- to large-scale structures (``woodgrain'' appearance), clearly has an impact on the evolution of a CME and thereby complicates forecasting. As an example, Figure~\\ref{fig:sw-structure} shows a disrupted CME front close to the Sun as a consequence of an interaction with a high speed solar wind stream emanating from the southern polar coronal hole. The increased speed of the ambient solar wind flow and the stretched magnetic field cause a weaker compression and, hence, deviations from the shape of a circular\/elliptic CME front. The structured solar wind in the outer corona is also revealed in specially noise-reduced, post-processed STEREO\/HI image data, applying an algorithm to dim the appearance of bright stars and dust \\citep[right panel of Figure~\\ref{fig:sw-structure}; cf.,][]{DeForest18}. See also recent results from PSP image data resolving small-scale flux ropes, density structures and fluctuations in solar wind streamers \\citep{howard19}. 
Besides adjusting to the ambient solar wind flow speed, CMEs also tend to rotate for adjusting to the ambient magnetic field \\citep[e.g.,][]{Yurchyshyn01,Yurchyshyn09,Vourlidas11,Isavnin14}. \n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{wind7.pdf}\n\\caption{a) Left panel shows a STEREO\/HI difference image with the distorted CME front (red arrow). Right panel gives SDO\/AIA 193\\AA~EUV image with the CME source region (red box) and the southern coronal hole. The fast solar wind stream out of the coronal hole (depicted with blue arrows) deforms the CME front. b) Highly structured solar wind flow close to the Sun from a STEREO\/HI image after computer processing (credit: NASA's Goddard Space Flight Center\/Craig DeForest, SwRI).}\n\\label{fig:sw-structure} \n\\end{figure*}\n\n\nSEPs propagate along the magnetic field lines, hence are directly guided by the interplanetary magnetic field structure. During CME events, SEP path lengths are found to be much longer compared to quiet solar wind conditions \\citep[e.g.,][]{masson12}. For SEPs, which are injected into the legs of a CME, the magnetic path lengths from the Sun to Earth may vary largely, up to a factor of two, depending on the specific width of the CME \\citep{reames09}. CME geometry as well as kinematics (propagation of shock front) and time-dependent changes play an important role with respect to particle acceleration processes. In that respect, SEPs can be taken as probes as they map the interplanetary magnetic field structure \\citep{Reames17_book}. \n\n\n\n\\subsection{Preconditioning of interplanetary space}\\label{sec:preconditioning}\nThe correct simulation of the prevailing background solar wind structures in interplanetary space is important for a reliable CME-SEP forecasting. Using MHD modeling to simulate the solar wind distribution works well for low solar activity. However, increased solar activity changes the magnetic field in the photosphere, which serves as main observational model input, more quickly and propagating CMEs strongly disturb the interplanetary background solar wind making models tend to fail \\citep[see e.g.,][]{Lee09,Gressl14}. This effect is commonly known as \\textit{preconditioning} of interplanetary space, which alters the initial conditions for CME and SEP evolution. It is found that single CME events may disturb the slowly evolving solar wind flow for 2--5 days \\citep{Temmer17_preconditioning,janvier19} and most strong preconditioning effects are obtained due to CME-CME or CME-CIR interacting events as they form complex magnetic structures \\citep[e.g.,][]{gopalswamy00_accelCME,Burlaga02,Harrison12,dumbovic19}. Multiple CME activity is supposed to cause a decrease of density in interplanetary space and a radial stretching of the interplanetary magnetic field. That leads to a low drag force acting on subsequently propagating CMEs \\citep[see e.g.,][]{Farrugia04,Maricic14}. The STEREO-A directed July 23, 2012 event was one of the fastest CMEs ever recorded and propagated over a 1AU distance in less than 21 hours \\citep{liu14_nature}. It could be shown that the drag parameter, due to a preceding CME from July 19, 2012, was lowered by one order of magnitude \\citep{temmer15}. If the super-fast CME from July 2012 would have been Earth directed, it would have caused an extreme Space Weather event with an estimated Dst of $-$600 to $-$1100~nT \\citep[e.g.][]{ngwira14,baker13}. 
Due to increased fluctuations and extended periods of negative $B_{\\rm z}$, the most intense geomagnetic storms occur for complex interacting CMEs \\citep[e.g.,][]{wang03,Farrugia06,Xie06,dumbovic15,scolini20}. \n\n\nFor SEPs, the preconditioning that comes from a preceding CME has consequences for the seed population. The presence of a previous CME is found to increase the probability for the subsequent fast CME to be SEP-rich \\citep{Gopalswamy02,Gopalswamy04,Kahler05}. The number of particles that can be accelerated is increased for multiple CMEs, and the increased turbulence in the interaction region is likely to accelerate the particles more efficiently to higher energies \\citep[so-called twin-CME scenario as proposed by][]{Li05}. Furthermore, the magnetic structure configuration in the sheath region for ICMEs is changed as the CME propagates in interplanetary space, by which the magnetic connectivity is altered \\citep[see also review by][on CME sheath regions]{kilpua17}. Interacting CMEs are also found to be more often related to widespread SEPs that can be observed all around the Sun using multiple viewpoints \\citep[e.g.,][]{Dresing16,Gomez-Herrero17}. For more details on CME-CME interaction and SEPs, I refer to the review by \\cite{lugaz17}.\n\n\n\n \nOver the solar cycle, the CME occurrence rate lies on average in the range of 0.3 per day during the solar minimum phase and about 4--5 per day during the solar maximum phase \\citep[e.g.,][]{St.Cyr00}. With average CME transit times from Sun to 1~AU of the order of 1--4 days, we can assume that during times of increased solar activity CME-CME interaction happens rather frequently. Reliably modeling these dynamic conditions in interplanetary space is therefore key for improving space weather forecasting capabilities. \n \n\n\n\\section{The chain of action on the example of the September 2017 events}\\label{sec:active}\n\n\nThe September 2017 activity phenomena are so far the best-studied strong Space Weather events in our modern space research era. Therefore, the chain of action can be described in great detail, especially with respect to the solar surface signatures and deduced parameters. The multiple disturbances can be well connected from Sun to Earth (and also up to Mars) and show how complex interactions lead to preconditioning effects and strong geoeffectiveness.\n\n\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{action1.pdf}\n\\caption{High-resolution SDO\/AIA multi-wavelength imagery from September 6, 2017 (left: 171\\AA, middle: composite from 211-193-171\\AA) and SDO\/HMI line-of-sight magnetic field (right panel). The prominent active region NOAA 12673 caused the strongest eruptions during solar cycle 24 including SEPs. Courtesy of AIA team, adapted from \\url{http:\/\/suntoday.lmsal.com}.} \n\\label{fig:sdo} \n\\end{figure}\n\nA strong emergence of magnetic flux in the southern hemisphere of the Sun rapidly led to the development of active region NOAA 12673 into a complex $\\alpha\\beta\\gamma$ magnetic field configuration. As a consequence, between September 4--10, 2017 this active region released a series of major flare events, which were actually the largest in more than a decade. These multiple events caused very strong geomagnetic disturbances with a minimum Dst of $-$142 nT on September 7, 2017. Additional minor storms were produced by high speed solar wind streams that arrived together with the transient events. 
Figure~\\ref{fig:sdo} gives EUV image data taken with SDO\/AIA on September 6, 2017 in different wavelength ranges showing the hot corona from about 0.6 to 2~MK. The wavelength ranges cover 171\\AA\\ (left panel), and with a triple-filter 211\\AA\\ (red), 193\\AA\\ (green), and 171\\AA\\ (blue) to highlight different temperatures (middle panel). The line-of-sight magnetogram for the same day taken with SDO\/HMI reveals the magnetic field in the photosphere (right panel). \n\n\\begin{figure}\n \\includegraphics[width=1.9\\textwidth,angle=90]{action2.pdf}\n\\caption{September 2017 flare-CME-SEP series taken and adapted from the CDAW catalogue (proton-height\/time-X-ray plots - PHTX - can be found under \\url{https:\/\/cdaw.gsfc.nasa.gov\/CME\\_list}). Top panel shows solar energetic particle events for protons in the GOES energy channels $>$10, $>$50 and $>$100~MeV. Middle panel gives the CME height-time profile as measured from LASCO (colors give the main propagation direction - see legend to the left in the middle panel). Bottom panel gives the GOES flare SXR emission disk integrated over the wavelength ranges 0.5--4.0 and 1.0--8.00~\\AA. }\n\\label{fig:cdaw} \n\\end{figure}\n\nFigure~\\ref{fig:cdaw} gives a combined overview on the series of activity pulses covering the two major flare-CME-SEP events during September 6--10, 2017. In total, active region NOAA 12673 produced an intense solar storm period revealing five X-class flares and 39 M-class flares (including the two largest flares from solar cycle 24, the X9.3 flare on September 6, 2017 and the X8.2 flare on September 10, 2017). The first SEP event was measured in the GOES channels at 1AU over September 6, 2017 12:15UT--September 7, 2017 23:25UT and is related to the halo CME that occurred on the Sun on September 6, 2017 12:24UT (first observation in LASCO\/C2) with a projected speed of 1570~km\/s over the coronagraph field of view. The CME has the source region location coordinates S08W33 and is launched together with a flare that started on September 6, 2017 11:53UT and reached X9.3 class in the measured GOES SXR flux. The second flare-CME-SEP event occurred September 10, 2017 (SEP: September 10, 2017 16:25UT--September 11, 2017 11:40UT; halo CME: September 10, 2017 16:00 with 1490~km\/s; flare: X8.2 class on September 10, 2017 15:35UT) with the source region located behind the west limb. For both events long-duration high-energy gamma-ray emission was detected by the Fermi-Large Area Telescope, having durations exceeding 15 hours and with that being the third and fifth largest among all detected \\citep[][]{longo17,gopalswamy18,omodei18}. \n\n \n\\begin{figure}\n \\includegraphics[width=\\textwidth]{action3.pdf}\n\\caption{Top panels: EUV images from September 6, 2017 observed with SDO AIA (running ratio of composite image data) and Proba-2\/SWAP (difference image of 171\\AA\\ data). Bottom panels: LASCO\/C2 and C3 coronagraphs covering a field of view up to 30 solar radii. The generated SEPs are accelerated to relativistic speeds producing spikes in the image data (``snowstorm'' effect). This event was the first flare event in a sequence of X-class flares on 6, 7, and 10 September 2017 causing strong disturbances at Earth and Mars. Note that the field of view of Proba-2 is larger compared to AIA and that due to different CCD techniques no saturation effects are visible in Proba-2. 
Movies for each panel are available online.}\n\\label{fig:sep17} \n\\end{figure}\n\nFigure~\\ref{fig:sep17} impressively shows the manifestation of the eruptive event from September 6, 2017 around 12~UT in solar observations using EUV image and white-light coronagraph data. To make changes visible, different techniques are used, such as running-ratio images, as applied to the SDO\/AIA composite image, and difference images, as applied to the Proba-2\/SWAP 171\\AA\\ image or to the LASCO white-light images. SDO and Proba-2 processed images nicely reveal the flare component (X9.3 class), a coronal surface wave that was ignited, and the CME flank on-disk as well as above the limb, visible in EUV due to compression and heating of the plasma. The off-disk counterpart of the coronal wave is observed in white-light as the shock wave surrounding the CME body. In white-light coronagraph data from LASCO C2 and C3 the CME is detected as a partial halo event. The projected CME speed over the field of view up to 20 solar radii was measured from LASCO data as $\\sim$1500~km\/s. SEPs in the form of relativistic protons hit the CCD camera of LASCO within minutes and generate the well-known ``snowstorm'' effect. Heavy snowstorms strongly disturb the image quality, which may complicate the analysis of halo events (especially during times of increased solar activity). \n\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{action4.pdf}\n\\caption{Left panel: location of flare ribbons (orange contour), dimming areas (cyan contour) and post-eruptive arcade (PEA, magenta contour) derived for the eruptive event from September 6, 2017. The greyscale background HMI line-of-sight magnetogram is scaled $\\pm$100~G with black and white representing negative and positive polarities, respectively \\citep[adapted from][]{scolini20}. Right panel: cartoon giving the relation between flare, CME erupting flux rope, coronal wave and dimming areas. }\n\\label{fig:scolini} \n\\end{figure}\n\n\nFigure~\\ref{fig:scolini} shows for the September 6, 2017 event the associated dimming region and post-eruptive arcade together with the magnetic field. The core dimmings are accompanied by secondary dimmings covering larger areas. The cartoon to the right in Figure~\\ref{fig:scolini} depicts the relation between different features of an eruptive event and associated solar surface signatures. E.g., measurements of the post-eruptive arcade (PEA) areas and their underlying magnetic field were taken to derive the reconnected flux to feed CME propagation models using magnetized CMEs \\citep[][]{scolini20}. The orientation of the PEA with respect to the underlying magnetic field revealed the handedness of the flux rope. Coronal waves initiated close to the eruption site hint towards the main compression direction of the eruption, hence a southward propagation direction of the associated CME (confirmed by the coronagraph images; cf.\\,Figure~\\ref{fig:sep17}). Therefore, monitoring surface structures gives additional (and early) information about a potential Space Weather event that might affect Earth, and moreover, provides valuable input for modeling efforts.\n\n\nThe multiple CME events from September 6--9, 2017 preconditioned interplanetary space and interacted with each other, which intensified their geomagnetic effects. Figure~\\ref{fig:werner} shows the complex in-situ signatures revealing multiple shocks and magnetic ejecta regions. Note that the shock of ICME2 propagated into the magnetic structure of ICME1. 
These so-called ``shock-in-a-cloud'' events are found to cause stronger geomagnetic responses than isolated geoeffective CMEs \\citep{lugaz15}. For the September 2017 events, the shock compression might have enhanced the geoeffectiveness by a factor of 2 \\citep{shen18}.\n\n\\begin{figure}\n \\includegraphics[width=\\textwidth]{action5.pdf}\n\\caption{Interacting in-situ signatures for the multiple CME events from September 6--9, 2017. Magnetic field and plasma data are given from WIND at L1. Top to bottom: Total magnetic field and vector components, dynamical pressure, proton temperature, and plasma-beta. Taken from \\cite{Werner2019ModelingWSA-ENLIL+Cone}. }\n\\label{fig:werner} \n\\end{figure}\n\nAs the region had rotated close to the West limb of the Sun, the event from September 10 occurred partially occulted and, because of the favorable magnetic connectivity, generated a ground level enhancement (GLE). Figure~\\ref{fig:bruno19} gives the SEP proton fluxes measured near Earth (ACE, GOES) together with the geomagnetic Dst index and neutron monitor profiles showing the Forbush decrease related to the arrival of the closed magnetic structure \\citep[for more details see][]{bruno19}. Since the eruption site was located close to the limb, measurements of the CME kinematics were less strongly affected by projection effects. From EUV observations with the wide field-of-view Solar Ultraviolet Imager on board the GOES-16 spacecraft, exceptionally high values for the CME acceleration and speed were derived, revealing the huge expansion in both the radial and lateral direction \\citep{Gopalswamy18_sept17,seaton18,veronig18}. This caused the initiation of a coronal wave propagating over the entire solar surface \\citep{liu18}.\n\n\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{action6.pdf}\n\\caption{Proton intensities measured by ACE\/EPAM, GOES\/EPEAD, and GOES\/HEPAD. Dst index and count rate variations registered by SOPO and MGDN neutron monitor stations. The vertical dotted and dashed lines mark the onset of the SEP events and the time of the shocks, respectively. The green, orange, and gray areas indicate the periods of the ICME, magnetic cloud, and high speed streams, respectively. Taken from \\cite{bruno19}.}\n\\label{fig:bruno19} \n\\end{figure*}\n\n\nThe September events not only affected the Earth, but were also registered at Mars. Energetic particles were observed by the MAVEN orbiter, and on the surface by the Curiosity Mars rover \\citep[e.g.,][]{hassler18}. Many further aspects of the September 2017 activity period are well represented in a collection of publications in the AGU Space Weather journal\\footnote{\\url{https:\/\/agupubs.onlinelibrary.wiley.com\/doi\/toc\/10.1002\/(ISSN)1542-7390.SW-SEPT2017}}.\n\n\n\\section{Space weather forecasting models}\\label{sec:6}\n``Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.'' by \\cite{box76}. \\\\\n\n\nFor comprehensive investigations of the structure of interplanetary space, propagating transient events, and magnetic connectivity, we need to rely on modeling efforts. Having only sparsely distributed single-point in-situ measurements, and lacking plasma and magnetic field information derived from remote-sensing imaging techniques, we are clearly limited in our ability to assess the performance and reliability of these models, which hinders improvements. 
Nevertheless, the close collaboration between model developers and observational community is a need to push forward our understanding of CME-solar wind coupling and interaction processes in interplanetary space. Also on kinetic scales, particle acceleration processes and trigger mechanisms of flare-CME-SEP events need to be supported by modeling, as from observational data the information are not sufficient to make conclusive interpretations about the underlying physics. In recent years substantial progress has been made in efficiently combining models with observations. Data assimilation techniques are known to hugely improve operational forecasting models, as observational data are incorporated in a self consistent way into numerical models for increasing the model accuracy. Examples about the application to solar data can be found in e.g., \\cite{Schrijver03,arge10}. In heliospheric physics this method is difficult to apply due to the relatively sparse observations. Recent efforts comprise in-situ measurements from 1~AU to update and improve inner-boundary conditions of solar wind models \\citep[see e.g.,][]{lang19}. \n\n\nForecasting flares and SEPs with a lead time of at least a few hours is related to the forecasting of the evolution of active regions producing eruptive solar flare events. However, that means to estimate the emergence of magnetic flux from below the photosphere that is not accessible from direct observations. Therefore, statistical relations between photospheric magnetic field characteristics of active regions and flare occurrence are usually used for solar flare forecasting. More precisely, it is the time evolution of the magnetic parameters that plays a major role, but is very complex to derive. The prediction of major flares seems to be more easily achievable compared to flares of medium to low energy, however, the uncertainties coming from the statistical approach are rather large preventing from more accurate single event prediction \\citep[see e.g.,][]{schrijver07,georgoulis07,bloomfield12}. Real-time solar flare forecasting is provided by e.g., the EU project Flarecast\\footnote{\\url{http:\/\/flarecast.eu}} with a fully automated system and the NASA\/CCMC flare forecasting scoreboard provides a platform to test different forecasting methods fostering further development of the available algorithms\\footnote{\\url{https:\/\/ccmc.gsfc.nasa.gov\/challenges\/flare.php}}. In general, having standardized metrics is a necessity in order to reliably validate and cross-check the performance of the different flare forecasting tools \\citep{barnes08}. See also a review on the origin, early evolution and predictability of solar eruptions by \\cite{green18}.\n\n\nFor forecasting CME arrival times and impact, a vast amount of models is currently available in various levels of complexity \\citep[see, e.g., Table~1 in][]{riley18}. Simple models, such as empirical relations between CME transit time and speed from statistical results provide tools to derive an average behavior of CME propagation in interplanetary space \\citep{Gopalswamy01}. The forecasting accuracy can be significantly improved, when using observational data that can track the CME kinematics to beyond a distance of 50 solar radii (e.g., using image data from STEREO\/HI, or radio IPS) where the CME is assumed to evolve in a linear way \\citep[see][]{colaninno13,rollett16,hess17}. 
Several analytical models include the physics of drag force (viscous, aerodynamic, hybrid - that means a linear or quadratic relation between solar wind and CME speed difference) that a CME experiences in interplanetary space. A widely used analytical model is the drag-based-model \\citep[DBM; see][]{vrsnak13}. It applies the aerodynamic drag as analogon for the MHD drag force exerted on the CME embedded flux rope. \nMore sophisticated are numerical MHD models such as e.g., EUHFORIA \\citep{pomoell18_euhforia}, ENLIL \\citep{odstrcil99_enlil}, CORHEL \\citep{Riley12} or SUSANOO \\citep{shiota16}. Besides simulating transient events, MHD models also cover the variation in the background solar wind \\citep[see also][]{arge00}. A compilation of data, services and tools can be found at CDPP (Plasma physics data center in France)\\footnote{\\url{http:\/\/cdpp.irap.omp.eu}}. For CMEs, the disentanglement between shock and magnetic ejecta is an important issue highly relevant for forecasting. Numerical models usually use a simple pressure pulse to ignite a CME shock front, but do not include the magnetic structure. There are recent efforts of magnetized CMEs incorporating the observed reconnected magnetic flux at the Sun as model input parameter for the CME flux rope \\citep[e.g.][]{Scolini19,singh19}. \n\nFor validation purposes and for increasing the awareness of the limitations of the accuracy of the model results, the uncertainties of observational model input data need to be quantified. \\cite{riley18} summarized hit and miss statistics from the CCMC scoreboard\\footnote{\\url{https:\/\/kauai.ccmc.gsfc.nasa.gov\/CMEscoreboard\/}}, a CME prediction board that gives the possibility to use different models in real-time forecasting (crude facts approach that challenges models and their users). Metadata and metrics are suggested to give the community a common base for an objective inter-comparison of their models \\citep{Verbeke19}. A review on the current status and open issues on CME propagation and forecasting methods is given by \\cite{vourlidas19}.\n\nCompared to the prediction of CME arrival times and impact speeds, SEP forecasting is a more tricky issue as the accelerated particles propagate with fractions of the speed of light and the lead time is of only a few minutes. Several models are available, mostly using empirical relations based on statistical relations with CME-flare locations or type II radio burst occurrence at decametric\u2013hectometric (DH) wavelengths. Current models cover e.g., PROTONS \\citep{balch08}, PPS \\citep{kahler07}; ESPERTA \\citep{Laurenza09}, FORSPEF \\citep{anastasiadis17} as well as physics based models such as e.g., SOLPENCO \\citep{Aran06} or HESPERIA \\citep[][and references therein]{Malandraki18_book}. Incorporating CME characteristics is difficult as the real-time coronagraph image data, from which usually the CME parameters are derived, are delivered with some delay. Nevertheless, the SEPForecast tool resulting from the EU project COMESEP\\footnote{\\url{http:\/\/comesep.aeronomy.be}} also uses CME parameters as model input \\citep{Dierckxsens15}. The application of solar surface proxies for some CME parameters might overcome that drawback. A detailed comparison of false alarm rates for SEP forecasting is presented in e.g., \\cite{Alberti17}. The current status of forecasting and nowcasting of SEPs and open questions is given in a review by \\cite{Anastasiadis19}. 
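As an illustration of the drag-based approach described above, the following minimal sketch integrates the DBM equation of motion, $\\dot{v}=-\\gamma\\,(v-w)\\,|v-w|$, numerically from a starting distance out to 1~AU; all parameter values (initial speed, ambient wind speed, drag parameter) are illustrative assumptions only and are not tied to any specific event or operational implementation.\n\\begin{verbatim}\n# Minimal drag-based-model (DBM) sketch: dv/dt = -gamma * (v - w) * |v - w|.\n# All numbers are illustrative assumptions (not fitted to any event).\nAU_KM = 1.496e8        # 1 AU in km\nRSUN_KM = 6.96e5       # solar radius in km\n\ndef dbm_arrival(r0_rsun=20.0, v0=1500.0, w=450.0, gamma=2.0e-8, dt=60.0):\n    # Euler integration of the CME apex from r0 [solar radii] to 1 AU.\n    # Speeds in km/s, gamma in 1/km, dt in s; returns transit time [days]\n    # and arrival speed [km/s].\n    r, v, t = r0_rsun * RSUN_KM, v0, 0.0\n    while r < AU_KM:\n        a = -gamma * (v - w) * abs(v - w)\n        v += a * dt\n        r += v * dt\n        t += dt\n    return t / 86400.0, v\n\nt_days, v_arrival = dbm_arrival()\nprint('transit time [days]:', round(t_days, 1), ' arrival speed [km/s]:', round(v_arrival))\n\\end{verbatim}\nFor the parameters chosen here the sketch yields a transit time of the order of two days; such simplified estimates are highly sensitive to the assumed drag parameter and ambient wind speed.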
\n\n\nFurther approaches for improving forecasting purposes also cover ensemble models incorporating the uncertainties in the observational data \\citep[e.g.,][]{lee13,mays15,Dumbovic2018ThePropagation,Amerstorfer2018EnsembleImagers}, and machine learning techniques \\citep[see e.g.,][]{camporeale19}. In addition to methods covering large statistics, case studies that model Space Weather events from Sun to Earth, give a wealth of detailed information from which we can hugely improve our understanding of flare-CME-SEP events. Community centers like ESA\/VSWMC\\footnote{\\url{https:\/\/esa-vswmc.eu}} or NASA\/CCMC\\footnote{\\url{https:\/\/ccmc.gsfc.nasa.gov}} cover the increased need of computational power and appropriate IT infrastructure and provide a platform for models to be tested and actually used. Such platforms are also the driveway for R2O (research to operation) activities and where scientists and users meet. A collection of Space Weather tools is presented at the ESA\/SSA website, including services from the European Expert Service Centers and their individual Expert Groups (cf.\\,Figure~\\ref{fig:ssa}). \n\n\\begin{figure*}\n \\includegraphics[width=1.\\textwidth]{model1.pdf}\n\\caption{Webpage of the SSA Space Weather Service Network where Expert Service Centers provide their tools and services covering Space Weather forecast and overview from the solar-heliosphere perspective to space radiation, ionospheric and geomagnetic conditions. ESA\/SSA: \\url{http:\/\/swe.ssa.esa.int}}\n\\label{fig:ssa} \n\\end{figure*}\n\n\n\\section{Concluding remarks}\nNowadays, the term Space Weather covers basic research as well as application and is a platform for strong interdisciplinary research bringing the domains of solar-, heliospheric, and geo-physics closer together. The real-time forecasting results of flare-CME-CIR-SEP events at Earth using operational tools reveal that we still face large uncertainties. The reason is on the one hand the huge complexity of the solar phenomena and their unknown coupling processes and propagation behavior in interplanetary space. On the other hand, there is a clear lack of accurate enough measurements to properly feed the available models. The source of high errors and inaccuracies in the measurements is due to extraction of remote sensing data which are affected by projection effects, idealized assumptions of magnetic field extrapolations (lack of coronal magnetic field measurements) and 3D reconstructions, as well as localized in-situ measurements making it tricky to validate all the results. For understanding and forecasting the geoeffectiveness of Earth affecting Space Weather events, a far bigger challenge for the future is the modeling of the interplanetary magnetic field orientation and variation. \n\n\nA clear advantage is given by the combination of several instruments having multiple views on the Sun covering different longitudes and latitudes. STEREO-A is still operational providing data material from varying angles. PSP will be the first spacecraft that comes as close to the Sun as 10 solar radii and aims to answer questions about the origin of the solar wind, how is it accelerated or about SEP acceleration and transport processes. The mission already gathered unprecedented data enabling new interpretation and finding on the source regions of the solar wind. Solar Orbiter revealed first data in July 2020 and will study the solar surface and its magnetic field in great detail. 
The spacecraft will have a highly elliptical orbit to progressively move to a more inclined orbit out of ecliptic. That perspective will give new insight onto the polar regions of the Sun and is expected to improve magnetic field modeling and to better understand the solar wind from coronal holes in the polar regions.\nConjunctions with other active missions (such as STEREO, L1 missions, MAVEN), complemented by ground-based instruments, will give unique possibilities to investigate the evolution of solar flares, CMEs, or SEPs, as well as coronal holes and solar wind high speed streams. Future missions, such as the ESA \\textit{Lagrange} mission to L5 (launch is planned for 2027) or polar missions, as the proposed Solaris Solar Polar Mission \\citep{hassler20}, will provide more valuable data and fill the gaps in our understanding of magnetic field connectivity and coupling processes between open and closed magnetic field structures in interplanetary space. \n\n\nWe are facing challenging and exciting times having a wealth of data at hand and promising future missions yet to come. Many of the currently available data are well prepared in ready-to-use catalogues, waiting to be explored. However, we still need to obtain data from L4\/L5 Lagrange points revealing the necessary side views on a regular basis to fully track eruptions with low projection effects and for gaining a better understanding of their characteristics that affect Earth. Moreover, we need to improve solar wind models to be used in MHD simulations for properly determining CME propagation. With that we will successfully enhance the reliability of Space Weather forecasting tools and models.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro_sec} \n\nNon-local models are used for the description of several complex phenomena for which a local approach is inappropriate or limiting. In the past, they have been successfully employed in different fields, such as material sciences (\\cite{bates2006some}), dislocation dynamics in crystals (\\cite{dipierro2015dislocation}), image processing (\\cite{gilboa2008nonlocal}) or elasticity (\\cite{silling2000reformulation}).\n\nNon-local terms appear also in many diffusion models in biology, where they describe biological systems on scales that are convenient to observation and data collection. A classical example is the chemotaxis phenomena, which can be modeled by non-local differential equations of diffusion type. For instance, in \\cite{burger2006keller} a non-local Keller-Segel model is used to study the chemotaxis of a cell population in which the chemicals are produced by the cells themselves. Furthermore, one can find numerous examples of other non-local diffusion phenomena in the book \\cite{okubo2013diffusion}, where they are presented several systems involving, for instance, random walks and random population diffusion. \n\nMotivated also by the aforementioned applications, these types of problems have recently raised the interest of the mathematical community. In particular, we may refer to the monograph \\cite{andreu2010nonlocal} for an extended mathematical survey containing different examples of non-local diffusion models. \n\nIn this paper, we consider a diffusion spatially non-local equation, where the non-locality is given by the presence of an integral kernel. In particular, we are interested in the analysis of control properties. \n\nControl Theory deals with dynamical systems that can be controlled, i.e. 
whose evolution can be influenced by some external agent. In this framework, a first problem one could face is that of \\textit{controllability}, that is, to study the possibility of steering the system from one configuration to another by employing one or more controls applied through actuators. If controllability to a given final state is granted, one can try to reach this state by minimizing\nsome cost, thus defining an \\textit{optimal control} problem. The mathematical theory of optimal control has rapidly developed in the last decades into an important and separate field of applied mathematics. Actually, applications of optimal control of parabolic equations span a large spectrum of fields including aviation and space technology, robotics, movement sequences in sports, and the control of chemical processes.\n\nFor the non-local diffusion models that we are going to consider in the present work, which will be introduced in the next section, controllability has already been treated in some recent papers (see, for instance, \\cite{biccari2017null,fernandez2016null,lissy2018internal,micu2017local}). On the other hand, to the best of our knowledge, the analysis of optimal control problems has never been addressed in this setting. Hence, the main purpose of our work is to give an insight into some classical optimal control questions which arise naturally from the biological applications behind the mentioned models. In this framework, we will focus our attention on two different situations:\n\\begin{enumerate}\n\t\\item Firstly, we will deal with a classical optimal control problem, in which we are interested in minimizing a given cost functional coming from controllability theory. This cost functional will typically characterize the optimal control of minimal energy which allows one to reach some predetermined configuration.\n\t\\item Secondly, we will consider the case of missing or incomplete initial data. This is a very typical situation when studying certain natural phenomena, for instance in ecology or genetics, where the employment of models with no missing data may lead to a misunderstanding of the underlying processes. \n\\end{enumerate} \n\nThis second problem will be addressed by using the concepts of \\textit{low-regret} or \\textit{least-regret} controls, which have been introduced by J.-L. Lions in \\cite{lions1992controle}, and may be well-adapted to the non-local problem we are considering. \n\nThese techniques were designed precisely for dealing with the optimal control of models where missing data are incorporated. They consist of transferring the optimal control problem with missing data to a classical one, relaxing the problem of control by a sequence of low-regret controls. This idea was first applied to linear models (\\cite{gabay1994decisions,lions1999duality}), and it was later extended to the nonlinear setting in \\cite{nakoulima2002no}. Afterwards, many other authors applied these ideas to other control problems for PDEs with incomplete data. The interested reader may refer, for instance, to \\cite{dorville2004low,jacob2010optimal,louison_1,mahoui2017pointwise}. In this work, we will see how the low-regret and least-regret approaches can also be applied in the non-local context.\n\nThe present paper is organized as follows: in Section \\ref{sec:2}, we present the problem we are going to consider. 
In Section \\ref{sec:3}, we address some elementary optimal control questions associated with our model, while in Section \\ref{sec:4}, we address the issue of missing initial data. Finally, Section \\ref{sec:5} is devoted to some numerical experiments. \n\n\\section{Problem formulation and existing controllability results}\\label{sec:2} \n\nLet $\\Omega$ be a bounded domain of $\\RR^d$, $d\\geq 1$, with boundary $\\partial\\Omega$ of class $\\mathcal{C}^2$. Given a time horizon $T>0$, we set $Q:=\\Omega\\times (0,T)$ and $\\Sigma:=\\partial\\Omega\\times(0,T)$. Let $K(x,\\theta,t)\\in L^\\infty(\\Omega\\times\\Omega\\times(0,T))$, and consider the following linear non-local parabolic problem\n\\begin{align}\\label{direct_pb_y}\n\t\\begin{cases}\n\t\t\\dis\\frac{\\partial y}{\\partial t}-\\Delta y+ \\int_\\Omega K(x,\\theta,t)\\,y(\\theta,t)\\,d\\theta = v{\\bf 1}_{\\mathcal{O}}, & (x,t)\\in Q,\n\t\t\\\\\n\t\ty = 0, &(x,t)\\in \\Sigma,\n\t\t\\\\\n\t\ty(x,0) = y_0(x), & x\\in\\Omega.\n\t\\end{cases}\n\\end{align}\n\nIn \\eqref{direct_pb_y}, $y:=y(x,t)$ is the state function and $v:=v(x,t)$ is the control function. The latter acts on the system through the non-empty set $\\mathcal{O}\\subset\\Omega$. Here, ${\\bf 1}_{\\mathcal{O}}$ denotes the characteristic function of $\\mathcal{O}$. \n\nMoreover, we assume that $y_0\\in L^2(\\Omega)$ and $v\\in \\mathcal{U}:=L^2(\\mathcal{O}\\times(0,T))$, so that the system \\eqref{direct_pb_y} admits a unique solution\n\\begin{align*}\n\ty\\in L^2((0,T); H^1_0(\\Omega))\\cap H^1(0,T; H^{-1}(\\Omega)),\n\\end{align*}\nwhich satisfies classical energy estimates. Actually, this remains true also if $v{\\bf 1}_{\\mathcal{O}}$ is replaced by a more general right-hand side $f\\in L^2(0,T; H^{-1}(\\Omega))$.\n\nNon-local diffusion equations of the form \\eqref{direct_pb_y} were introduced as an extension of already existing advection-diffusion-reaction models of multi-species ecosystems. They take into account the fact that, in many biological situations, local movement is also coupled with long-range influences, such as the combination of clonal growth and a dispersing phase like seeds. One example of a model for this situation was developed by Furter and Grinfeld (\\cite{furter1989local}), who examined diffusion-reaction models of single-species dynamics that incorporate a reaction term dependent on characteristics of the population as a whole.\n\nConcerning the controllability of \\eqref{direct_pb_y}, this issue was first studied in \\cite{fernandez2016null}, where the authors proved an interior null controllability result by imposing analyticity assumptions on the kernel in order to obtain unique continuation properties. In this framework, coupled systems have also been treated in \\cite{lissy2018internal}. Moreover, in \\cite{micu2017local}, analogous results have been obtained for a one-dimensional equation with a kernel in separated variables, by means of classical spectral analysis techniques. Later on, these results have been extended in \\cite{biccari2017null}, where the null-controllability is proved under weaker assumptions on the kernel and for both the linear and the semilinear case, by using a Carleman approach. 
In particular, the authors there proved the following result.\n\n\\begin{theorem}[{\\cite[Theorem 1.1]{biccari2017null}}]\\label{theo1_bhs}\nLet $T>0$ and assume that $K\\in L^\\infty(\\Omega\\times\\Omega\\times(0,T))$ satisfies \n\\begin{align}\\label{K_est_weak}\n\t\\mathcal{K}:=\\sup_{(x,t)\\in\\overline{Q}}\\exp\\left(\\frac{\\mathcal{A}}{t(T-t)}\\right)\\int_{\\Omega} |K(x,\\theta,t)|\\,d\\theta <+\\infty,\n\\end{align}\nfor a given positive constant $\\mathcal A$ only depending on $\\Omega$. Then, for any $y_0\\in L^2(\\Omega)$, there exists a control function $v\\in \\mathcal U$ such that the associated solution $y$ of \\eqref{direct_pb_y} satisfies $y(x,T) = 0$.\t\n\\end{theorem}\t\n\nWe shall stress that, according to the hypothesis \\eqref{K_est_weak}, to get a positive controllability result the kernel $K$, as a function of $t$, should behave like \n\\begin{align*}\n\tK(\\cdot,\\cdot,t) \\sim \\exp\\left(-\\frac{\\mathcal B}{t(T-t)}\\right),\n\\end{align*}\ni.e. it should decay exponentially as $t$ goes to $0$ and $T$. \n\nThis is, however, a quite strong restriction on the admissible kernels. Actually, only rapidly decaying or compactly supported (in time) kernels verify \\eqref{K_est_weak}.\n\nIn some specific situations, this assumption is actually not necessary. For instance, in the particular case when $K=|\\Omega|^{-1}\\textnormal{cst.}$, assumption \\eqref{K_est_weak} has been recently removed in \\cite{hernandez2020local}. Using the so-called shadow model (see \\cite{HR00,MCHKS18} or \\cite{HSZ20} for other applications in control), the authors have proved that the system \n\\begin{equation}\\label{eq:nonlocal_nonlinear}\n\t\\frac{\\partial y}{\\partial t}-\\Delta y + a y + b\\, \\Xint-_{\\Omega} y(\\theta,t)d\\theta = v \\mathbf{1}_{\\mathcal O}\n\\end{equation}\nwhere $a,b\\in\\mathbb R$ and \n\\begin{align*}\n\t\\Xint-_\\Omega y(\\theta,t)d\\theta:=|\\Omega|^{-1}\\int_{\\Omega}y(\\theta,t)d\\theta,\n\\end{align*}\nis null-controllable at time $T>0$. Obviously, since the kernel $K$ is constant, the controllability result in Theorem \\ref{theo1_bhs} does not apply directly. We also emphasize that this result extends the one in \\cite{fernandez2016null}, since constant kernels do not satisfy the analyticity assumptions considered in that work. \n\nWhile the controllability of \\eqref{direct_pb_y} has already been studied in the aforementioned references, to the best of our knowledge the optimal control for this same problem is yet to be addressed. This is exactly the purpose of the present paper, in which this issue will be treated from two different viewpoints. We will first address a classical optimal control problem, in which we are interested in minimizing a given functional characterizing the control $v$ employed in \\eqref{direct_pb_y}. 
In more detail, we consider the strictly convex cost functional $J:\\mathcal U\\to\\RR$ defined by\n\\begin{equation}\\label{quadratic_cost}\n\tJ(v)={\\beta}\\norm{y(v)-\\bar{z}}{L^2(Q)}^2 +\\mu\\norm{v}{\\mathcal{U}}^2, \\qquad \\beta,\\,\\mu>0,\n\\end{equation}\nwhere we denote by $y=y(v)$ the dependence of the state on the control function $v$, and we will study the optimal control problem \n\\begin{equation}\\label{min_J_part1}\n\t\\inf_{v\\in\\mathcal{U}} J(v).\n\\end{equation}\n\nTo this end, we will first prove the existence of a minimizer and then characterize it through a suitable optimality system.\n\nIn \\eqref{quadratic_cost}, $\\bar{z}\\in L^2(\\Omega\\times(0,T))$ is the target we aim to reach and $\\mathcal U$ is a given set of admissible controls. Here we will consider the simple case in which $\\mathcal U = L^2(\\mathcal O\\times(0,T))$, although other choices are possible. Actually, in many practical situations realistic controls depend on the physical process that the equation is describing, and the set of admissible controls may be characterized by taking into account several constraints.\n\n\n\n\\section{Optimal control}\\label{sec:3}\n\nAs we mentioned in Section \\ref{intro_sec}, the mathematical theory of optimal control is nowadays very rich in the context of local equations, covering for instance linear and non-linear models, convex and non-convex functionals, and problems with control constraints. On the other hand, far fewer results are available in the framework of non-local models. Actually, to the best of our knowledge, the optimal control problem for \\eqref{direct_pb_y} has never been addressed before. \n\nIn this section, we will study the optimal control problem \\eqref{min_J_part1}. In more detail, we will first prove the existence of a minimizer for the functional $J$ and then give a characterization through the optimality system. Moreover, by means of classical techniques in control theory, in what follows we will need to rely on the so-called \\textit{adjoint equation}, which is given by \n\\begin{align}\\label{adjoint_pb_p}\n\t\\begin{cases}\n\t\t\\displaystyle -\\frac{\\partial p}{\\partial t}-\\Delta p+\\int_\\Omega K(\\theta,x,t)\\,p(\\theta,t)\\,d\\theta = \\beta(y(v)-\\bar{z}), & (x,t)\\in Q,\n\t\t\\\\\n\t\tp = 0, & (x,t)\\in \\Sigma,\n\t\t\\\\\n\t\tp(x,T) = 0, & x\\in\\Omega.\n\t\\end{cases}\n\\end{align}\n\n\\subsection{Existence of an optimal control}\n\nWe have the following.\n\\begin{proposition}\\label{exist_prop_sect1} \nThere exists an optimal control function $v\\in\\mathcal{U}$, unique solution to the minimization problem \\eqref{quadratic_cost}, \\eqref{min_J_part1}.\n\\end{proposition}\n\n\\begin{proof}\nThe result is a consequence of the so-called \\textit{Direct Method of the Calculus of Variations} (see \\cite{brezis2010functional}), which ensures the existence and uniqueness of a minimizer provided the functional $J$ satisfies the following assumptions:\n\\begin{itemize}\n\t\\item[1.] $J$ is lower semi-continuous.\n\t\\item[2.] $J$ is strictly convex, i.e. $J((1-\\lambda)v+\\lambda w) < (1-\\lambda)J(v)+\\lambda J(w)$ for all $\\lambda\\in (0,1)$ and $v,w\\in\\mathcal U$.\n\t\\item[3.] $J$ is coercive, i.e. $\\lim_{\\norm{v}{\\mathcal U}\\to+\\infty}J(v) = +\\infty$.\t\n\\end{itemize}\n\nThe first two properties are evident since the functional is quadratic. Concerning the third one, it is enough to see that $J(v)\\geq \\mu\\norm{v}{\\mathcal{U}}^2$. This ends the proof. 
\n \\end{proof}\n\n\\subsection{Characterization of the optimal control}\n\nNotice that Proposition \\ref{exist_prop_sect1} guarantees the existence of a unique solution to our optimal control problem \\eqref{quadratic_cost}-\\eqref{min_J_part1}, but it does not allow us to characterize it. Nevertheless, in practical applications an explicit knowledge of the control is also important; it is typically computed through the so-called \\textit{optimality system}. In order to introduce this system, we will first need the following technical lemma.\n\n\\begin{lemma} \nLet $v\\in\\mathcal U$ be the optimal control minimizing the functional $J$. For all $w \\in \\mathcal{U}$, we have the identity\n\\begin{equation}\\label{euler_lagrange_part1}\n\t{\\beta}\\langle(y(v)-\\bar{z}),y(w)\\rangle_{L^2(Q)} + \\mu\\langle v,w\\rangle_{\\mathcal{U}} = 0.\n\\end{equation}\n\\end{lemma} \n\n\\begin{proof}\nWe use the Euler-Lagrange necessary condition satisfied by the optimal control $v\\in \\mathcal{U}$. Thanks to the linearity of the problem with respect to the control function, for every $\\lambda>0$ and $w\\in \\mathcal{U}$ we have $y(v+\\lambda w)=y(v)+\\lambda y(w)$, where here and in what follows $y(w)$ denotes the solution of \\eqref{direct_pb_y} with control $w$ and zero initial datum. Then,\n\\begin{align*}\n\tJ(v+\\lambda w)-J(v) &= {\\beta}\\norm{y(v+\\lambda w)-\\bar{z}}{L^2(Q)}^2 + \\mu\\norm{v+\\lambda w}{\\mathcal{U}}^2 - {\\beta}\\norm{y(v)-\\bar{z}}{L^2(Q)}^2 - \\mu\\norm{v}{\\mathcal{U}}^2\n\t\\\\\n\t&={\\beta\\lambda^2}\\norm{y(w)}{L^2(Q)}^2 + 2\\beta\\lambda\\langle y(v)-\\bar{z},y(w)\\rangle_{L^2(Q)} + \\mu\\lambda^2\\norm{w}{\\mathcal{U}}^2 + 2\\mu\\lambda\\langle v,w\\rangle_{\\mathcal{U}}.\n\\end{align*}\nHence, the Euler-Lagrange necessary condition gives\n\\begin{align*}\n\t\\lim_{\\lambda\\rightarrow 0} \\left(\\frac{J(v+\\lambda w)-J(v)}{\\lambda}\\right) =2\\left({\\beta} \\langle y(v)-\\bar{z}, y(w)\\rangle_{L^2(Q)} + \\mu\\langle v,w\\rangle_{\\mathcal{U}}\\right) = 0, \\; \\mbox{ for all } w\\in\\mathcal{U}.\n\\end{align*}\n\\end{proof}\n\n\\noindent We can now give the following characterization of the optimal control.\n\n\\begin{proposition}\\label{prop_characterization_part1}\nThe optimal control $v$ is characterized by the triplet \n\\begin{align*}\n\t(v,y,p)\\in \\mathcal{U}\\times L^2(Q)\\times L^2(Q),\n\\end{align*}\nunique solution to the optimality system\n\\begin{equation}\\label{sos_part1}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{\\partial y}{\\partial t}-\\Delta y+\\int_\\Omega K(x,\\theta,t)\\,y(\\theta,t)\\,d\\theta = v{\\bf 1}_{\\mathcal{O}}, & (x,t)\\in Q,\n\t\t\\\\[9pt]\n\t\t\\displaystyle -\\frac{\\partial p}{\\partial t}-\\Delta p+\\int_\\Omega K(\\theta,x,t)\\,p(\\theta,t)\\,d\\theta = \\beta(y(v)-\\bar{z}), & (x,t)\\in Q,\n\t\t\\\\[9pt]\n\t\ty=p=0, & (x,t)\\in \\Sigma,\n\t\t\\\\[9pt]\n\t\ty(x,0)=y_0(x), \\quad p(x,T) = 0, & x\\in\\Omega,\n\t\\end{cases}\n\\end{equation}\nwith\n\\begin{equation}\\label{adjoint_state_eq_part1}\n\t p+\\mu v=0, \\qquad \\mbox{in}\\quad \\mathcal{O}\\times (0,T).\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nWithout loss of generality, we can assume here that $y_0=0$, since in the case of non-zero initial data the result may be equivalently obtained by means of a simple change of variables. \n\nWe multiply \\eqref{direct_pb_y} by $p$ solution of \\eqref{adjoint_pb_p} and we integrate over the domain $Q$. 
By taking into account the boundary and initial conditions, we get by Fubini's theorem\n\\begin{align*}\n\t\\int_Q {\\beta} (y(v)-\\bar{z})y(w)\\,dxdt &=\\int_Q\\left(-\\frac{\\partial p}{\\partial t}-\\Delta p+ \\int_\\Omega K(\\theta,x,t)p(\\theta,t)\\,d\\theta\\right)y(x,t;w)\\,dxdt\n\t\\\\\n\t&=\\int_Q\\left(\\frac{\\partial y}{\\partial t}-\\Delta y+\\int_\\Omega K(x,\\theta,t)y(\\theta,t;w)\\,d\\theta\\right)p(x,t)\\,dxdt \n\t\\\\\n\t&= \\int_0^T\\int_{\\mathcal{O}}pw\\,dxdt.\n\\end{align*}\nThen, from \\eqref{euler_lagrange_part1} we immediately obtain $\\langle\\, p+\\mu v,\\, w\\rangle_{\\mathcal{U}}=0$ for all $w\\in \\mathcal{U}$,\nfrom which we get \\eqref{adjoint_state_eq_part1}.\n\\end{proof}\n\n\\section{Incomplete data problems}\\label{sec:4}\n\n\nIn this section, we consider the case of a missing or incomplete initial datum $y_0$, which is a very typical situation when studying natural phenomena governed by \\eqref{direct_pb_y}. This problem is described below and will be addressed by using the concept of no-regret controls, which has been introduced by J.-L. Lions in \\cite{lions1992controle}.\n\n\n\\subsection{Preliminaries}\n\nWe are interested in studying the control of the model \\eqref{direct_pb_y}-\\eqref{quadratic_cost} when it describes the dynamics of the density population $y(x,t)$ in the case in which the initial data is missing or incomplete. \n\nThis is a very typical situation when modeling many phenomena in physics, ecology, dynamic population, and several other fields. As a matter of fact, in those frameworks we may face the problem of incomplete data because of their practical inaccessibility, or because sometimes we have a great variety of possibilities when choosing for instance the initial conditions. In addition, boundary conditions may also be unknown or only partially known on a subset of the boundary that may be inaccessible to measurements. The same goes for source terms that can be difficult to access, or the structure of the domain, which can also be imperfectly known (for example in oil well management). \n\nIn our case, we will focus on a problem with missing initial datum. This means that this time we are assuming that $y_0$ is an unknown function, belonging to some vector closed subspace $G$ of $L^2(\\Omega)$. We are still concerned with optimal controls $v\\in\\mathcal{U}$, i.e. in solving\n\\begin{align*}\n\t\\inf_{v\\in {\\mathcal U}}J(v,y_0),\\qquad \\mbox{ for all } y_0\\in G.\n\\end{align*}\nwhere $J(v,y_0)$ denotes the explicit dependence on $y_0$ of the functional $J$ (see Lemma \\ref{lem:equiv_rest} for a precise description). Nevertheless, this minimization problem has actually no sense since $G$ is either the empty space or it has an infinite number of elements. \n\nTo deal with this problem, we use the low-regret concept of J.-L. Lions (see \\cite{lions1992controle}), which is well suited for incomplete data problems, and it is based on replacing \\eqref{min_J_part1} by\n\\begin{equation}\\label{min_J_part2}\n\t \\inf_{v\\in {\\mathcal U}}\\Bigg(\\sup_{y_0\\in G} \\Big(J(v,y_0)-J(0,y_0)-\\gamma\\|y_0\\|_G^2\\Big)\\Bigg),\n\\end{equation}\nwhere $\\gamma$ is a small parameter.\n\nThe meaning of \\eqref{min_J_part2} is to look for the control not making things worse with respect to doing nothing (i.e. case $v=0$). 
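To make this interpretation concrete, observe that the choice $v=0$ is always admissible in \\eqref{min_J_part2} and yields\n\\begin{equation*}\n\t\\sup_{y_0\\in G} \\Big(J(0,y_0)-J(0,y_0)-\\gamma\\|y_0\\|_G^2\\Big)=\\sup_{y_0\\in G}\\Big(-\\gamma\\|y_0\\|_G^2\\Big)=0,\n\\end{equation*}\nso that the value of \\eqref{min_J_part2} is always non-positive: up to the penalization term $\\gamma\\|y_0\\|_G^2$, any minimizer performs at least as well as the null control, uniformly with respect to the unknown datum $y_0$.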
A solution to \\eqref{min_J_part2} is called a low-regret optimal control (see \\cite{jacob2010optimal,lions1992controle,louison_1,nakoulima2000perturbations} for further information on the method).\n\nWe now show two preliminary results.\n\n\\begin{lemma}\\label{lem:equiv_rest}\nFor every $v\\in\\mathcal{U}$ and for any $y_0\\in G$, we have the property\n\\begin{equation}\\label{form0_part2}\n\tJ(v,y_0)-J(0,y_0)= J(v,0)-J(0,0)+ 2\\beta\\langle\\,y(v,0),y(0,y_0)\\rangle_{L^2(Q)}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe denote by $y(v,y_0):=y(x,t,v,y_0)$ the solution to \\eqref{direct_pb_y} with control $v\\in \\mathcal U$ and initial datum $y_0\\in G$. From the linearity of the problem \\eqref{direct_pb_y} we have $y(v,y_0)=y(v,0)+y(0,y_0)$. Then\n\\begin{align*}\n\tJ(v,y_0)-J(0,y_0) &= {\\beta}\\norm{y(v,y_0)-\\bar{z}}{L^2(Q)}^2 + \\mu\\norm{v}{\\mathcal{U}}^2-{\\beta}\\norm{y(0,y_0)-\\bar{z}}{L^2(Q)}^2\n\t\\\\\n\t&= {\\beta}\\norm{y(v,0)}{L^2(Q)}^2 + 2\\beta\\langle y(v,0),y(0,y_0)-\\bar{z}\\rangle_{L^2(Q)} + \\mu\\norm{v}{\\mathcal{U}}^2\n\t\\\\\n\t&= {\\beta}\\norm{y(v,0)-\\bar{z}+\\bar{z}}{L^2(Q)}^2 + 2\\beta\\langle y(v,0),y(0,y_0)-\\bar{z}\\rangle_{L^2(Q)} +\\mu\\norm{v}{\\mathcal{U}}^2\n\t\\\\\n\t&= J(v,0)-J(0,0)+ 2\\beta\\langle y(v,0),y(0,y_0)\\rangle_{L^2(Q)}.\n\\end{align*}\n\\end{proof}\n\n\\begin{lemma}\\label{lem:adjoint} \nLet $\\xi:=\\xi(x,t,v,0)$ be the solution of the adjoint problem\n\\begin{equation}\\label{adjoint_pb_xi}\n\t\\begin{cases}\n\t\t\\displaystyle -\\frac{\\partial \\xi}{\\partial t}-\\Delta \\xi+\\int_\\Omega K(\\theta,x,t)\\,\\xi(\\theta,t)\\,d\\theta = y(v,0), & (x,t)\\in Q,\n\t\t\\\\\n\t\t\\xi = 0, & (x,t)\\in \\Sigma,\n\t\t\\\\\n\t\t\\xi(x,T) = 0, & x\\in\\Omega.\n\t\\end{cases}\n\\end{equation}\nThen, we have\n\\begin{equation}\\label{form_part2}\n\tJ(v,y_0)-J(0,y_0)= J(v,0)-J(0,0)+2\\beta\\langle \\xi(0)\\,,\\,y_0\\rangle_{G',G},\n\\end{equation}\nwhere $\\xi(0):=\\xi(x,0,v,0)$ is the solution of the adjoint problem (\\ref{adjoint_pb_xi}) at time $t=0$ and where $G'$ is the topological dual of $G$.\n\\end{lemma}\n\\begin{proof}\nIf we multiply the first equation of \\eqref{adjoint_pb_xi} by $y(0,y_0)$ and we integrate by parts, we obtain\n\\begin{align*}\n\t\\langle y(v,0),y(0,y_0)\\rangle_{L^2(Q)} &= \\int_Q y(v,0)y(0,y_0)\\,dxdt\n\t\\\\\n\t&=\\int_Q\\left(-\\frac{\\partial \\xi}{\\partial t}-\\Delta \\xi+ \\int_\\Omega K(\\theta,x,t)\\,\\xi(\\theta,t)\\,d\\theta\\right)y(0,y_0)\\,dxdt\n\t\\\\\n\t&=\\int_Q\\left(\\frac{\\partial y}{\\partial t}-\\Delta y+\\int_\\Omega K(x,\\theta,t)\\,y(\\theta,t,0,y_0)\\,d\\theta\\right)\\xi(x,t)\\,dxdt + \\displaystyle\\int_\\Omega \\xi(0)y_0(x)\\,dx\n\t\\\\\n\t&= \\int_\\Omega \\xi(0)y_0(x)\\,dx,\n\\end{align*}\nwhere we used that $y(0,y_0)$ solves \\eqref{direct_pb_y} with null control, so that the first integral in the last identity vanishes. Then, the result follows immediately by applying \\eqref{form0_part2}.\n\\end{proof}\n\nLemma \\ref{lem:adjoint} can now be employed to transform the inf\/sup problem \\eqref{min_J_part2} into a classical minimization one.
Indeed, using \\eqref{form_part2}, we obtain from \\eqref{min_J_part2} that\n\\begin{align*}\n\t\\displaystyle\\inf_{v\\in {\\mathcal U} }\\left[J(v,0)-J(0,0) + 2\\beta \\displaystyle\\sup_{y_0\\in G} \\Big(\\langle \\xi(0),y_0\\rangle_{G',G} - \\frac{\\gamma}{2\\beta}\\|y_0\\|_{G}^2\\Big)\\right].\n\\end{align*}\nThis, together with the fact that\n\\begin{align*}\n\t\\sup_{y_0\\in G} \\Big(\\langle \\xi(0),y_0\\rangle_{G',G} -\\frac{\\gamma}{2\\beta}\\|y_0\\|_{G}^2\\Big) =\\frac{\\beta}{2\\gamma}\\|\\xi(0)\\|^2_{G'},\n\\end{align*}\ngives us the following minimization problem, equivalent to \\eqref{min_J_part2}:\n\\begin{align}\\label{infgamma}\n\t&\\inf_{v\\in {\\mathcal U} }{\\mathcal{J}}^{\\gamma}(v) \n\t\\\\\n\t&\\mathcal{J}^{\\gamma}(v)=J(v,0)-J(0,0) + \\frac{\\beta^2}{\\gamma}\\Big\\|\\xi(0)\\Big\\|_{G'}^2. \\notag\n\\end{align}\nIn what follows, we will always focus on \\eqref{infgamma} instead of the original low-regret problem \\eqref{min_J_part2}.\n\n\\subsection{Existence of the optimal low-regret control}\n\nWe discuss here the existence of a low-regret control for \\eqref{direct_pb_y}. In particular, the main result of this section is the following. \n\n\\begin{theorem} \nThere exists a unique low-regret optimal control $v_\\gamma \\in{\\mathcal U}$, solution to the minimization problem \\eqref{infgamma}.\n\\end{theorem}\n\n\\begin{proof} \nFirst of all, notice that for all $v\\in {\\mathcal U}$ we have $\\displaystyle {\\mathcal J}^\\gamma(v)\\geq -J(0,0)$ and, therefore, $\\displaystyle\\inf_{v\\in {{\\mathcal U} }}{\\mathcal J}^\\gamma(v)$ exists. Let then $\\{v_\\gamma^n\\}$ be a minimizing sequence such that $d_\\gamma=\\lim_{n\\to+\\infty} {\\mathcal J}^\\gamma(v_\\gamma^n)$. We have\n\\begin{align*}\n\t-J(0,0)\\leq {\\mathcal J}^\\gamma(v_\\gamma^n) =J(v_\\gamma^n,0)-J(0,0) +\\frac{\\beta^2}{\\gamma} \\Big\\|\\xi(v_\\gamma^n,0)(0)\\Big\\|_{G'}^2 \\leq d_\\gamma+1,\n\\end{align*}\nfor $n$ sufficiently large. From this, we deduce that there exists a positive constant $c_\\gamma$, independent of $n$, such that the following estimates hold\n\\begin{equation}\n\t\\Big\\|v_\\gamma^n\\Big\\|_{\\mathcal{U}} \\leq c_\\gamma,\\quad \\Big\\|y(v_\\gamma^n,0)-\\bar{z}\\Big\\|_{L^2(Q)}\\leq \nc_\\gamma,\\quad\\mbox{and}\\;\\; \\frac{\\beta}{\\sqrt{\\gamma}} \\Big\\|\\xi(v_\\gamma^n,0)(0)\\Big\\|_{G'} \\leq c_\\gamma.\n\\end{equation}\n\nIn particular, the sequence $\\{v_\\gamma^n\\}$ is bounded in $\\mathcal{U}$ and therefore admits a subsequence converging weakly in $\\mathcal{U}$ to some $v_\\gamma$. Since ${\\mathcal J}^\\gamma$ is convex and continuous, it is weakly lower semi-continuous, whence ${\\mathcal J}^\\gamma(v_\\gamma)\\leq \\liminf_{n\\to+\\infty}{\\mathcal J}^\\gamma(v_\\gamma^n)=d_\\gamma$, so that $v_\\gamma$ is indeed a minimizer. Moreover, thanks to the strict convexity of ${\\mathcal J}^\\gamma$, $v_\\gamma$ is unique.\n\\end{proof}\n\n\\subsection{Characterization of the low-regret control}\n\nThis section is devoted to a characterization of the low-regret control for \\eqref{direct_pb_y} through the corresponding optimality system. To this end, we shall first prove the following result.\n\n\\begin{lemma}\\label{lem:EL}\nLet $v_\\gamma$ be the optimal low-regret control solution of the minimization problem \\eqref{infgamma}, and denote $y_\\gamma:=y(v_\\gamma,0)$ and $\\xi_\\gamma:=\\xi(v_\\gamma,0)$.
Then, for all $w\\in\\mathcal U$ we have the identity\n\\begin{equation}\\label{euler_lagrange_part2}\n\t\\beta\\langle y_\\gamma-\\bar{z},y(w,0)\\rangle_{L^2(Q)} + \\mu\\langle v_\\gamma,w\\rangle_{\\mathcal{U}} + \\frac{\\beta^2}{\\gamma}\\left\\langle \\xi_\\gamma(0),\\xi(w,0)(0) \\right\\rangle_{G'} = 0.\n\\end{equation} \n\\end{lemma}\n\n\\begin{proof}\nWe have\n\\begin{align*}\n\t{\\mathcal J}^\\gamma(v_\\gamma+\\lambda w) -{\\mathcal J}^\\gamma(v_\\gamma) =&\\, J(v_\\gamma+\\lambda w,0) + \\frac{\\beta^2}{\\gamma}\\norm{\\xi(v_\\gamma+\\lambda w,0)(0)}{G'}^2 - J(v_\\gamma,0) - \\frac{\\beta^2}{\\gamma}\\norm{\\xi(v_\\gamma,0)(0)}{G'}^2\n\t\\\\\n\t=&\\, {\\beta}\\norm{(y_\\gamma-\\bar{z})+\\lambda y(w,0)}{L^2(Q)}^2 + \\mu\\norm{v_\\gamma+\\lambda w}{\\mathcal{U}}^2 -{\\beta}\\norm{y_\\gamma-\\bar{z}}{L^2(Q)}^2 - \\mu\\norm{v_\\gamma}{\\mathcal{U}}^2 \n\t\\\\\n\t&+ \\frac{\\beta^2}{\\gamma}\\norm{\\xi_\\gamma(0) +\\lambda \\xi(w,0)(0)}{G'}^2 -\\frac{\\beta^2}{\\gamma}\\norm{\\xi_\\gamma(0)}{G'}^2 \n\t\\\\\n\t=&\\, {\\lambda^2\\beta}\\norm{y(w,0)}{L^2(Q)}^2 +2\\lambda\\beta\\langle y_\\gamma-\\bar{z},y(w,0)\\rangle_{L^2(Q)} + \\mu\\lambda^{2}\\norm{w}{\\mathcal{U}}^2 +2\\mu\\lambda\\langle v_\\gamma,w\\rangle_{\\mathcal{U}} \n\t\\\\\n\t&+ \\frac{\\beta^2\\lambda^2}{\\gamma}\\norm{\\xi(w,0)(0)}{G'}^2 +\\frac{2\\beta^2\\lambda}{\\gamma}\\langle \\xi_\\gamma(0),\\xi(w,0)(0)\\rangle_{G'}.\n\\end{align*}\nThe Euler-Lagrange condition then gives\n\\begin{align*}\n\t0&=\\lim_{\\lambda\\rightarrow 0} \\left(\\frac{{\\mathcal J}^\\gamma(v_\\gamma+\\lambda w) -{\\mathcal J}^\\gamma(v_\\gamma)}{\\lambda}\\right) \n\t\\\\\n\t&=2{\\beta} \\langle y_\\gamma-\\bar{z},y(w,0)\\rangle_{L^2(Q)} + 2\\mu\\langle v_\\gamma,w\\rangle_{\\mathcal{U}} +\\frac{2\\beta^2}{\\gamma} \\left\\langle \\xi_\\gamma(0),\\xi(w,0)(0) \\right\\rangle_{G'}, \\quad \\mbox{ for all } w \\in\\mathcal{U},\n\\end{align*}\nand identity \\eqref{euler_lagrange_part2} follows upon dividing by $2$.\n\\end{proof}\n\n\\noindent By means of Lemma \\ref{lem:EL}, we can now obtain the following characterization of the low-regret control.\n\n\\begin{proposition}\\label{prop_characterization_part2}\nThe low-regret control $v_\\gamma$ is characterized by the quintuplet \n\\begin{align*}\n\t(v_\\gamma,y_\\gamma,\\xi_\\gamma,\\sigma_\\gamma,p_\\gamma) \\in \\mathcal{U} \\times L^2(Q)\\times L^2(Q)\\times L^2(Q)\\times L^2(Q),\n\\end{align*}\nunique solution to the optimality system\n\\begin{align}\\label{sos_part2}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{\\partial y_\\gamma}{\\partial t} -\\Delta y_\\gamma+ \\int_\\Omega K(x,\\theta,t)\\,y_\\gamma(\\theta,t)\\,d\\theta = v_\\gamma{\\bf 1}_{\\mathcal{O}}, & (x,t)\\in Q,\n\t\t\\\\[10pt]\n \t\t\\displaystyle -\\frac{\\partial \\xi_\\gamma}{\\partial t} -\\Delta \\xi_\\gamma+ \\int_\\Omega K(\\theta,x,t)\\,\\xi_\\gamma(\\theta,t)\\,d\\theta = y_\\gamma, & (x,t)\\in Q,\n \t\t\\\\[10pt]\n\t\t\\displaystyle\\frac{\\partial \\sigma_\\gamma}{\\partial t} -\\Delta \\sigma_\\gamma+ \\int_\\Omega K(x,\\theta,t)\\,\\sigma_\\gamma(\\theta,t)\\,d\\theta = 0, & (x,t)\\in Q,\n\t\t\\\\[10pt] \n\t\t\\displaystyle -\\frac{\\partial p_\\gamma}{\\partial t} -\\Delta p_\\gamma+ \\int_\\Omega K(\\theta,x,t)\\, p_\\gamma(\\theta,t)\\,d\\theta = {\\beta}(y_\\gamma-\\bar{z})-\\sigma_\\gamma, & (x,t)\\in Q,\n\t\t\\\\[10pt]\n\t\ty_\\gamma=\\xi_\\gamma=\\sigma_\\gamma =p_\\gamma = 0, & (x,t)\\in \\Sigma,\n\t\t\\\\[10pt]\n\t\t\\displaystyle y_\\gamma(0)=0,\\; \\sigma_\\gamma(0)=-\\frac{\\beta^2}{\\gamma}\\xi_\\gamma(0),\\;
\\xi_\\gamma(T) =p_\\gamma(T) = 0, & x\\in\\Omega,\n\t\\end{cases}\n\\end{align}\nwhere $\\sigma_\\gamma:=\\sigma(v_\\gamma,0)$ and $p_\\gamma:=p(v_\\gamma,0)$, and with the adjoint state equation\n\\begin{equation}\\label{adjoint_state_eq_part2}\n\t p_\\gamma+\\mu v_\\gamma=0, \\qquad \\mbox{in} \\quad \\mathcal{O}\\times ]0,T[.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nWe introduce $\\sigma_\\gamma=\\sigma(v_\\gamma,0)$, unique solution to the problem\n\\begin{equation}\\label{direct_pb_sigma}\n\t\\begin{cases}\n\t\t\\displaystyle\\frac{\\partial \\sigma_\\gamma}{\\partial t} -\\Delta \\sigma_\\gamma+\\int_\\Omega K(x,\\theta,t)\\,\\sigma_\\gamma(\\theta,t)\\,d\\theta = 0, & (x,t)\\in Q,\n\t\t\\\\\n\t\t\\sigma_\\gamma = 0, & (x,t)\\in \\Sigma,\n\t\t\\\\\n\t\t\\sigma_\\gamma(x,0) = -\\displaystyle \\frac{\\beta^2}{\\gamma}\\xi_\\gamma(0), & x\\in\\Omega.\n\t\\end{cases}\n\\end{equation}\nThen we multiply (\\ref{direct_pb_sigma}) by $\\xi(w,0)$ and we integrate by parts over $Q$, thus obtaining\n\\begin{align*}\n\t0&= \\int_Q\\left(-\\frac{\\partial \\xi}{\\partial t}-\\Delta \\xi+ \\int_\\Omega K(\\theta,x,t)\\,\\xi(\\theta,t)\\,d\\theta\\right) \\sigma_\\gamma\\,dxdt - \\int_\\Omega \\xi(w,0)(0)\\,\\sigma_\\gamma(0)\\,dx\n\t\\\\\n\t&=\\int_Q y(w,0)\\,\\sigma_\\gamma\\, dxdt - \\int_\\Omega \\xi(w,0)(0)\\,\\sigma_\\gamma(0)\\,dx.\n\\end{align*}\nHence, \n\\begin{align*}\n\t\\left\\langle \\frac{\\beta^2}{\\gamma}\\xi_\\gamma(0),\\xi(w,0)(0)\n\t\\right\\rangle_{G'} = -\\langle\\sigma_\\gamma,y(w,0) \\rangle_{L^2(Q)}.\n\\end{align*}\nThen (\\ref{euler_lagrange_part2}) reduces to\n\\begin{equation}\\label{simplification_euler_part2}\n\t\\langle {\\beta}(y_\\gamma-\\bar{z})-\\sigma_\\gamma, y(w,0)\\rangle_{L^2(Q)} + \\mu\\langle v_\\gamma,w\\rangle_{\\mathcal{U}} = 0, \\quad \\mbox{ for all } w \\in\\mathcal{U}.\n\\end{equation}\nFinally, we define the adjoint state $p_\\gamma:=p(x,t,v_\\gamma)$ as the unique solution of:\n\\begin{equation}\\label{adjoint_pb_p_gamma}\n\t\\begin{cases}\n\t\t\\displaystyle -\\frac{\\partial p_\\gamma}{\\partial t}-\\Delta p_\\gamma+\\int_\\Omega K(\\theta,x,t)\\,p_\\gamma(\\theta,t)\\,d\\theta = \\beta(y_\\gamma-\\bar{z})-\\sigma_\\gamma, & (x,t)\\in Q,\n\t\t\\\\\n\t\tp_\\gamma = 0, & (x,t)\\in \\Sigma,\n\t\t\\\\\n\t\tp_\\gamma(x,T) = 0, & x\\in\\Omega.\n\t\\end{cases}\n\\end{equation}\nWe multiply (\\ref{adjoint_pb_p_gamma}) by $y(w,0)$ and we integrate by parts over $Q$ to obtain\n\\begin{align*}\n\t\\displaystyle\\int_Q ({\\beta}(y_\\gamma-\\bar{z})-\\sigma_\\gamma)y(w,0)\\,dxdt =\\displaystyle\\int_0^T\\!\\!\\int_{\\mathcal{O}} p_\\gamma\\,w\\, dxdt.\n\\end{align*}\nThen (\\ref{simplification_euler_part2}) reads $\\langle\\, p_\\gamma+\\mu v_\\gamma,\\, w\\rangle_{\\mathcal{U}} = 0$ for all $w\\in \\mathcal{U}$, which is precisely \\eqref{adjoint_state_eq_part2}. This concludes the proof.\n\\end{proof}\n\n\\section{Numerical experiments}\\label{sec:5}\n\nIn this section, we provide some numerical simulations and comment on the practical implementation of the optimal control and the low-regret control problems for non-local parabolic systems. In what follows, for all $a<b$ we denote by $\\inter{a,b}$ the set of all integers $k$ such that $a\\leq k\\leq b$. We discretize \\eqref{direct_pb_y} by a standard finite-difference scheme: we fix $N,M\\in\\mathbb{N}$, consider a uniform spatial grid with $N$ interior points in $\\Omega$ and the time step $\\delta t=T\/M$, and denote by $\\mathcal A_h$ the $N\\times N$ matrix discretizing the elliptic operator together with the non-local term, and by $\\mathcal B_h$ the matrix localizing the action of the control to the set $\\mathcal{O}$. The fully-discrete system then reads\n\\begin{equation}\\label{eq:discr_nonlocal}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{y^{n+1}-y^n}{\\delta t}+\\mathcal A_{h} \\, y^{n+1}=\\mathcal B_h v^{n+1}, & n \\in \\inter{0, M-1},\n\t\t\\\\\n\t\ty^0=y_{0,h},\n\t\\end{cases}\n\\end{equation}\nwhere $y_{0,h}$ denotes a discretization of the initial datum $y_0$, while the discrete counterpart of the cost functional \\eqref{quadratic_cost} is\n\\begin{equation}\\label{eq:discr_func}\n\tJ_{h,\\delta t}(v)=\\beta \\|y-\\bar z\\|^2_{L^2_{\\delta t}(0,T;\\RR^N)}+\\|\\mathcal B_h v\\|^2_{L^2_{\\delta t}(0,T;\\RR^N)}, \\qquad \\beta>0,\n\\end{equation}\nwhere $y=(y^n)_{n\\in\\inter{1,M}}$ is the solution to \\eqref{eq:discr_nonlocal} and $\\bar z=(\\bar z^n)_{n\\in\\inter{1,M}}$ is a discretization of the target $\\bar z\\in L^2(\\Omega\\times(0,T))$. Without loss of generality, we have assumed that $\\mu = 1$.
In \\eqref{eq:discr_func}, $L^2_{\\delta t}(0,T;\\RR^N)$ denotes the discretization of the space $L^2(\\Omega\\times(0,T))$, more precisely,\n\\begin{equation*}\n\tL^2_{\\delta t}(0,T;\\mathbb R^N):=\\Big\\{f=(f^n)_{n\\in\\inter{1,M}}, \\ f^n\\in\\mathbb R^N, \\ n\\in\\inter{1,M}\\Big\\},\n\\end{equation*}\nendowed with the norm\n\\begin{align*} \n\t\\norm{f}{L^2_{\\delta t}(0,T;\\RR^N)}:= \\left(\\sum_{n=1}^{M}\\delta t |f^n|^2\\right)^{1\/2},\n\\end{align*}\nwhere $|\\cdot|$ stands for the usual Euclidean norm in $\\RR^N$. The associated inner product is defined as \n\\begin{align*}\n\t(f,g)_{L^2_{\\delta t}(0,T;\\mathbb R^N)}:= \\sum_{n=1}^{M}\\delta t (f^n,g^n),\n\\end{align*}\nwhere $(\\cdot,\\cdot)$ is the usual dot product in $\\RR^N$. For brevity, we sometimes simply write $\\|\\cdot\\|_{L^2}$ instead of $\\|\\cdot\\|_{L^2_{\\delta t}(0,T;\\RR^N)}$.\n\nOnce \\eqref{eq:discr_func} has been written in this form, obtaining an optimal control is quite standard. For completeness, we sketch it briefly. A straightforward computation yields that\n\\begin{equation}\\label{eq:grad_J_h_delta}\n\t\\nabla J_{h,\\delta t}(v)=2 \\beta S_{h,\\delta t}^\\star(S_{h,\\delta t} v - \\bar{w}) +2 \\mathcal B_{h}^* \\mathcal B_h v .\n\\end{equation}\nIn \\eqref{eq:grad_J_h_delta}, $\\mathcal B_h^*$ denotes the matrix transpose of $\\mathcal B_h$, the operator $S_{h,\\delta t}$ is defined as\n\\begin{align*}\n\tS_{h,\\delta t}: L^2_{\\delta t}(0,T;\\mathbb R^N) &\\to L^2_{\\delta t}(0,T;\\mathbb R^N) , \\quad S_{h,\\delta t}v:=y,\n\\end{align*}\nwhere $y$ is the solution to the forward system \n\\begin{equation}\\label{eq:forward_GD}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{y^{n+1}-y^n}{\\delta t}+\\mathcal A_{h} \\, y^{n+1}=\\mathcal B_h v^{n+1}, & n \\in \\inter{0, M-1}, \n\t\t\\\\\n\t\ty^0=0,\n\t\\end{cases}\n\\end{equation}\nwhile the adjoint operator $S_{h,\\delta t}^\\star$ is defined as\n\\begin{align*}\n\tS_{h,\\delta t}^\\star: L^2_{\\delta t}(0,T;\\mathbb R^N) &\\to L^2_{\\delta t}(0,T;\\mathbb R^N), \\quad S_{h,\\delta t}^\\star z := p,\n\\end{align*}\nwhere $p$ can be found from the solution to the backward system\n\\begin{equation}\\label{eq:backward_GD}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{p^{n}-p^{n+1}}{\\delta t}+\\mathcal A_h^* p^{n}= z^n, \\quad n\\in\\inter{1,M}, \n\t\t\\\\\n\t\tp^{M+1}=0,\n\t\\end{cases}\n\\end{equation}\nand, finally, $\\bar{w}:= \\bar{z}-\\mathring{y}$ where $\\mathring{y}$ denotes the solution to \\eqref{eq:discr_nonlocal} with $v\\equiv 0$. In \\eqref{eq:forward_GD}-\\eqref{eq:backward_GD} (and similar formulas below), we shall omit the boundary conditions since we always consider homogeneous Dirichlet ones.\n\nIn this way, the optimal control $v=(v^n)_{n\\in\\inter{1,M}}$ can be readily found by solving the linear problem\n\\begin{equation}\\label{eq:opt_control}\n\t(\\beta S_{h,\\delta t}^\\star S_{h,\\delta t}+ \\mathcal B_h^* \\mathcal B_{h})v=\\beta S_{h,\\delta t}^\\star \\bar{w},\n\\end{equation}\nwhere $S_{h,\\delta t}^\\star S_{h,\\delta t} v$ corresponds to the evaluation of the cascade forward-backward system \\eqref{eq:forward_GD}-\\eqref{eq:backward_GD}.
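In practice, since assembling the operator $S_{h,\\delta t}^\\star S_{h,\\delta t}$ explicitly would be prohibitive, \\eqref{eq:opt_control} is solved iteratively. As an illustration, a plain gradient descent applied to \\eqref{eq:discr_func} reads\n\\begin{equation*}\n\tv_{k+1}=v_k-\\eta\\,\\nabla J_{h,\\delta t}(v_k)=v_k-\\eta\\left(2\\beta S_{h,\\delta t}^\\star(S_{h,\\delta t} v_k-\\bar w)+2\\mathcal B_h^*\\mathcal B_h v_k\\right), \\qquad k\\geq 0,\n\\end{equation*}\nwhere $\\eta>0$ is a step size to be tuned; each evaluation of the gradient only requires one solve of the forward system \\eqref{eq:forward_GD} and one solve of the backward system \\eqref{eq:backward_GD}.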
\n\n\\subsubsection{Numerical results for the optimal control problem}\n\nLet us set $T=1$ and consider the following parameters of the system\n\\begin{align}\\label{eq:kernel_num}\n\ty_0(x)=2\\sin(\\pi x), \\quad K_1(x)=\\sin(5\\pi x), \\quad K_2(\\theta)=20\\times \\mathbf{1}_{(0,0.5)}(\\theta)\\sin(\\pi \\theta) \n\\end{align}\n\nSince nothing changes in our theoretical analysis or the numerical implementation, in the remainder of this document we shall always consider a small diffusion parameter $\\nu=0.1$ multiplying the Laplace operator. In Figure \\ref{fig:soly_libre}, we plot the free solution of system \\eqref{direct_pb_y_simp}, that is, the solution with $v\\equiv 0$. We clearly observe the effect of the non-local kernel $K(x,\\theta)$ in the resolution of the equation. Here, we have used $M=100$ and $N=60$ for the number of points in the discrete grid. \n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics{.\/figures_final\/Pb_Kernel_with_Incomplete_Data_New_version-figure0.pdf}\n\\caption{Evolution in time of the uncontrolled solution.}\\label{fig:soly_libre}\n\\end{figure}\n\nNow, let us consider $\\mathcal O=(0.2,0.8)$ and set the time-independent target function $ \\bar{z}(x)=\\sin(2\\pi x)$. In Figure \\ref{fig:soly_control_beta}, we plot the solution using the control obtained by solving the linear problem \\eqref{eq:opt_control} for different values of $\\mathcal \\beta$. To solve this problem, we have used a standard Gradient Descent method. As usual in this type of problems, we observe that by increasing the parameter $\\beta$, the approximation to the target $\\bar{z}$ (i.e., $\\|y(v)-\\bar{z}\\|_{L^2}$) is overall better, but the control cost increases. \n\n\\begin{figure}[ht!]\n\\centering\n\\subfloat[$\\beta=10$. Cost of control $\\|B_h v\\|_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}=1.57137$]\n{\n\\includegraphics{.\/figures_final\/Pb_Kernel_with_Incomplete_Data_New_version-figure1.pdf}\n}\\qquad \n\\subfloat[$\\beta=10^2$. Cost of control $\\|B_h v\\|_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}=4.19168 $]\n{\n\\includegraphics{.\/figures_final\/Pb_Kernel_with_Incomplete_Data_New_version-figure2.pdf}\n} \\\\\n\\subfloat[$\\beta=10^3$. Cost of control $\\|B_h v\\|_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}=7.41933 $]\n{\n\\includegraphics{.\/figures_final\/Pb_Kernel_with_Incomplete_Data_New_version-figure3.pdf}\n}\n\\caption{Evolution in time of the controlled solution for different values of the penalization parameter $\\beta$.}\n\\label{fig:soly_control_beta}\n\\end{figure}\n\nAs for the uncontrolled solution displayed in Figure \\ref{fig:soly_libre}, we can clearly observe the effects of the non-local kernel. 
Indeed, even though the initial condition $y_0$, the target $\\bar z$ and the control region $\\mathcal O$ are somehow symmetric with respect to the point $x=0.5$, the distance of the solution $y$ to the target $z$ is not symmetric with respect to the point $x=0.5$ as it can be seen in Figure \\ref{fig:steady_state}.\n\n\\begin{figure}[ht!]\n\\centering\n\\subfloat[Non-local]{\n\\begin{tikzpicture}[scale=0.75]\n\n\t\\begin{axis}[xlabel={$x$},xmin=0,xmax=1,legend pos=outer north east,\n\tlegend plot pos=left,\n\tlegend style={cells={anchor=west},draw=none}]\n\t \n\t\\pgfplotstableread{figures_final\/target_60_07-Jun-2021_13h11.org}\\nonlocal \n\t \n\t\\addplot[very thick, solid, red] table[x=x,y=yT2] \\nonlocal; \n\t\\addplot[very thick,dashed] table[x=x,y=yT1] \\nonlocal;\n\n\t\\end{axis}\n\t\\end{tikzpicture}\\label{fig:bad_data1}\n}\n\\subfloat[Local]{\n\\begin{tikzpicture}[scale=0.75]\n\n\t\\begin{axis}[xlabel={$x$},xmin=0,xmax=1,legend pos=outer north east,\n\tlegend plot pos=left,\n\tlegend style={cells={anchor=west},draw=none}]\n \n\t\\pgfplotstableread{figures_final\/target_60_07-Jun-2021_13h17.org}\\local \n\t \n\t\\addplot[very thick, solid, red] table[x=x,y=yT2] \\local; \t\n\t\\addplot[very thick,dotted] table[x=x,y=yT1] \\local; \\label{local-st}\n\n\n\t\\end{axis}\n\\end{tikzpicture}\\label{fig:bad_data2}\n}\n\\caption{Comparison of the non-local steady state (dashed) and the local steady state (dotted) against the target function $\\bar z$ (red solid).}\n\\label{fig:steady_state}\n\\end{figure}\n\n\\subsection{Numerical implementation of the low-regret problem}\n\nWe turn our attention to the low regret problem. As introduced in Section \\ref{sec:4} this problem aims to solve an optimal control problem in the case when the initial data is missing or incomplete. As proposed in the classical work by J.-L. Lions, we can address this problem by replacing the classical optimal control functional \\eqref{min_J_part1} by the $\\min$-$\\max$ problem \\eqref{min_J_part2}. Although it is feasible to solve \\eqref{min_J_part2} (see e.g. \\cite{DP18} and the reference within), here we will use problem \\eqref{infgamma}, which transforms the original problem into a minimization one with two state variables. To this end, let us consider the fully-discrete version of the functional $J^\\gamma(v)$ (see eq. 
\\eqref{infgamma}) given by\n\\begin{align}\\label{eq:func_discr_low}\n\t&J^\\gamma_{h,\\delta t}(v)= \\beta \\|y(v,0)-\\bar z\\|_{L^2_{\\delta t}(0,T;\\mathbb R^N)}^2+\\|\\mathcal B_h v\\|^2_{L^2_{\\delta t}(0,T;\\mathbb R^N)}- \\beta\\|\\bar z\\|^2_{L^2_{\\delta t}(0,T;\\mathbb R^N)}+\\frac{\\beta^2}{\\gamma}|\\xi^{1}|^2 \\notag \n\t\\\\\n\t&v\\in L^2_{\\delta t}(0,T;\\mathbb R^N)\n\\end{align}\nwhere $y(v,0)=\\left(y^n(v,0)\\right)_{n\\in\\inter{1,M}}$ denotes the solution to \\eqref{eq:discr_nonlocal} with control $v$ and initial datum $y^0\\equiv 0$, $\\bar z=(\\bar z^n)_{n\\in\\inter{1,M}}$ is the discretization of the target $\\bar z\\in L^2(\\Omega\\times(0,T))$, and $\\xi^1$ can be found from the backward equation\n\\begin{equation}\\label{eq:backward_xi}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{\\xi^{n}-\\xi^{n+1}}{\\delta t}+\\mathcal A_h^* \\xi^{n}= y^n(v,0), \\quad n\\in\\inter{1,M}, \n\t\t\\\\\n\t\t\\xi^{M+1}=0.\n\t\\end{cases}\n\\end{equation}\nA straightforward computation gives\n\\begin{align}\\label{eq:deriv_J_gamma_discr}\n\t&\\displaystyle \\left(\\nabla J^\\gamma_{h,\\delta t}(v),\\hat{v}\\right)= 2\\beta \\left(y(v,0)-\\bar{z},y(\\hat{v},0)\\right)_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}+2\\left(\\mathcal B_h v,\\mathcal B_h \\hat{v}\\right)_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}+2\\beta\\left(\\sqrt{\\tfrac{\\beta}{\\gamma}}\\xi^1,\\sqrt{\\tfrac{\\beta}{\\gamma}}\\hat{\\xi}^1\\right), \\notag \n\t\\\\ \n\t&\\mbox{ for all } \\hat{v}\\in L^2_{\\delta t}(0,T;\\RR^N),\n\\end{align}\nwhere $\\hat{\\xi}^1$ comes from the sequence $\\hat{\\xi}=(\\hat{\\xi}^n)_{n\\in\\inter{1,M}}$, i.e., the solution to \\eqref{eq:backward_xi} with right-hand side $y^n(\\hat v,0)$.\n\nOur goal is now to find a suitable expression for the gradient $\\nabla J_{h,\\delta t}^\\gamma(v)$, since it will be used later in a gradient descent algorithm. We begin by defining the space $X_{h,\\delta t}:= \\mathbb R^N\\times L^2_{\\delta t}(0,T;\\mathbb R^N)$, endowed with the canonical inner product $\\langle\\cdot,\\cdot \\rangle_{X_{h,\\delta t}}:= (\\cdot,\\cdot)+(\\cdot,\\cdot)_{L^2_{\\delta t}(0,T;\\RR^N)}$. Consider the linear operator \n\\begin{equation*}\n\tR_{h,\\delta t}: L^2_{\\delta t}(0,T;\\mathbb R^N)\\to X_{h,\\delta t}, \\quad R_{h,\\delta t}v:=\\left(\\sqrt{\\tfrac{\\beta}{\\gamma}}\\xi^1,y\\right)\n\\end{equation*}\nwhere the pair $(\\xi^1,y)$ can be found by solving the forward-backward system\n\n\\begin{equation}\\label{eq:for-back_low}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{y^{n+1}-y^n}{\\delta t}+\\mathcal A_{h} \\, y^{n+1}=\\mathcal B_h v^{n+1}, & n \\in \\inter{0, M-1}, \n\t\t\\\\\n\t\t\\displaystyle \\frac{\\xi^{n}-\\xi^{n+1}}{\\delta t}+\\mathcal A_h^* \\xi^{n}= y^n, & n\\in\\inter{1,M}, \n\t\t\\\\\n\t\ty^0=0, \\quad \\xi^{M+1}=0\n\t\\end{cases}\n\\end{equation}\n\nWe observe that system \\eqref{eq:for-back_low} is in cascade form: given $v\\in L^2_{\\delta t}(0,T;\\mathbb R^N)$, we can solve first for $y=(y^n)_{n\\in\\inter{1,M}}$ forward in time and then use this information to solve for $\\xi=(\\xi^n)_{n\\in\\inter{1,M}}$ backwardly to recover the datum $\\xi^1$. \n\nNow, let us compute the adjoint operator $R_{h,\\delta t}^\\star$. 
We introduce the following system: for given $\\sigma_0\\in\\mathbb R^N$ and $f \\in L^2_{\\delta t}(0,T;\\mathbb R^N)$ we set\n\\begin{align}\\label{eq:back-for_low}\n\t\\begin{cases}\n\t\t\\displaystyle \\frac{\\sigma^{n+1}-\\sigma^n}{\\delta t}+\\mathcal A_{h} \\, \\sigma^{n+1}=0, & n \\in \\inter{0, M-1}, \n\t\t\\\\\n\t\t\\displaystyle \\frac{p^{n}-p^{n+1}}{\\delta t}+\\mathcal A_h^* p^{n}= f^n-\\sigma^n, & n\\in\\inter{1,M}, \n\t\t\\\\\n\t\t\\sigma^0=\\displaystyle -\\tfrac{\\beta}{\\gamma}\\sigma_0, \\quad p^{M+1}=0\n\t\\end{cases}\n\\end{align}\n\nAs before, we note that \\eqref{eq:back-for_low} is in cascade form, namely, given $\\sigma_0 \\in \\mathbb R^N$ we can solve for $\\sigma=(\\sigma^n)_{n\\in\\inter{1,M}}$ and with this we can solve for $p=(p^n)_{n\\in\\inter{1,M}}$ for any given $f\\in L^2_{\\delta t}(0,T;\\mathbb R^N)$.\n\nMultiplying the first equation of \\eqref{eq:for-back_low} by $p^{n+1}$ and summing over $n$, we have by a direct computation that\n\\begin{equation}\\label{eq:duality_1}\n\t\\left(y,f-\\sigma \\right)_{L^2_{\\delta t}(0,T;\\mathbb R^N)}=(v,p)_{L^2_{\\delta t}(0,T;\\RR^N)}\n\\end{equation}\nwhere we have used that $y$ and $p$ have zero initial and final datum, respectively. Analogously, multiplying the first equation of \\eqref{eq:back-for_low} by $\\xi^{n+1}$, summing over $n$ and using the initial and final conditions, we get\n\\begin{equation}\\label{eq:duality_2}\n\t(\\sigma,y)_{L^2_{\\delta t}(0,T;\\RR^N)}=-\\left(\\tfrac{\\beta}{\\gamma}\\sigma_0,\\xi^1\\right).\n\\end{equation}\nCombining \\eqref{eq:duality_1} and \\eqref{eq:duality_2} and recalling the definition of the operator $R_{h,\\delta t}$, we have\n\\begin{equation*}\n\t\\left\\langle R_{h,\\delta t} v, \\left(\\sqrt{\\tfrac{\\beta}{\\gamma}}\\sigma_0,f\\right) \\right\\rangle_{X_{h,\\delta t}}=\\left\\langle \\left(\\sqrt{\\tfrac{\\beta}{\\gamma}}\\xi^1,y\\right),\\left(\\sqrt{\\tfrac{\\beta}{\\gamma}}\\sigma_0,f\\right) \\right\\rangle_{X_{h,\\delta t}}=(v,p)_{L^2_{\\delta t}(0,T;\\RR^N)}\n\\end{equation*}\nThus, $R_{h,\\delta t}^\\star: X_{h,\\delta t}\\to L^2_{\\delta t}(0,T;\\RR^N)$ is defined as \n\\begin{align*}\n\tR^\\star_{h,\\delta t}(\\sqrt{\\tfrac{\\beta}{\\gamma}}\\sigma_0,f):= p,\n\\end{align*}\nwhere $p=(p^n)_{n\\in\\inter{1,M}}$ can be found from the solution to the forward-backward system \\eqref{eq:back-for_low}. 
By linearity and using the above definitions, we can then rewrite \\eqref{eq:deriv_J_gamma_discr} as\n\\begin{align*}\n\t\\left(\\nabla J^\\gamma_{h,\\delta t}(v),\\hat{v}\\right)= 2\\beta\\left\\langle R_{h,\\delta t} v, R_{h,\\delta t}\\hat v \\right\\rangle_{X_{h,\\delta t}}+ 2\\left(\\mathcal B_h v,\\mathcal B_h \\hat{v}\\right)_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}-2\\beta(\\bar{z},S_{h,\\delta t}\\hat v)_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}\n\\end{align*}\nwhence $\\nabla J^\\gamma_{h,\\delta t}(v)= 2 \\beta R^\\star_{h,\\delta t} R_{h,\\delta t} v+2\\mathcal B_h^*\\mathcal B_h v-2\\beta S_{h,\\delta t}^\\star \\bar z$, and thus the optimal control we are looking for can be computed by solving the linear problem\n\\begin{equation}\\label{eq:lin_regret}\n\t\\left(\\beta R^\\star_{h,\\delta t} R_{h,\\delta t}+\\mathcal B_h^*\\mathcal B_h\\right) v=\\beta S_{h,\\delta t}^\\star \\bar z.\n\\end{equation}\n\nObserve that the structure of the problem is the same as in \\eqref{eq:opt_control}, but this time we have to evaluate the operator $R_{h,\\delta t}^\\star R_{h,\\delta t}$ which amounts to solve four equations instead of two.\n\n\\subsubsection{Numerical results for the low-regret optimal control problem}\n\nWe fix $T=1$ and consider the kernel functions $K_i$, $i=1,2$ given in \\eqref{eq:kernel_num}. As before, we consider the control set $\\mathcal O=(0.2,0.8)$ and the time-independent target function $\\bar z(x)=\\sin(2\\pi x)$. As for the optimal control problem, we will solve the linear problem \\eqref{eq:lin_regret} by using a standard gradient descent method. \n\nIn Figure \\ref{fig:soly_control_regret}, we show the evolution in time of the state variable $y$ and the adjoint state $\\xi$ controlled with the low-regret control $v^\\gamma$ coming from the minimization of the functional \\eqref{eq:func_discr_low}. In this case, we have tuned the parameters to $\\beta=100$ and $\\gamma=1$. We observe in Figure \\ref{fig:soly_control_regret}a, the evolution of the state variable $y$ starting from 0 and transitioning into a steady state which seems to be similar to the steady state shown in Figure \\ref{fig:soly_control_beta}a. In Figure \\ref{fig:soly_control_regret}b, we observe the backward evolution of the adjoint variable $\\xi$ starting from the zero data at time $T=1$.\n\n\\begin{figure}[ht!]\n\\centering\n\\subfloat[$\\beta=10$. Cost of control $\\|B_h v\\|_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}=1.57137 $]\n{\n\\includegraphics{.\/figures_final\/Pb_Kernel_with_Incomplete_Data_New_version-figure6.pdf}\n} \\qquad \n\\subfloat[$\\beta=10^2$. Cost of control $\\|B_h v\\|_{L^2_{\\delta t}(0,T;\\mathbb R^{N})}=4.19168 $]\n{\n\\includegraphics{.\/figures_final\/Pb_Kernel_with_Incomplete_Data_New_version-figure7.pdf}\n}\n\\caption{Evolution in time of the controlled solution $y$ and the adjoint state $\\xi$ with low-regret control with parameters $\\beta=10$ and $\\gamma=0.1$.}\n\\label{fig:soly_control_regret}\n\\end{figure}\n\nIn Table \\ref{tab:low_reg_table}, we have collected data for the low-regret control problem with different values of $\\beta$ and $\\gamma$. From there, we can conclude that as in the optimal control problem, the higher the value of $\\beta$ is, the better approximation to the target function $\\bar{z}$ is. 
We also see that the parameter $\\gamma$ influences greatly the behavior of the control problem in the sense that for given $\\beta>0$, lower values of $\\gamma$ translate into a smaller $L^2$-norms of the control, affecting the overall approximation of the target $\\bar z$. \n\n\\begin{table}\n\\centering\n\\renewcommand{\\arraystretch}{1.35}\n\\begin{tabular}{ |c||c|c|c| }\n\\hline\n\\multicolumn{4}{|c|}{$\\beta=1$} \\\\\n\\hline\n$\\gamma$ & $J_{h,\\delta t}^\\gamma(v)$ & $\\|B_h v\\|_{L^2_{\\delta t}(0,T;\\RR^N)}$ & $\\|y(v,0)-\\bar z\\|_{L^2_{\\delta t}(0,T;\\RR^N)}$ \\\\\n\\hline\n10 & -0.0132321 & 0.11325 & 0.688434 \\\\ 1 & -0.0132209 & 0.113154 & 0.68845 \\\\ 0.1 &-0.0131106 & 0.112213 & 0.688605 \\\\ 0.01 & -0.0121515 & 0.10422 & 0.689959 \\\\\n \\hline \n \\multicolumn{4}{|c|}{$\\beta=10$} \\\\\n \\hline\n10 & -1.03067 & 0.882627 & 0.564147 \\\\\n 1 & -0.968289 & 0.829583 &0.572797 \\\\\n 0.1 & -0.682832 & 0.631448 & 0.61298 \\\\\n 0.01 &-0.43599 & 0.558128 & 0.648367 \\\\\n \\hline\n \\multicolumn{4}{|c|}{$\\beta=10^2$} \\\\\n \\hline\n 10 & -27.1986 & 2.37894 & 0.360065 \\\\\n 1 & -17.9241 & 2.10709 & 0.500576 \\\\\n 0.1 &-14.9387 & 2.15546 & 0.546241 \\\\\n \\hline\n\\end{tabular}\n\\caption{Optimal energy $J^\\gamma(v)$, cost of control $\\|B_h v\\|_{L^2}$ and distance to the target $\\|y(v,0)-\\bar z\\|_{L^2}$ for the low-regret control problem with different values of $\\gamma$ and $\\beta$}\n\\label{tab:low_reg_table}\n\\end{table}\n\nWe conclude the discussion by showing that the low-regret control can be used for controlling equations in the case of missing data. To this end, we take the simulation parameters $\\beta=100$, $\\gamma=1$, $T=1$, and the kernels $K_1$ and $K_2$ of \\eqref{eq:kernel_num}. Using our computational tool, we can compute the low-regret control $v^\\gamma$ and use it for controlling system \\eqref{eq:discr_nonlocal} with different initial conditions. In Table \\ref{tab:comp_1}, we have collected some information about the performance of the low-regret control $v^\\gamma$ against the \\textit{real} optimal control $v^{opt}$ (computed with $\\beta=100$) and the uncontrolled case for different initial data. \n\nWe can see that the low-regret control $v^\\gamma$ does not make things worse as compared to the uncontrolled case, which is consistent with the goal of this strategy, but it is not as good as the performance achieved with the optimal control computed with the full knowledge of the initial datum. Nonetheless, the low-regret control has the advantage that is has to be computed only once and then it can be used to control the system for a wide variety of initial data. We shall remark that there is still room for improvement as shown in Table \\ref{tab:comp_2}. There, we have increased to $\\gamma=10$ and computed the corresponding low regret control to obtain a lower optimal energy and smaller distance of the target (as compared to the last two columns of Table \\ref{tab:comp_1}). 
Nevertheless, how to choose effectively this parameter depends largely on the application and the experiment performed.\n\n\\begin{table}\n\\centering\n\\renewcommand{\\arraystretch}{1.35}\n\\begin{tabular} {|c|c|c|c|c|c|c|}\n\\cline{2-7} \n\\multicolumn{1}{c|}{} & \\multicolumn{2}{c|}{Uncontrolled} & \\multicolumn{2}{c|}{Optimal control} & \\multicolumn{2}{c|}{Low-regret control} \\\\\n\\hline\nInitial datum & $J(0)$ & $\\|y(0)-\\bar z\\|_{L^2}$ & $J(v^{opt})$ & $\\|y(v^{opt})-\\bar z\\|_{L^2}$ & $J(v^\\gamma)$ & $\\|y(v^{\\gamma})-\\bar z\\|_{L^2}$ \\\\\n\\hline \n$\\sin^{10}(\\pi x)$ & 27.62901 & 0.74336 & 8.82497 & 0.28957 & 17.43680 & 0.55176 \\\\\n3 & 191.09119 & 1.95495 & 45.17162 & 0.70582 & 180.79790 & 1.88988 \\\\\n$\\mathbf 1_{(0.5,0.8)}(x)-\\mathbf 1_{(0.2,0.5)}(x)$ & 37.50553 & 0.86609 & 13.85573 & 0.38048 & 26.43313 & 0.69597 \\\\\n$\\sin(\\tfrac{1}{3}\\pi x)+0.3\\cos(\\tfrac{15}{4}\\pi x)$ & 32.88063 &0.81093 &10.83359 &0.32834 &22.34038 &0.63444 \\\\\n\\hline\n\\end{tabular}\n\\caption{Comparison of the optimal energy $J(\\cdot)$ and the distance to the target for the controlled solution $\\|y(\\cdot)-\\bar{z}\\|_{L^2}$ for system \\eqref{eq:discr_nonlocal} controlled with different control functions and initial conditions.}\n\\label{tab:comp_1}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\renewcommand{\\arraystretch}{1.35}\n\\begin{tabular} {|c|c|c|}\n\\hline\nInitial datum & $J(v^\\gamma)$ & $\\|y(v^{\\gamma})-\\bar z\\|_{L^2}$ \\\\\n\\hline \n$\\sin^{10}(\\pi x)$ &11.92807 &0.42667 \\\\\n3 &175.15713 &1.85651 \\\\\n$\\mathbf 1_{(0.5,0.8)}(x)-\\mathbf 1_{(0.2,0.5)}(x)$ &17.98742 &0.55067 \\\\\n$\\sin(\\tfrac{1}{3}\\pi x)+0.3\\cos(\\tfrac{15}{4}\\pi x)$ &15.81261 &0.50965 \\\\\n\\hline\n\\end{tabular}\n\n\\caption{Effect of the parameter $\\gamma$ on the controlled solution with low-regret control.}\n\\label{tab:comp_2}\n\\end{table}\n\n\\bibliographystyle{siam}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdqpx b/data_all_eng_slimpj/shuffled/split2/finalzzdqpx new file mode 100644 index 0000000000000000000000000000000000000000..817be90fd86410844818d8189da106670d1f6ccf --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdqpx @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\nNetworks define versatile mathematical tools which can be conveniently invoked for modelling a plethora of physical systems~\\cite{AlbertBarabasi,newmanbook}. Local interactions among elementary constituents take place on isolated patches, the nodes of the network, while long-ranged exchanges crawls across the links, typically driven by diffusion, bridging thereby adjacent nodes of the collection. Networks of coupled oscillators can self-organise to operate in unison~\\cite{Pikovsky2001,arenasreview}, a condition that is necessary to sustain normal brain activity but which can yield pathological states in case of hyper-synchronisation, i.e., the inability of neurones to desynchronise~\\cite{AECPlos2018}. In a completely different context, favouring a perfect synchrony of cyclic rhythms is vital for an optimal handling of energy production and distribution in power grids~\\cite{MMAN2013}. Moreover, spatial non homogenous stationary stable motifs can spontaneously emerge following a symmetry breaking mechanism~\\cite{Turing}, the analogue of a Turing instability for reaction-diffusion systems anchored on networks~\\cite{NM2010}, which amplifies small perturbations acting on a uniform background. 
\n\nThese phenomena have been mainly characterised with reference to static networks. That is, the network connections do not change over time, or, alternatively, the rate of modulation is very slow as compared to the typical time scales that regulate the dynamics of the state variables. The few attempts reported in the literature to generalise beyond this setting focused on fast switching networks~\\cite{SBGR2006,PABFC2017,LFCP2018}, namely rapid alternations among distinct static configurations, thus implying that the natural time scale of the network evolution is extremely fast as compared to that associated with the underlying dynamical system. In~\\cite{AmritkarHu2006,BHCAKP,ZS2021} commutative temporal networks are instead invoked, a working hypothesis which may prove too restrictive because it assumes the eigenvectors of the relevant matrices involved, e.g., the Laplacian or the adjacency matrix, not to evolve in time. Other attempts to tackle the problem of a comprehensive theory of pattern formation on time dependent networks do not attain the sought generality, because they are once again implicitly based on the assumption of static, or very slowly varying, Laplace eigenvectors~\\cite{vangorder2}, as we will show in the following.\n\n All endeavours must however deal with revisiting the seminal concept of master stability equation~\\cite{Pecora}, so as to incorporate an explicit account of the inherent plasticity of the underlying network~\\cite{Holme2013,MR2016}. This is accomplished in the present work, where the master stability formalism is expanded to include the contributions stemming from an imposed time evolution of the links' weights. The theory is tested against two distinct applications, designed to tackle both the synchronisation and the Turing settings. The general framework that we shall here address assumes (nonlinear) diffusive inter-node exchanges. This is a mandatory pre-requisite for a uniform solution of the extended system to exist and, as such, it is routinely invoked in different realms. Another viable scenario is found when the coupling term is a nonlinear function of the difference of the state variables associated with connected nodes (e.g., the paradigmatic Kuramoto model~\\cite{Kuramoto} and its extensions). This also falls under the umbrella of the developed theory, as we shall hereafter argue. As we will show, the theory hereby developed confirms that synchronisation can be enhanced by commutative temporal networks, as reported in the literature~\\cite{BHCAKP,GFRMRABB2022}; we moreover take a step forward by showing that such a claim holds true beyond the limited setting of commuting networks. Similarly, we also show that the constraint on the curvature of the master stability function recently introduced in~\\cite{ZS2021} can be relaxed once we remove the assumption of commutative temporal networks; indeed, there exist time varying networks whose dynamics synchronise for a larger interval of the coupling strength without meeting the above condition.
Finally, the proposed general framework contributes to significantly expand our current understanding on Turing patterns on time varying networks, beyond the settings so far considered which relied on the fast switching assumption~\\cite{PABFC2017} or on peculiar choices of the evolution of the eigenvectors of the Laplace matrix~\\cite{vangorder2}.\n \n\n\\section{Materials and Methods} \n\\label{sec:themodel}\nTo set the stage for further analysis, we will begin our discussion by inspecting the spectral characteristics of a time dependent discrete Laplacian operator. Consider a symmetric time varying network made by $n$ nodes, characterised by the adjacency matrix $\\mathbf{A}(t)$: $A_{ij}(t)=A_{ji}(t)\\neq 0$ whenever nodes $i$ and $j$ are connected at time $t$ and $A_{ij}(t)=A_{ji}(t)=0$ otherwise. Given the adjacency matrix one can construct the (combinatorial) Laplace matrix, $L_{ij}(t)= A_{ij}(t)-\\delta_{ij} k_i(t)$, where $k_i(t)=\\sum A_{ij}(t)$ denotes the degree of node $i$ at time $t$ and $\\delta_{ij}$ is the Kronecker-$\\delta$. Since $\\mathbf{L}(t)$ is a symmetric matrix, for all $t$, one can find a (time dependent) orthonormal basis of eigenvectors, $\\vec{\\phi}^{(\\alpha)}(t)$, associated with the eigenvalues $\\Lambda^{(\\alpha)}(t)\\leq 0$ such that \n\\begin{equation*}\n\\mathbf{L}(t)\\vec{\\phi}^{(\\alpha)}(t)=\\Lambda^{(\\alpha)}(t)\\vec{\\phi}^{(\\alpha)}(t) \\quad \\forall \\alpha=1,\\dots, n \\text{ and } \\forall t\\, .\n\\end{equation*}\nMoreover \n \\begin{equation}\n \\left(\\vec{\\phi}^{(\\alpha)}(t)\\right)^\\top\\cdot \\vec{\\phi}^{(\\beta)}(t)=\\delta_{\\alpha \\beta}\\, ,\n\\label{eq:eigeq2}\n\\end{equation}\nwhere the dot represents the scalar product and $\\left( \\right) ^\\top$ stands for the transpose operation. Finally, with no loss of generality, we order the eigenvalues in such a way that $0=\\Lambda^{(1)}(t)>\\Lambda^{(j)}(t)$ for all $j\\in\\{2,\\dots,n\\}$ and we recall that $\\vec{\\phi}^{(1)}(t)=(1,\\dots,1)^\\top\/\\sqrt{n}$.\n\nAssume that the eigenvectors evolve smoothly in time. Then, one can express the eigenvectors change rate as:\n\\begin{equation}\n\\label{eq:cab}\n\\frac{d \\vec{\\phi}^{(\\alpha)}}{dt}(t)=\\sum_{\\beta}c_{\\alpha\\beta}(t)\\vec{\\phi}^{(\\beta)}(t)\\quad\\forall \\alpha=1,\\dots, n\\, .\n\\end{equation}\nwhere $\\mathbf{c}(t)$ is a $n\\times n$ time dependent matrix that quantifies the projections on the independent eigendirections. By recalling the orthonormality condition~\\eqref{eq:eigeq2} we can straightforwardly conclude that $\\mathbf{c}$ is a real skew symmetric matrix with a null first row and first column, i.e., $c_{\\alpha\\beta}+c_{\\beta\\alpha}=0$ and $c_{1\\alpha}=0$.\n\nThe time evolution of the eigenvectors is hence self-consistently ruled by the system of ODEs~\\eqref{eq:cab}, complemented by the initial conditions, i.e., the Laplace eigenbasis at $t=0$. Notice that the case of switching networks can be also brought back to the above scenario, by approximating piecewise regular functions with smooth profiles or, alternatively, using the Fourier transform (see Appendix~\\ref{sec:smallnet}).\nFinally, we require the eigenvalues to satisfy standard conditions ensuring the existence and uniqueness of the linear system~\\eqref{eq:GLHGlinalpha3}, so in particular differentiability is no longer required as in~\\cite{vangorder2}.\n\nWe are now in a position to elaborate on the conditions that generalise the master stability formalism to systems defined on time varying networks. 
To this end, consider a $d$-dimensional system described locally, i.e., at each node, by the following ODE:\n\\begin{equation}\n\\label{eq:dotxF}\n\\frac{d\\mathbf{x}}{dt}=\\mathbf{F}(\\mathbf{x})\\quad \\mathbf{x}\\in\\mathbb{R}^d\\, ,\n\\end{equation}\nwhere $\\mathbf{F}$ is an arbitrary nonlinear function. Further, assume $n$ identical copies of the above system to be coupled via a time varying network through diffusive interactions modified with the inclusion of a nonlinear function $\\mathbf{H}$:\n\\begin{equation}\n\\label{eq:maineq}\n\\frac{d\\mathbf{x}_i}{dt}=\\mathbf{F}(\\mathbf{x}_i) +\\varepsilon \\sum_{j} L_{ij}(t) \\mathbf{H}(\\mathbf{x}_j)\\, ,\n\\end{equation} \nwhere $\\mathbf{x}_i=(x_i^{(1)},\\dots,x_i^{(d)})^\\top$ photographs the state of the system on node $i$, $\\varepsilon>0$ is the strength of the coupling and $L_{ij}(t)$ are the entries of the time varying Laplace matrix.\n\nLet us now fix a reference orbit, $\\mathbf{s}(t)$, of the aspatial system~\\eqref{eq:dotxF}. By exploiting the obvious condition $\\sum_j L_{ij}(t)=0$ for all $i=1,\\dots, n$ and all $t$, it is immediate to conclude that $\\mathbf{s}(t)$ is also solution of Eq.~\\eqref{eq:maineq}. Namely the coupled system exhibits a spatially homogeneous synchronous solution. Assuming the latter solution to be stable for the decoupled system~\\eqref{eq:dotxF}, the question to be answered is whether it can turn unstable (or conversely, preserve its stability) when the inter-nodes couplings get activated by a small heterogeneous perturbation. Denote by $\\delta\\mathbf{x}_i=\\mathbf{x}_i-\\mathbf{s}$ the deviations from the reference orbit and by assuming these latter small, one can derive a self-consistent set of linear ODE for tracking the evolution of the perturbation in time. To this end, we introduce $\\delta\\mathbf{x}_i$ in Eq.~\\eqref{eq:maineq} and perform a Taylor expansion arrested to linear order, to eventually get:\n\\begin{equation}\n\\label{eq:GLHGlin}\n\\frac{d\\delta\\mathbf{x}_i}{dt}=\\mathbf{J}_\\mathbf{F}(\\mathbf{s}(t))\\delta\\mathbf{x}_i +\\varepsilon \\sum_{j} {L}_{ij}(t) \\mathbf{J}_\\mathbf{H}(\\mathbf{s}(t))\\delta\\mathbf{x}_j\\, , \n\\end{equation}\nwhere $\\mathbf{J}_\\mathbf{F}(\\mathbf{s}(t))$ (resp. $\\mathbf{J}_\\mathbf{H}(\\mathbf{s}(t))$) denotes the Jacobian matrix of the function $\\mathbf{F}$ (resp. $\\mathbf{H}$) evaluated on the trajectory $\\mathbf{s}(t)$. Remark that a completely equivalent governing equation is obtained when starting from an inter-nodes coupling of the type $\\sum_j A_{ij} \\mathbf{H}(\\mathbf{x}_j-\\mathbf{x}_i)$, with $\\mathbf{H}(\\mathbf{0})=\\mathbf{0}$, as anticipated above.\n\nTo make further progress in the study of the linear non-autonomous system (\\ref{eq:GLHGlin}), we project $\\delta\\mathbf{x}_i$ onto the orthonormal basis formed by the eigenvectors of $\\mathbf{L}(t)$, to yield $\\delta\\mathbf{x}_i=\\sum_\\alpha \\delta\\hat{\\mathbf{x}}_{\\alpha}\\phi^{(\\alpha)}_i$. By inserting the latter into~\\eqref{eq:GLHGlin} and recalling the definition of matrix $\\mathbf{c}(t)$, one obtains for all $\\beta$ (see Appendix~\\ref{sec:MSEtv}):\n\\begin{equation}\n\\label{eq:GLHGlinalpha3}\n\\frac{d\\delta\\hat{\\mathbf{x}}_{\\beta}}{dt} = \\sum_\\alpha c_{\\beta\\alpha}(t)\\delta\\hat{\\mathbf{x}}_{\\alpha}+\\left[\\mathbf{J}_\\mathbf{F}(\\mathbf{s}(t))+\\varepsilon \\Lambda^{(\\beta)}(t)\\mathbf{J}_\\mathbf{H}(\\mathbf{s}(t))\\right]\\delta\\hat{\\mathbf{x}}_{\\beta}\\, . 
\n\\end{equation}\n\n By introducing $\\delta\\hat{\\mathbf{x}}=(\\delta\\hat{\\mathbf{x}}^\\top_{1},\\dots,\\delta\\hat{\\mathbf{x}}_{n}^\\top)^\\top$ one can cast Eq.~\\eqref{eq:GLHGlinalpha3} in compact form:\n \\begin{equation}\n\\label{eq:GLHGlinalpha3compact}\n\\frac{d\\delta\\hat{\\mathbf{x}}}{dt} = \\left[\\mathbf{c}\\otimes \\mathbf{1}_d+\\mathbf{1}_n\\otimes \\mathbf{J}_\\mathbf{F}+\\varepsilon \\mathbf{\\Lambda}\\otimes\\mathbf{J}_\\mathbf{H}\\right]\\delta\\hat{\\mathbf{x}}:=\\mathbf{M}\\delta\\hat{\\mathbf{x}}\\, ,\n\\end{equation}\nwhere $\\otimes$ denotes the Kronecker product, $\\mathbf{\\Lambda}=\\mathrm{diag}\\left(\\Lambda^{(1)},\\dots,\\Lambda^{(n)}\\right)$ and, given any positive integer $m$, $\\mathbf{1}_m$ denotes the $m$ dimensional identity matrix. Let us observe that the latter formula and the following analysis differ from the one presented in~\\cite{vangorder2} where the perturbation is assumed to align onto a single mode, hypothesis that ultimately translates in the stationarity of the Laplace eigenvectors, that is $\\mathbf{c}=\\mathbf{0}$. The same assumption is also at the root of the results by~\\cite{ZS2021}; indeed commuting time varying networks implies to deal with a constant eigenbasis. In conclusion, Eq.~\\eqref{eq:GLHGlinalpha3} or Eq.~\\eqref{eq:GLHGlinalpha3compact} are capable to describe the projection of the linearised dynamics on a generic time varying Laplace eigenbasis, and thus allow us to draw general conclusions without simplifying assumptions.\n\nThe above system~\\eqref{eq:GLHGlinalpha3} represents the generalised version of the celebrated Master Stability Equation (MSE)~\\cite{Pecora,HCLP}, which includes an explicit account of the network evolution as encoded in matrix $\\mathbf{c}(t)$, as well as in the dependence of $\\Lambda^{(\\beta)}(t)$ against time. Its largest Lyapunov exponent quantifies the exponential rate at which an infinitesimal perturbation in the transverse subspace grows: it defines an improved version of the Master Stability Function (MSF) and enables one to draw conclusions about the stability of the reference orbit. In concrete terms, consider the matrix equation $d \\mathbf{O} \/ dt = \\mathbf{M} \\mathbf{O}$ where $\\mathbf{O}(0)=\\mathbf{1}_{nd}$. Solve the preceding equation numerically and compute $\\nu_i(t)$ ($i=1,..., nd$) the (time dependent) eigenvalues of $\\mathbf{O}(t)$. The Lyapunov \nexponents are the computed by $\\lambda_i=\\lim_{t \\rightarrow \\infty} \\ln \\nu_i \/ t$. Notice that~\\eqref{eq:GLHGlinalpha3compact} -- or its equivalent counterpart~\\eqref{eq:GLHGlinalpha3} from which matrix $\\mathbf{M}$ originates -- displays two independent time scales (in addition to the ones characterising the reactive dynamics), one reflecting the dynamics of the isolated units, i.e., $\\mathbf{s}(t)$, and the other stemming from the network modulation over time, as mirrored in $\\mathbf{c}(t)$ and $\\Lambda^{(\\beta)}(t)$. Hence, the examined system cannot be managed via standard Floquet methods, not even when periodic homogeneous $\\mathbf{s}(t)$ are concerned. Further, classical MSF approaches -- carried out over static networks -- can be conveniently simplified by considering the evolution of the imposed perturbation along each independent direction (the eigenvectors), associated to different eigenvalues of the Laplacian. This path cannot be pursued here as the matrix $\\mathbf{c}(t)$ is responsible for a non trivial entanglement of different modes, as also remarked in~\\cite{vangorder3}. 
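To appreciate this point, it is instructive to consider the degenerate case $\\mathbf{c}(t)\\equiv\\mathbf{0}$, i.e., stationary Laplace eigenvectors: then Eq.~\\eqref{eq:GLHGlinalpha3} decouples into $n$ independent $d$-dimensional problems\n\\begin{equation*}\n\t\\frac{d\\delta\\hat{\\mathbf{x}}_{\\beta}}{dt} = \\left[\\mathbf{J}_\\mathbf{F}(\\mathbf{s}(t))+\\varepsilon \\Lambda^{(\\beta)}(t)\\mathbf{J}_\\mathbf{H}(\\mathbf{s}(t))\\right]\\delta\\hat{\\mathbf{x}}_{\\beta},\n\\end{equation*}\nand one recovers the standard master stability analysis, mode by mode. It is precisely the term $\\sum_\\alpha c_{\\beta\\alpha}(t)\\delta\\hat{\\mathbf{x}}_{\\alpha}$ that prevents this block-wise reduction when the eigenvectors change in time.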
In general, system (\\ref{eq:GLHGlinalpha3compact}) should be hence handled numerically (see e.g.,~\\cite{HCLP} for an account of the subtleties to be faced when carrying the numerical computation). \n\nFor demonstrative purposes and to highlight the consequences resulting from the form of Eq.~\\eqref{eq:GLHGlinalpha3}, we will hereby concentrate onto two illustrative examples, which are constructed so as to yield a time independent generalised MSF. In both cases, the network is made of just three nodes, the eigenvalues are assumed constant and the time derivative of the eigenvectors projected on the eigenbasis returns a matrix $\\mathbf{c}$ that does not change in time (details are supplied in~\\ref{sec:smallnet}). In the first example, the reference orbit is stationary and we will operate in the furrow of the Turing instability~\\cite{Turing,NM2010}. In the second case, we will assume a set of nonlinearly coupled Stuart-Landau (SL) oscillators~\\cite{vanharten,aranson,garcamorales}, to investigate the impact of the network dynamics on the onset of synchronisation. \n\n\n\\section{Turing instability on time varying network}\n\\label{sec:turing} \nThe reference solution $\\mathbf{s} \\equiv \\mathbf{s}_0$ is assumed stationary and the coupling linear, i.e., $\\mathbf{H}(\\mathbf{x})=\\mathbf{D}\\mathbf{x}$, where $\\mathbf{D}$ is a suitable diagonal matrix with positive entries. To reduce to the usual Turing setting we posit $d=2$, namely $\\mathbf{x}_i=(u_i,v_i)$, $\\mathbf{F}(\\mathbf{x}_i)=(f(u_i,v_i),g(u_i,v_i))$ and $\\mathbf{D}=\\mathrm{diag}(D_u,D_v)$. Then, Eq.~\\eqref{eq:maineq} rewrites for all $i=1,\\dots,n$\n\\begin{equation}\n\\label{eq:Tnet}\n\\begin{dcases}\n\\frac{du_i}{dt}&=f(u_i,v_i)+D_u\\sum_{j=1}^{n}L_{ij}(t) u_j \\\\ \n\\frac{dv_i}{dt}&=g(u_i,v_i)+D_v\\sum_{j=1}^{n}L_{ij}(t) v_j \n\\end{dcases}\\, ,\n\\end{equation}\nwhere $D_u>0$ (resp. $D_v>0$) is the diffusion coefficients of species $u$ (reps. $v$) and ${L}_{ij}(t)$ are the elements of the above defined Laplace matrix of the time varying network. Recall that $\\mathbf{s}_0=(u^*,v^*)$ is a stable equilibrium solution for the reaction part. Therefore, $f(u^*,v^*)=g(u^*,v^*)=0$, $\\mathrm{tr}(\\mathbf{J}_0)<0$ and $\\det(\\mathbf{J}_0)>0$, where $\\mathbf{J}_0$ is the Jacobian of the reaction part evaluated at the equilibrium. Notice that $\\mathbf{s}_0$ is also an equilibrium solution for the whole system of coupled equations~\\eqref{eq:Tnet}. We are thus interested in studying its stability under an imposed node-dependent, hence heterogeneous perturbation.\n\nBy setting $\\delta x_i=u_i-u^*$, $\\delta y_i=v_i-v^*$, one can follow the main steps of the theory presented above to eventually obtain the analogous of Eq.~\\eqref{eq:GLHGlinalpha3compact},\n\\begin{equation}\n\\label{eq:Tnetlin5}\n\\frac{d \\delta\\hat{\\mathbf{x}}}{dt}= \\left[\\mathbf{c}\\otimes \\mathbf{1}_2+\\mathbf{1}_n\\otimes \\mathbf{J}_0+\\mathbf{\\Lambda}\\otimes \\mathbf{D}\\right]\\delta\\hat{\\mathbf{x}}\\, ,\n\\end{equation}\nwhere $\\delta\\hat{\\mathbf{x}}=(\\delta\\hat{\\mathbf{x}}_1^\\top,\\dots,\\delta\\hat{\\mathbf{x}}_n^\\top)^\\top$ and $\\delta\\hat{\\mathbf{x}}_\\beta=(\\delta \\hat{x}_{\\beta},\\delta \\hat{y}_{\\beta})$ represents the projection of the $i$-th perturbation $(\\delta x_i,\\delta y_i)$ on the $\\beta$-eigenvector. \n\nFor a definite application, we focus on the three-nodes time dependent network mentioned above and further characterised in~\\ref{sec:smallnet}. 
For sake of definitiveness let us hereby recall that the matrix $\\mathbf{c}$ defined by Eq.~\\eqref{eq:cab} is given by\n\\begin{equation*}\n \\mathbf{c}=\\left(\n\\begin{matrix}\n 0 & 0 & 0\\\\\n 0 & 0 & \\Omega\\\\\n 0 & -\\Omega& 0\n\\end{matrix}\\right)\\, ,\n\\end{equation*}\nfor some $\\Omega >0$. That together with the following initial Laplace eigenbasis\n\\begin{equation*}\n \\vec{\\phi}^{(1)}(0)=\\frac{1}{\\sqrt{3}}\\left(\n\\begin{smallmatrix}\n 1\\\\1\\\\1\n\\end{smallmatrix}\n\\right)\\, , \\vec{\\phi}^{(2)}(0)=\\frac{1}{\\sqrt{6}}\\left(\n\\begin{smallmatrix}\n 1\\\\-2\\\\1\n\\end{smallmatrix}\n\\right)\\, ,\\vec{\\phi}^{(3)}(0)=\\frac{1}{\\sqrt{2}}\\left(\n\\begin{smallmatrix}\n -1\\\\0\\\\1\n\\end{smallmatrix}\n\\right)\n\\end{equation*}\nreturns the following adjacency matrix\n\\begin{equation}\n A_{ij}(t)=\\left(\\begin{smallmatrix} 0 & \\frac{1}{2}-\\frac{\\cos\\left(\\frac{\\pi }{3}+2\\Omega t\\right)}{3} & \\frac{\\cos\\left(2\\Omega t\\right)}{3}+\\frac{1}{2}\\\\ \\frac{1}{2}-\\frac{\\cos\\left(\\frac{\\pi }{3}+2\\Omega t\\right)}{3} & 0 & \\frac{1}{2}-\\frac{\\cos\\left(\\frac{\\pi }{3}-2\\Omega t\\right)}{3}\\\\ \\frac{\\cos\\left(2\\Omega t\\right)}{3}+\\frac{1}{2} & \\frac{1}{2}-\\frac{\\cos\\left(\\frac{\\pi }{3}-2\\Omega t\\right)}{3} & 0 \\end{smallmatrix}\\right)\\, .\n\\label{eq:adjt}\n\\end{equation}\n\nMoreover, we assume the paradigmatic Brusselator scheme as the reference reaction model, namely $f(u,v)=1-(b+1)u+cu^2v$ and $g(u,v)=bu-cu^2v$, where $b$ and $c$ are the positive parameters. The stationary equilibrium is thus $u^* = 1$ and $v^* = b\/c$, while the Jacobian of the reaction part evaluated on the equilibrium is $\\partial_u f = b-1$, $\\partial_vf = c$, $\\partial_u g = -b$ and $\\partial_v g = -c$. To exemplify our conclusion we will further set $D_u =0.01$ and $D_v = 1$, compute the eigenvalues of the linear system~\\eqref{eq:Tnetlin5} (here straightforward as the linear system has constant coefficients) and then characterise the region in the $(b,c)$ plane where the instability is predicted to take place for (i) the standard Turing setting, i.e., when the network is made time independent ($\\Omega=0$, which implies $\\mathbf{c}$ to be the null matrix); (ii) the extended scenario when the network is made to evolve in time ($\\Omega >0$, hence $\\mathbf{c} \\ne \\mathbf{0}$).\n\nThe results reported in Fig.~\\ref{fig:resultsTuring} show that, for this specific choice of the parameters, the region deputed to the instability (black shadow) shrinks when the system is made to evolve on a time varying network~\\footnote{Throughout this work the numerical simulations have been performed using a $4$-th order Runge-Kutta scheme implemented in Matlab~\\cite{MATLAB2021}. The initial conditions have been realised by drawing uniformly random perturbations $\\delta$-close to the homogeneous equilibrium and the simulation time has been taken of the order of $(-\\log \\delta) \/ \\rho_{\\mathrm{MSF}}$, where $\\rho_{\\mathrm{MSF}}$ is the maximum of the MSF. This is indeed the time necessary to (possibly) increase the $\\delta$-perturbation up to a macroscopic size. In the rest of the work we set $\\delta=10^{-2}$, small enough to discriminate between the onset of the instability using a reasonable simulation time for the values of $\\rho_{\\mathrm{MSF}}$ we are dealing with.}. 
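\n\nFor the reader's convenience, the simulation protocol summarised in the footnote can be transcribed schematically as follows. The sketch below is written in Python and is purely illustrative (the published runs were performed in Matlab); it further assumes the convention $L_{ij}=A_{ij}-k_i\\delta_{ij}$ for the Laplace matrix, with $k_i$ the strength of node $i$, while the working point $(b,c)$ and the value used for $\\rho_{\\mathrm{MSF}}$ are placeholders.\n\\begin{verbatim}\nimport numpy as np\n\nDu, Dv, Omega = 0.01, 1.0, 2.0\nb, c = 15.0, 15.0                 # placeholder working point in the (b,c) plane\nf = lambda u, v: 1 - (b + 1) * u + c * u**2 * v   # Brusselator kinetics\ng = lambda u, v: b * u - c * u**2 * v\n\ndef laplacian(t):                 # assumes L = A - diag(k), k_i = sum_j A_ij\n    w = 2 * Omega * t\n    a12 = 0.5 - np.cos(np.pi \/ 3 + w) \/ 3\n    a13 = 0.5 + np.cos(w) \/ 3\n    a23 = 0.5 - np.cos(np.pi \/ 3 - w) \/ 3\n    A = np.array([[0, a12, a13], [a12, 0, a23], [a13, a23, 0]])\n    return A - np.diag(A.sum(axis=1))\n\ndef rhs(t, x):\n    u, v = x[:3], x[3:]\n    L = laplacian(t)\n    return np.concatenate([f(u, v) + Du * L @ u, g(u, v) + Dv * L @ v])\n\ndelta, dt, rho_msf = 1e-2, 1e-3, 0.5      # rho_msf: placeholder for the MSF maximum\nx_star = np.concatenate([np.ones(3), (b \/ c) * np.ones(3)])\nx = x_star + delta * (2 * np.random.rand(6) - 1)  # delta-close initial condition\nT = -np.log(delta) \/ rho_msf              # simulation time quoted in the footnote\nfor k in range(int(T \/ dt)):              # 4th-order Runge-Kutta scheme\n    t = k * dt\n    k1 = rhs(t, x)\n    k2 = rhs(t + dt \/ 2, x + dt \/ 2 * k1)\n    k3 = rhs(t + dt \/ 2, x + dt \/ 2 * k2)\n    k4 = rhs(t + dt, x + dt * k3)\n    x = x + dt \/ 6 * (k1 + 2 * k2 + 2 * k3 + k4)\nprint('final deviation from the homogeneous state:', np.abs(x - x_star).max())\n\\end{verbatim}\n\n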
The outcome of the simulations corroborates the predictions, thus confirming the adequacy of the theory and the crucial role played by the extra contribution to the MSF which can be traced back to matrix $\\mathbf{c}$.\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[scale=0.22]{Fig1.pdf}\n\\caption{\\textbf{Turing instability on time varying networks.} Middle panels report the parameters region associated to the emergence of Turing instability (black region) for the Brusselator model, on respectively a static (top panel) and time varying network (bottom panel, $\\Omega=2$). The remaining panels display the computed trajectories for $b=10$, $c=15$ (left, yielding to Turing patterns just for the case of a static network) and $b=c=15$ (right, patterns develop in both considered situations). Here, $D_u=0.01$ and $D_v=1.0$.}\n\\label{fig:resultsTuring}\n\\end{figure*}\n\nIn Fig.~\\ref{fig:sizeTuring} we report the size of the Turing instability region $\\Theta(\\Omega)$ as a function of $\\Omega$ (black dots); by visual inspection one can appreciate that indeed the instability region shrinks once $\\Omega$ increases, i.e., $\\Theta(0)>\\Theta(\\Omega)$ for all $\\Omega$. We can also observe a non-monotone behaviour with a minimum for $\\Omega=1$. The Figure also supports the correctness of the results proved in~\\cite{PABFC2017}. Indeed, for sufficiently fast network dynamics, the emergence of Turing pattern can be proved by looking at the behaviour of the system defined on the averaged network, whose associated size (i.e., the extension of the deputed region in the parameters plane) is given by the green line.\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[scale=0.33]{RelBrusselatorModel01012022_001All.pdf}\n\\vspace{-3cm}\n\\caption{\\textbf{Size of the Turing region with respect to $\\Omega$.} For a fixed range of model parameters $(b,c)\\in [0,25]\\times[0,25]$ we report the size of the region associated to the emergence of Turing patterns for a given $\\Omega$, assuming the network defined above as the underlying support for the dynamics. The horizontal green line stands for the size of the instability region as obtained for the averaged network. The diffusion coefficients have been set to $D_u=0.01$ and $D_v=1.0$.}\n\\label{fig:sizeTuring}\n\\end{figure*}\n\n\n\\section{Synchronisation of Stuart-Landau oscillators nonlinearly coupled via time varying networks}\n\\label{sec:SL} \nConsider a Stuart-Landau (SL) oscillator, the normal form for a generic system close to a supercritical Hopf-bifurcation. It is characterised by a complex amplitude $w$ which evolves in time according to $\\dot{w}=\\sigma w-\\beta |w|^2w$, where $\\sigma=\\sigma_\\Re+i\\sigma_\\Im$ and $\\beta=\\beta_\\Re+i\\beta_\\Im$ are complex control parameters. The oscillator admits a limit cycle $\\hat{z}(t)=\\sqrt{\\sigma_\\Re\/\\beta_\\Re}e^{i\\omega t}$, where $\\omega=\\sigma_\\Im-\\beta_\\Im \\sigma_\\Re\/\\beta_\\Re$. The latter is a stable solution of an isolated SL equation, provided $\\sigma_\\Re>0$ and $\\beta_\\Re>0$, conditions that we hereby assume. To proceed in the analysis we couple together \n$n$ identical SL oscillators, each bearing a complex amplitude $w_j$, with $j=1,...,n$:\n\\begin{equation}\n\\label{eq:maineqSL}\n\\frac{dw_j}{dt}= \\sigma w_j-\\beta w_j|w_j|^2+\\mu \\sum_{\\ell} {L}_{j\\ell}(t) H(w_\\ell)\\, ,\n\\end{equation}\nwhere $\\mu=\\mu_\\Re+i\\mu_\\Im$ is a complex parameter that sets the strength of the coupling and where $H(w)=w|w|^{m-1}$, for some integer $m\\geq 1$. 
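As a quick consistency check of the limit cycle recalled above (a worked restatement of quantities already introduced, added for the reader's convenience), inserting $w=\\hat{z}(t)$ into the uncoupled equation $\\dot{w}=\\sigma w-\\beta |w|^2w$ and using $|\\hat{z}|^2=\\sigma_\\Re\/\\beta_\\Re$ yields\n\\begin{equation*}\ni\\omega = \\sigma-\\beta\\,\\frac{\\sigma_\\Re}{\\beta_\\Re} = i\\left(\\sigma_\\Im-\\beta_\\Im\\,\\frac{\\sigma_\\Re}{\\beta_\\Re}\\right)\\, ,\n\\end{equation*}\nwhose real part vanishes identically and whose imaginary part returns the frequency $\\omega$ quoted above. 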
The above system falls in the class of Eq.~\\eqref{eq:maineq}. We focus now on the extended solution $w_j=\\hat{z}(t)$ $\\forall j$ and inspect its stability as a function of the model parameters, namely we investigate the conditions for the emergence of global synchronisation. To achieve this goal we set:\n\\begin{equation}\n\\label{eq:wpert}\nw_j(t)=\\hat{z}(t)(1+\\rho_j(t))e^{i\\theta_j(t)}\\, ,\n\\end{equation}\nwhere the real functions $\\rho_j(t)$ and $\\theta_j(t)$ are assumed to be small. \nA straightforward computation (more details can be found in~\\ref{sec:synchSL}) leads to:\n\\begin{equation}\n\\label{eq:maineqSLlinMSF}\n\\frac{d \\delta\\hat{\\mathbf{x}}}{dt}= \\left[\\mathbf{c}\\otimes \\mathbf{1}_2+\\mathbf{1}_n\\otimes \\mathbf{J}_0+\\mathbf{\\Lambda}\\otimes \\mathbf{J}_{H}\\right]\\delta\\hat{\\mathbf{x}}\\, ,\n\\end{equation}\nwhere $\\delta\\hat{\\mathbf{x}}=(\\delta\\hat{\\mathbf{x}}_1^\\top,\\dots,\\delta\\hat{\\mathbf{x}}_n^\\top)^\\top$ and where $\\delta\\hat{\\mathbf{x}}_\\beta=(\\hat{\\rho}_{\\beta},\\hat{\\theta}_{\\beta})$ denotes\nthe projection of $(\\rho_i,\\theta_i)$ on the $\\beta$-th eigenvector. The Jacobian of the isolated SL system is given by $\\mathbf{J}_0=\\left(\n\\begin{smallmatrix}\n -2\\sigma_\\Re & 0\\\\\n -2\\beta_\\Im \\sigma_\\Re\/\\beta_\\Re & 0\n\\end{smallmatrix}\\right)$, whereas the contributions stemming from the nonlinear coupling correspond to $\\mathbf{J}_H=\\left(\\frac{\\sigma_\\Re}{\\beta_\\Re}\\right)^{\\frac{m-1}{2}}\\left(\n\\begin{smallmatrix}\nm\\mu_\\Re & -\\mu_\\Im\\\\\nm\\mu_\\Im & \\mu_\\Re\n\\end{smallmatrix}\\right)$.\nTo illustrate the outcome of the analysis, we set $n=3$ and deal with the time dependent network introduced above (see also~\\ref{sec:smallnet}). We let $\\beta_\\Im$ and $\\mu_\\Im$ vary freely, and freeze the other parameters to nominal values, for the sake of convenience. In Fig.~\\ref{fig:resultsSynchro} we depict in black the region of the parameter plane $(\\beta_\\Im, \\mu_\\Im)$ where the generalised MSF is positive, i.e., where the uniform limit cycle solution is predicted to be unstable and thus synchronisation is excluded. The complementary domain hence identifies the choices of the parameters that yield stable synchronous oscillations. The inherent dynamics of the network enhances the ability of the system to synchronise, at a global scale. Indeed, the region in black shrinks when the network is made to evolve in time. However, the parameters $\\beta_\\Im$ and $\\mu_\\Im$ can be chosen in such a way that the coupled collection of SL oscillators turns unstable when evolved on a dynamical support ($\\Omega \\ne 0$), while being stable on its static counterpart ($\\Omega = 0$).\n\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[scale=0.22]{Fig2.pdf}\n\\caption{\\textbf{Synchronisation on time varying networks.} Middle panels identify the regions in the parameter plane $(\\beta_\\Im, \\mu_\\Im)$ where the synchronous solution is unstable (black domains) on, respectively, a static (top panel) and time varying network (bottom panel, $\\Omega=4$). The remaining panels show the evolution in time of the real part of \n$w_j$, for different choices of the parameters (as indicated by the arrows). The two selected working points are $\\beta_\\Im=4.9$, $\\mu_\\Im=-2.3$ and $\\beta_\\Im=4.9$, $\\mu_\\Im=-3.6$. 
Here, $\\sigma = 1.0+4.3i$, $\\beta_\\Re = 1.0$, $\\mu_\\Re = 0.1$ and $m = 3$.}\n\\label{fig:resultsSynchro}\n\\end{figure*}\n\nTo study the interplay between the coupling strength and the parameter $\\Omega$ in shaping synchronisation, let us set in Eq.~\\eqref{eq:maineqSL} $\\mu = \\varepsilon \\mu_0$ for a fixed complex parameter $\\mu_0$ (hereby fixed, without loss of generality, to $\\mu_0=0.1-0.5 i$) and a positive real parameter $\\varepsilon$. In Fig.~\\ref{fig:fig1SL} we report the behaviour of the MSF as a function of the coupling strength $\\varepsilon$ for two values of $\\Omega$ (blue curve $\\Omega=2.0$, red curve $\\Omega=0.0$, i.e., static network). Let us recall that values of $\\varepsilon$ associated with a positive MSF correspond to an unstable limit cycle solution for the SL and thus to desynchronisation. The size of such interval of desynchronisation is given by the first nonzero value of $\\varepsilon$ for which the MSF is zero, say $\\varepsilon_0$. The latter clearly depends on $\\Omega$ and moreover $\\varepsilon_0(2.0)<\\varepsilon_0(0.0)$, namely the time varying network exhibits a larger domain of synchronisation, the interval of instability being smaller than in the static network case.\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[scale=0.33]{GLModel20012022_001_mod.pdf}\n\\vspace{-2cm}\n\\caption{\\textbf{Master Stability Function for the Stuart-Landau nonlinearly coupled oscillators.} We report the MSF computed for the SL system coupled with the simple $3$-node network as a function of $\\varepsilon$. The remaining parameters have been fixed to the values $\\Omega=2.0$, $\\sigma = 1.0+4.3i$, $\\beta= 1.0+1.1i$, $\\mu_0 = 0.1-0.5i$ and $m = 3$. We emphasise the first nontrivial zero of the MSF, $\\varepsilon_0(\\Omega)$, and its dependence on $\\Omega$.}\n\\label{fig:fig1SL}\n\\end{figure*}\n\nThe latter claim is further supported by the results displayed in Fig.~\\ref{fig:fig2SL} where we report $\\varepsilon_0(\\Omega)$ as a function of $\\Omega$. One can observe that $\\varepsilon_0(0)>\\varepsilon_0(\\Omega)$ for all the considered values of $\\Omega$ and thus conclude that the time varying network exhibits a larger domain of synchronisation. We observe that such a result is not limited to the small network considered here, as numerically shown in~\\ref{sec:smallnet}.\n\nLet us finally remark that the MSF has a negative second derivative at $\\varepsilon_0(\\Omega)$ for all $\\Omega$ (see Fig.~\\ref{fig:fig1SL}); we can thus conclude that the result presented here generalises the one recently found in~\\cite{ZS2021} without resorting to the restrictive assumption of commuting time varying networks: if the MSF is concave with respect to the coupling strength, then the time varying network can synchronise for a range of $\\varepsilon$ larger than that associated with its static analogue.\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[scale=0.33]{RelGLModel04022022_001_mod.pdf}\n\\vspace{-2cm}\n\\caption{\\textbf{Size of the synchronisation region for the Stuart-Landau nonlinearly coupled oscillators.} We report the value of $\\varepsilon_0(\\Omega)$ as a function of $\\Omega$. The remaining parameters have been fixed to the values $\\sigma = 1.0+4.3i$, $\\beta= 1.0+1.1i$, $\\mu_0 = 0.1-0.5i$ and $m = 3$. 
Let us observe the existence of an interval of values $I=[\\Omega_1,\\Omega_2]$ such that for all $\\Omega\\in I$ there exist three values of $\\varepsilon_0(\\Omega)$ (see Appendix~\\ref{sec:synchSL}).}\n\\label{fig:fig2SL}\n\\end{figure*}\n\n\\section{Conclusions}\n\\label{sec:concl}\nSumming up we have here extended the Master Stability theory to a setting where the time evolution of the underlying network support is explicitly accounted for. The dynamics of the network is reflected in a time dependent skew symmetric matrix $\\mathbf{c}(t)$, that stems from the time evolution of the Laplacian eigenvectors. The corresponding eigenvalues can also adjust in time and contribute to shape the evolution of the imposed perturbation at a linear order of approximation. The proposed theory is general and applies to all systems for which the Master Stability formalism was originally conceived. For illustrative purposes we have here decided to test it against two simple cases study, which can be respectively ascribed to synchronisation and Turing instability. The method applies however to more complex settings, as e.g., synchronisation of chaotic trajectories (see Appendix~\\ref{sec:synchLorenz}). In particular, we showed that the condition of the negative curvature of the MSF invoked in~\\cite{ZS2021} to ensure that commuting time varying networks do synchronise, is no longer required when general time varying networks are accounted for. Indeed we have demonstrated that coupled Lorenz systems can easily synchronise once coupled with a time varying network, even if the associated MSF has a positive curvature (see Appendix~\\ref{sec:synchLorenz}). \n\n\\vspace{0.5cm}\n\\noindent\\textbf{Acknowledgement.} We thank Robert A Van Gorder for fruitful discussions.\n\n \\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{1}\n\nThe existence of maximally nonassociative quasigroups was an open\nquestion for quite a long time \\cite{kep,gh,dv1}. In 2018 a maximally\nnonassociative quasigroup of order nine was found \\cite{dv3}, and that\nwas the first step to realise that Stein's nearfield\nconstruction \\cite{st} can be used to obtain maximally nonassociative\nquasigroups of all orders $q^2$, where $q$ is an odd prime\npower \\cite{dl}. A recent result \\cite{dw} constructs examples of all\norders with the exception of a handful of small cases and two sparse\nsubfamilies within the case $n\\equiv 2\\bmod 4$. The main construction\nof \\cite{dw} is based upon quadratic orthomorphisms and can be used\nfor all odd prime powers $q\\ge 13$. However, it was left open how many\nquadratic orthomorphisms can be used in the construction. We provide\nan asymptotic answer to that question in this paper.\n\nThroughout this paper $q$ is an odd prime power and\n$\\mathbb{F} = \\mathbb{F}_q$ is a field of order $q$. For $a,b\\in \\mathbb{F}$ \ndefine a binary operation on $\\mathbb{F}$ by\n\\begin{equation}\\label{e11}\nu*v = \n\\begin {cases}\nu+a(v-u)&\\text{if $v-u$ is a square;}\\\\\nu+b(v-u)&\\text{if $v-u$ is a nonsquare.}\n\\end{cases}\n\\end{equation}\nThis operation yields a quasigroup \nif and only if\nboth $ab$ and $(1-a)(1-b)$ are squares, and both $a$ and $b$ are\ndistinct from $0$ and $1$, cf.~\\cite{Eva18,cyclatom}. \nDenote by $\\Sigma = \\Sigma(\\mathbb{F})$ the\nset of all such $(a,b) \\in \\mathbb{F}\\times \\mathbb{F}$ for which $a\\ne b$. 
\n\nFor each $(a,b) \\in \\Sigma$ denote the quasigroup $(\\mathbb{F},*)$\nby $Q_{a,b} = Q_{a,b}(\\mathbb{F})$. A quasigroup $(Q,*)$ is said to be\n\\emph{maximally nonassociative} if \n\\begin{equation}\\label{e12}\n(u*v)*w = u*(v*w)\\ \\Longrightarrow \\ u=v=w\n\\end{equation}\nholds for all $u,v,w \\in Q$. By \\cite{kep}, a maximally nonassociative \nquasigroup\nhas to be idempotent (i.e., $u*u=u$ for all $u\\in Q$). Hence in\na maximally nonassociative quasigroup the converse implication \nto \\eref{12} holds as well.\n\nIf $a=b\\in \\mathbb{F}\\setminus\\{0,1\\}$, then \\eref{11} defines a quasigroup\nin which $u*(v*u) = (u*v)*u$ for all $u,v \\in \\mathbb{F}$. This means that\nsuch a quasigroup is never maximally nonassociative. If $q\\ge 13$,\nthen there always exists $(a,b) \\in \\Sigma(\\mathbb{F}_q)$ such that\n$Q_{a,b}$ is maximally nonassociative \\cite{dw}. This paper is concerned\nwith the density of such $(a,b)$. Our main result is as follows:\n\n\\begin{thm}\\label{11}\nFor an odd prime power $q$ denote by $\\sigma(q)$ the number\nof $(a,b) \\in \\Sigma(\\mathbb{F}_q)$ for which $Q_{a,b}$ is maximally\nnonassociative. Then \n\\begin{equation}\\label{e13}\n\\lim_{q\\to \\infty}\\frac{\\sigma(q)}{q^2} =\n\\begin{cases} \n953\\cdot2^{-15}\\approx 0.02908&\\text{for $q\\equiv 1\\bmod 4$,}\\\\\n825\\cdot2^{-16}\\approx 0.01259&\\text{for $q\\equiv 3\\bmod 4$.}\n\\end{cases}\n\\end{equation}\n\\end{thm}\n\nAs we show below, the set $\\Sigma$ consists of $(q^2-8q+15)\/4$\nelements. Hence a random choice of $(a,b) \\in \\Sigma$ yields a\nmaximally nonassociative quasigroup with probability $\\approx 1\/8.596$ if\n$q\\equiv 1\\bmod 4$, and with probability $\\approx 1\/19.86$ if $q\\equiv\n3 \\bmod 4$. This may have an important consequence for the cryptographic\napplication described in \\cite{gh}. It means that a maximally nonassociative\nquasigroup of a particular large order can be obtained in an acceptable\ntime by randomly generating pairs $(a,b)$ until one is found for which\n$Q_{a,b}$ is maximally nonassociative.\n\n\n\nAn important ingredient in the proof of \\tref{11} is the\ntransformation described in \\pref{12}, and used in \\cref{13} to\ndetermine $|\\Sigma|$.\n\nDefine $S = S(\\mathbb{F})$ as the set of all $(x,y)\\in \\mathbb{F}\\times \\mathbb{F}$ such that\nboth $x$ and $y$ are squares, $x\\ne y$ and $\\{0,1\\}\\cap\\{x,y\\}=\\emptyset$.\n\n\\begin{prop}\\label{12}\nFor each $(a,b)\\in \\Sigma$ there exists exactly one $(x,y)\\in S$\nsuch that\n\\begin{equation}\\label{e14}\na=\\frac{x(1{-}y)}{x{-}y}, \\quad b = \\frac{1{-}y}{x{-}y}, \\quad\n1{-}a = \\frac{y(1{-}x)}{y{-}x} \\text{ \\, and \\, }\n1{-}b=\\frac{1{-}x}{y{-}x}.\n\\end{equation}\nThe mapping\n\\[\n\\Psi\\colon \\Sigma\\to S, \\quad (a,b) \\mapsto \\left ( \\frac ab,\n\\frac{1{-}a}{1{-}b}\\right)\\]\nis a bijection. If $(x,y)\\in S$, then $\\Psi^{-1} ((x,y)) = (a,b)$\nif and only if \\eref{14} holds.\n\\end{prop}\n\\begin{proof} If $x,y,a,b\\in \\mathbb{F}$ satisfy $x\\ne y$, \n$a =x(1{-}y)\/(x{-}y)$ and\n$b= (1{-}y)\/(x{-}y)$, then \n\\begin{equation} \\label{e15}\n\\text{$1{-}a = y(1{-}x)\/(y{-}x)$ and\n$1{-}b = (1{-}x)\/(y{-}x)$.}\n\\end{equation} \nDefine\\begin{equation*}\n\\Phi\\colon S\\to \\mathbb{F}\\times \\mathbb{F}, \\quad (x,y)\\mapsto \\left ( \\frac{x(1{-}y)}\n{x{-}y}, \\frac{1{-}y}{x{-}y}\\right ).\n\\end{equation*}\nSuppose that $(x,y) \\in S$ and set $b = (1{-}y)\/(x{-}y)$. Then\n$b\\ne0$ as $y\\ne 1$, and $b\\ne 1$ since $x\\ne 1$. 
Put $a = xb$.\nThen $a\\ne 0$ since $b\\ne 0$ and $x\\ne 0$, and $a\\ne b$ since\n$x\\ne 1$. Furthermore, $a\\ne 1$ since $y\\ne 0$ and $x\\ne 1$.\nSince $a=xb$, $ab=xb^2$ is a square. By \\eref{15}, $1{-}a = y(1{-}b)$.\nHence $(1{-}a)(1{-}b) = y(1{-}b)^2$ is a square too. This verifies that\n$\\Phi$ may be considered as a mapping $S\\to \\Sigma$.\n\nAssume $(a,b) \\in \\Sigma$. By definition, $\\Psi((a,b))=(x,y)$, where\n$x=a\/b$ and $y= (1{-}a)\/(1{-}b)$. We have $x\\notin \\{0,1\\}$ since\n$a\\ne 0$ and $a\\ne b$. Similarly, $y\\notin \\{0,1\\}$. Furthermore, \n$x\\ne y$ since $x=y$ implies\n$a = b$. Thus $(x,y)\\in S$. By straightforward verification,\n$\\Psi\\Phi = \\operatorname{id}_S$ and $\\Phi\\Psi = \\operatorname{id}_\\Sigma$.\n\\end{proof}\n\n\\begin{cor}\\label{13}\n$|\\Sigma(\\mathbb{F}_q)| = |S(\\mathbb{F}_q)| = (q^2-8q+15)\/4$.\n\\end{cor}\n\\begin{proof} By \\pref{12}, $|\\Sigma| = |S|$. By definition,\n$S$ contains $((q-3)\/2)^2 - (q-3)\/2$ elements. \n\\end{proof}\n\nThe definition of $Q_{a,b}$ follows the established way of defining\na quasigroup by means of an orthomorphism, say $\\psi$, of an\nabelian group $(G,+)$. Here, $\\psi$ is said to be an \\emph{orthomorphism}\nof $(G,+)$ if it permutes $G$ and if the mapping $x\\mapsto \\psi(x)-x$\npermutes $G$ as well. If $\\psi$ is an orthomorphism, then\n$x*y = x+\\psi(y-x)$ always defines a quasigroup. A \\emph{quadratic orthomorphism}\n$\\psi=\\psi_{a,b}$ is defined for each $(a,b)\\in \\Sigma(\\mathbb{F}_q)$ by \n\\begin{equation}\\label{e16}\n\\psi(u) = \\begin{cases} au \\quad \\text{\\,if $u$ is a square;}\\\\\nbu \\quad \\text{\\ if $u$ is a nonsquare.}\\end{cases}\n\\end{equation}\nThe definition \\eref{11} of the quasigroup $Q_{a,b}$ thus\nfits the general scheme. See~\\cite{Eva18,diagcyc} for more \ninformation on quasigroups defined by means of orthomorphisms.\n\nThe maximal nonassociativity of $Q_{a,b}$ can be expressed via\nthe \\emph{Associativity Equation:}\n\\begin{equation} \\label{e17}\n\\psi(\\psi(u)-v) = \\psi(-v) + \\psi(u-v-\\psi(-v))\n\\end{equation}\n\n\\begin{prop}\\label{14}\nFor $(a,b)\\in \\Sigma$ put $\\psi = \\psi_{a,b}$.\nAn ordered pair $(u,v)\\in \\mathbb{F}^2$ fulfils the Associativity Equation \n\\eref{17} if and only\nif $v*(0\\,*u) = (v*\\,0)*u$. Furthermore,\n\\begin{equation}\\label{e18}\nu-v-\\psi(-v) = u - (v*0) \\quad\\text{and}\\quad \\psi(u) - v = (0*u)-v.\n\\end{equation}\nIf $(u,v)\\ne (0,0)$ fulfils \\eref{17}, then none of\n$u$, $v$, $u-v-\\psi(-v)$ and $\\psi(u)-v$ vanishes,\nand $(c^2u,c^2v)$ fulfils \\eref{17} too, for any $c\\in \\mathbb{F}$.\n\nThe quasigroup $Q_{a,b}$ is maximally nonassociative if and only\nif $(u,v)=(0,0)$ is the only solution to \\eref{17}. \n\\end{prop}\n\\begin{proof} This is a restatement of Lemmas~1.3 and~3.1 from\n\\cite{dw}. A sketch of the proof follows,\nin order to make this paper self-contained. \nSince $u \\mapsto z + u$ is an automorphism of $Q=Q_{a,b}$ for each\n$z\\in \\mathbb{F}$, the\nmaximal nonassociativity is equivalent to having no $(u,v)\\ne (0,0)$\nsuch that $u*(0*v) = (u*0)*v$. This turns into \\eref{17} \nby invoking the formula $u*v = u+\\psi(v-u)$.\nSince $x\\mapsto c^2x$\nis an automorphism of $Q$ for each $c\\in \\mathbb{F}$, $c\\ne 0$, the \nAssociativity Equation holds for $(u,v)$ if and only if it holds\nfor $(c^2u,c^2v)$. For the rest it suffices to observe\nthat in an idempotent quasigroup $u*(v*w) =(u*v) *w $ implies $u=v=w$\nif $u=v$ or $u=v*w$ or $v=w$ or $u*v = w$. 
\n\\end{proof}\n\nFor $(a,b)\\in \\Sigma$ denote by $E(a,b)$ the set of $(u,v)\\ne (0,0)$\nthat satisfy the Associativity Equation \\eref{17}. By \\pref{14},\n$Q_{a,b}$ is maximally nonassociative if and only if $E(a,b)= \\emptyset$.\nThe number of such $(a,b)$ may be obtained indirectly by counting\nthe number of $(a,b)\\in \\Sigma$ for which $E(a,b)\\ne \\emptyset$. \nTo this end, we will partition $E(a,b) = \\bigcup E_{ij}^{rs}(a,b)$,\nwhere $i,j,r,s\\in \\{0,1\\}$. To determine\nto which part an element $(u,v)\\in E(a,b)$ belongs,\nthe following rule is used:\n\\begin{gather*}\n\\begin{aligned}\ni=0\\quad &\\Longleftrightarrow \\quad u \\ \\text {is a square};\\\\\nj=0\\quad &\\Longleftrightarrow \\quad -v \\ \\text {is a square};\\\\\nr=0\\quad &\\Longleftrightarrow \n\\quad \\psi_{a,b}(u)-v \\ \\text {is a square, and};\\\\\ns=0\\quad &\\Longleftrightarrow \\quad u-v-\\psi_{a,b}(-v) \\ \\text {is a square}. \n\\end{aligned}\n\\end{gather*}\nThus, if one of the elements $u$, $-v$, $\\psi_{a,b}(u){-}v$\nand $u{-}v{-}\\psi_{a,b}(-v)$ is a nonsquare, then the respective value\nof $i$, $j$, $r$ or $s$ is set to $1$. For each $(u,v)\\in E(a,b)$ there\nhence exists exactly one quadruple $(i,j,r,s)$ such that\n$(u,v) \\in E_{ij}^{rs}(a,b)$, giving us the desired partition. \nWe will also work with sets \n\\begin{equation*}\n\\Sigma_{ij}^{rs} = \\{(a,b)\\in \\Sigma: E_{ij}^{rs}(a,b)\\ne \\emptyset\\},\n\\end{equation*}\nwhere $i,j,r,s\\in \\{0,1\\}$.\nThe next observation directly follows from the definition\nof the sets\n$\\Sigma_{ij}^{rs}$. It is recorded here for the sake of later reference.\n\n\\begin{prop}\\label{15}\nSuppose that $(a,b)\\in \\Sigma=\\Sigma(\\mathbb{F}_q)$, for an odd\nprime power $q>1$. The quasigroup $Q_{a,b}$ is maximally nonassociative\nif and only if $(a,b)\\notin \\bigcup \\Sigma_{ij}^{rs}$.\n\\end{prop}\n\n\nIf it is assumed that $(u,v) \\in E_{ij}^{rs}(a,b)$, then \nthe Associativity Equation \\eref{17} can be turned into a linear \nequation in unknowns $u$ and $v$ since each occurrence of $\\psi$ can\nbe interpreted by means of \\eref{16}. The list of these linear equations\ncan be found in \\cite{dw}. Their derivation is relatively short\nand is partly repeated in Lemmas~\\ref{24}--\\ref{27}. The approach used here\ndiffers from that of \\cite{dw} in two aspects. The symmetries induced\nby opposite quasigroups and by automorphisms $Q_{a,b}\\cong Q_{b,a}$\nare used more extensively here, and characterisations\nof $\\Sigma_{ij}^{rs}$ are immediately transformed \ninto characterisations of \n\\begin{equation}\\label{e110}\nS_{ij}^{rs} = \\Psi(\\Sigma_{ij}^{rs}).\n\\end{equation}\nAs will turn out, sets $S_{ij}^{rs}$ can be described by a requirement\nthat several polynomials in $x$ and $y$ are either squares or nonsquares.\nEstimates of $|S_{ij}^{rs}|$ can be thus obtained by means of the Weil bound\n(as formulated, say, in \\cite[Theorem 6.22]{Evans}). \nWe shall not be using the Weil bound directly, but via \\tref{16} below, \na straightforward consequence from \\cite[Theorem~1.4]{dw}.\nApplications of \\tref{16} to the intersections of \nsets $S_{ij}^{rs}$, with symmetries taken into account, yield, after\na number of computations, the asymptotic results stated in \\tref{11}. 
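\n\nFor small fields, the criterion of \\pref{15} can also be verified directly, by enumerating the solutions of the Associativity Equation \\eref{17}. The following sketch (written in Python, purely for illustration; it is restricted to prime $q$, so that $\\mathbb{F}_q$ is the ring of integers modulo $q$, and all function names are ours) counts the pairs $(a,b)\\in\\Sigma$ for which $Q_{a,b}$ is maximally nonassociative.\n\\begin{verbatim}\nq = 13\nSQ = {(w * w) % q for w in range(q)}      # squares of F_q (0 included)\n\ndef psi(u, a, b):                         # quadratic orthomorphism psi_{a,b}\n    u %= q\n    if u == 0:\n        return 0\n    return (a * u) % q if u in SQ else (b * u) % q\n\ndef in_sigma(a, b):                       # membership in Sigma(F_q)\n    return (a not in (0, 1) and b not in (0, 1) and a != b\n            and (a * b) % q in SQ and ((1 - a) * (1 - b)) % q in SQ)\n\ndef maximally_nonassociative(a, b):       # only (0,0) may solve the equation\n    for u in range(q):\n        for v in range(q):\n            if (u, v) == (0, 0):\n                continue\n            w = psi(-v, a, b)\n            if psi(psi(u, a, b) - v, a, b) == (w + psi(u - v - w, a, b)) % q:\n                return False\n    return True\n\nprint(sum(maximally_nonassociative(a, b)\n          for a in range(q) for b in range(q) if in_sigma(a, b)))\n\\end{verbatim}\n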
\n\nSay that a list of polynomials $p_1,\\dots,p_k$ in one variable,\nwith coefficients in $\\mathbb{F}$,\nis \\emph{square-free}\nif there exists no sequence $1\\le i_1<\\dots43$, \nthen $(x,y) \\in S_{01}^{01}$.\n\\item[(ii)] If $y{+}1{-}x \\ne 0$ or $x^2{-}x{-}1\\ne 0$,\nthen $(x,y)\\in S_{01}^{01}$ if and only if both\n$(y{+}xy{-}x)(x{-}y{-}1)$ and $(y{-}2x{+}x^2)(x{-}y)(x{-}y{-}1)$\nare nonsquares,\nwhile $(2xy{-}y^2{-}x)(x{-}y)(x{-}y{-}1)$\nis a square.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nIn this case the Associativity Equation yields $a(au-v) = -bv + b(u -\n(1{-}b)v)$. That is equivalent to $(a^2-b)u = (b^2-2b+a)v$. If there\nexists a solution $(u,v)\\in E_{01}^{01}(a,b)$, and one of the elements\n$a^2-b$ and $b^2-2b+a$ is equal to zero, then the other has to vanish\nas well. Assume that $(x,y)=\\Psi((a,b))$. Then $a^2-b=0$\nif and only if $0 = x^2(1-y)^2-(1-y)(x-y) = (1-y)(-x^2y+x^2-x+y)\n= (1-y)(1-x)(y+xy-x)$, and $b^2-2b+a = (1-b)^2-(1-a)=\n0$ if and only if $0 = (1-x)^2-y(1-x)(y-x) = (1-x)(1-x-y^2+xy)=\n(1-x)(1-y)(y-x+1)$. If $y=x-1$, then $y+xy-x=x^2-x-1$.\n\nComputations above show that \n\\begin{equation*}\na^2-b = \\frac{(1{-}y)(1{-}x)(xy{-}x{+}y)}{(x{-}y)^2}\n\\text{ \\, and \\,} b^2-2b+a = \n\\frac{(1{-}y)(1{-}x)(y{-}x{+}1)}{(x{-}y)^2}.\n\\end{equation*}\n\nSuppose now that at least one of $x^2{-}x{-}1$ and $y{-}x{+}1$ does not vanish.\nIf $y{-}x{+}1=0$, then $E_{01}^{01}(a,b)=\\emptyset$ \nand $(y{+}xy{-}x)(x{-}y{-}1)=0$, which \nis a square. Hence $y{-}x{+}1\\ne 0$ may be assumed. \nThat implies $b^2-2b+a\\ne 0$. From the Associativity Equation\nit then follows that $(a,b)\\in \\Sigma_{01}^{01}$ if and only if\n$(1,v)\\in E_{01}^{01}(a,b)$, where $v=(a^2-b)\/(b^2-2b+a)$. Now,\n\\begingroup\n\\begin{align*}\n-v &= \\frac{b-a^2}{b^2-2b+a} = \n\\frac{y{+}xy{-}x}{x{-}y{-}1}, \\\\\n1-(1{-}b)v&= \\frac{(x{-}y{-}1)(y{-}x)+(1{-}x)(y{+}xy{-}x)}{(x{-}y{-}1)(y{-}x)}\n=\\frac{y(x^2{-}2x{+}y)}{(x{-}y{-}1)(x{-}y)}, \\text{ and}\\\\\na{-}v &= \\frac{x(1{-}y)(x{-}y{-}1) + (x{-}y)(y{+}xy{-}x)}{(x{-}y)(x{-}y{-}1)}\n= \\frac{2xy{-}y^2{-}x}{(x{-}y)(x{-}y{-}1)}.\n\\end{align*}\n\\endgroup\n\nIt remains to prove that $E_{01}^{01}(a,b)$ is nearly always nonempty\nif $a^2-b=b^2-2b+a=0$.\nLet the latter be true. Then $b^2-2b+a=a^4-2a^2+a = a(a{-}1)(a^2{+}a{-}1)$.\nThus $a^2{+}a{-}1=0$. A pair $(1,v)$ is a solution to the Associativity \nEquation if $-v$ is a nonsquare, $1+(1{-}b)(-v)$ is a nonsquare, and\n$a-v$ is a square. Put $p_1(t) = t$, $p_2(t) = 1+(1{-}b)t=1+(1{-}a^2)t$,\nand $p_3(t) = a+t$. A solution $(1,v)$ exists if there exists $\\gamma=-v\\in \\mathbb{F}$\nsuch that $\\chi(p_1(\\gamma))=\\chi(p_2(\\gamma))=-1$ and \n$\\chi(p_3(\\gamma)) = 1$. Polynomials $p_2$ and $p_3$ have a common\nroot if and only if $0=1-a+a^3$. \nIf this is true, then \n$0=a^2+a^3 = a^2(1+a)$. This implies $a=-1$ and\n$0 = (-1)^2+(-1)-1 =-1$, a contradiction. The list of polynomials $p_1$, \n$p_2$, $p_3$ is therefore square-free. \\tref{16} guarantees the \nexistence of $\\gamma$\nif $0
Nonoscillation \nprobes of absolute neutrino masses include: $\\beta$-decay, sensitive to an effective mass \n$m_\\beta$ \\cite{PDG3}; precision cosmology within the standard $\\Lambda$CDM model \\cite{PDG4}, \nsensitive to $\\Sigma=m_1+m_2+m_3$ \\cite{PDG5}; and, if\nneutrinos are Majorana, neutrinoless double beta decay ($0\\nu\\beta\\beta$), sensitive to another effective mass $m_{\\beta\\beta}$ \\cite{PDG6}. \nWe refer, e.g.,\\ to \\cite{Fogli:2005cq} for definitions of $m_\\beta$ and $m_{\\beta\\beta}$. \n\n\nConstraints on the oscillation parameters $(\\delta m^2,\\,\\Delta m^2,\\,\\theta_{ij},\\,\\delta)$ and on the nonoscillation observables $(\\Sigma,\\,m_\\beta,\\,m_{\\beta\\beta})$ have been explored in several global neutrino data analyses, \nincluding our previous work \\cite{Capozzi:2017ipn} and the more recent papers \\cite{Capozzi:2020,deSalas:2020pgw,Esteban:2020cvm}, plus the preliminary contribution in \\cite{Marrone2021}. \nIn particular, the analyses in \\cite{deSalas:2020pgw,Esteban:2020cvm,Marrone2021} are based on largely common datasets, \nbased on updated information from the Conference {\\em Neutrino 2020\\\/} \\cite{Nu2020}. As a result, \na solid fabric for the $3\\nu$ paradigm emerges from convergent measurements of \nfive oscillation parameters $(\\theta_{12},\\,\\theta_{23},\\theta_{13},\\,\\delta m^2,\\,|\\Delta m^2|)$, with an overall accuracy ranging from $\\sim\\! 1\\%$ for $|\\Delta m^2|$ to $\\sim\\! 6\\%$ for $\\sin^2\\theta_{23}$ (dominated by the so-called $\\theta_{23}$ octant degeneracy). However, the fabric is still unfinished in $\\nu$ oscillations, as far as the $\\theta_{23}$ octant, \nthe mass ordering and the phase $\\delta$ are concerned. \nIn particular, a tension between recent long-baseline accelerator neutrino data (from T2K \\cite{T2K2020} and NOvA \\cite{NOvA2020}) \naffects all these $3\\nu$ unknowns at the same time \\cite{deSalas:2020pgw,Esteban:2020cvm,Marrone2021,Kelly:2020fkv}.\nIn addition, the Dirac-Majorana nature\nand the absolute neutrino mass scale remain undetermined in current nonoscillation searches, with $\\Sigma$, $m_\\beta$ and $m_{\\beta\\beta}$ constrained at sub-eV scales but still consistent with null values \\cite{Formaggio:2021nfz}. \n\n\nIn this work we discuss the status of the $3\\nu$ framework, including new data that have recently become available\nfrom both oscillation and nonoscillation searches, with particular attention to issues of concordance or discordance of various data sets, and to their implications on the unfinished fabric of the paradigm. By performing a global analysis of oscillation data, including the latest Super-Kamiokande atmospheric results made publicly available in 2021 \\cite{SKmap}, \nwe find an indication for normal ordering at the level of $2.5\\sigma$, as well as $90\\%$ C.L.\\ hints for $\\theta_{23}<\\pi\/4$ and for $\\sin\\delta<0$. We discuss the structure and interplay of such hints, especially in the light of the T2K and NOvA tension and of the complementarity among accelerator, atmospheric and reactor data. We surmise that further understanding of neutrino nuclear interactions may help to clarify some issues. \n\nConcerning nonoscillation searches, we include the KATRIN 2021 data \\cite{KATRIN2021} that, for the first time, set sub-eV upper bounds on $m_\\beta$ at 90\\% C.L. 
We analyze systematically all the latest $0\\nu\\beta\\beta$ decay searches probing half lives $T>10^{25}$~y (in $^{76}$Ge, $^{130}$Te and $^{136}$Xe), and translate them into $m_{\\beta\\beta}$ bounds via correlated nuclear matrix elements. In the realm of cosmology and of its consensus $\\Lambda$CDM model, increasing attention is also being paid to old and new data tensions, see e.g.\\ \\cite{DiValentino:2021izs,DiValentino:Snowmass,Challenge2021}. In this context, we focus on the \nso-called $A_\\mathrm{lens}$ anomaly affecting Planck angular spectra (that show more lensing\nthan expected in the $\\Lambda$CDM model \\cite{Aghanim:2018eyx}), and we consider two possible \noptions, leading to different implications for absolute mass observables. On the one hand, we revisit a previously considered ``default'' scenario \\cite{Capozzi:2020},\nincluding all Planck results (irrespective of the $A_\\mathrm{lens}$ anomaly), that sets stringent upper bounds on $\\Sigma$ at the level of $\\sim\\! 10^{-1}$~eV, and further favors normal ordering, raising its overall preference to $\\sim 3\\sigma$. On the other hand, we discuss an ``alternative'' option\nthat makes use of the recent ACT CMB polarization data release 4 (ACTPol-DR4) \\cite{Aiola:2020azj} that is consistent with standard lensing, also in combination\nwith WMAP 9-year data (WMAP9) \\cite{Bennett:2012zja} and selected data from Planck \\cite{Aghanim:2018eyx,Aghanim:2019ame,Aghanim:2018oex}; such option is insensitive to the mass ordering and prefers $\\Sigma\\sim \\mathrm{few}\\times 10^{-1}$~eV, with different implications for $m_\\beta$ and $m_{\\beta\\beta}$ searches. \n\n\nBuilding upon our previous works \\cite{Capozzi:2017ipn,Capozzi:2020} we elaborate upon these recent topics as follows: \nIn Sections~II and III we update and discuss the analysis of oscillation and nonoscillation data, respectively.\nWe pay particular attention to relevant correlations among various observables, and to some emerging tensions \namong different data sets. We provide a brief synthesis of oscillation and nonoscillation results in Sec.~IV.\n\n\n\\vspace*{-1mm}\n\\section{Oscillation data, analysis methods and results}\n\\label{Sec:Osc}\n\nIn this section we introduce recent oscillation data that were not included in our previous work \\cite{Capozzi:2020}, \ntogether with the methodology used for their analysis, in the light of some emerging issues in precision oscillation physics. We then discuss the resulting constraints on the parameters $(\\delta m^2,\\,\\Delta m^2,\\,\\sin^2\\theta_{ij},\\,\\delta)$,\nboth separately and in selected pairs, highlighting the concordance, discordance and complementarity of various datasets.\n\n\\vspace*{-1mm}\n\\subsection{Oscillation data update}\n\n\nThree-neutrino oscillations are currently constrained by experiments using long-baseline (LBL) accelerator, solar, long-baseline reactor (KamLAND), short-baseline (SBL) reactor and atmospheric neutrinos. With respect to \\cite{Capozzi:2020}, we update some of these datasets as follows. LBL accelerator data, in the form of neutrino and antineutrino energy spectra for flavor disappearance and appearance channels, are taken from the presentations of T2K \\cite{T2K2020} and NOvA \\cite{NOvA2020} at {\\em Neutrino 2020\\\/} \\cite{Nu2020} (and subsequent conferences). 
Such spectra are endowed with statistical (Poisson) errors and \nsystematic (normalization and energy scale) uncertainties, as well as with oscillation-independent backgrounds, \nin a modified version of GLOBES \\cite{GLOBES}. Solar neutrino data from the Super-Kamiokande-IV 2970-day run (SK-IV energy spectrum and day-night asymmetry) are taken from the presentation at {\\em Neutrino 2020\\\/} \\cite{SK2020}, while the input solar model BP16-GS98 \\cite{Vinyoles:2016djt} is unchanged. Concerning SBL reactors, we update from {\\em Neutrino 2020\\\/} the RENO \\cite{RENO2020} and \nDouble Chooz data \\cite{DC2020}, while the\nDaya Bay data \\cite{Adey:2018zwh} are unchanged. Note that Daya Bay and RENO measure both \n$\\theta_{13}$ and $\\Delta m^2$, while the latter parameter is not significantly constrained by\nDouble Chooz. IceCube-DeepCore (IC-DC) atmospheric data \nare taken as in \\cite{Capozzi:2020}; a new IC-DC data release is expected in the near future \\cite{ICDC2021}.\nFinally, our oscillation dataset is completed by the recent SK-IV atmospheric results \\cite{SK2020,Jiang:2019xwn}, included through the\n$\\chi^2$ map recently made available by the collaboration \\cite{SKmap}. \n\n\n\\vspace*{-1mm}\n\\subsection{Analysis method and emerging issues}\n \nWe adopt the methodology proposed in \\cite{Capozzi:2018ubv}, see also \\cite{Capozzi:2017ipn,Capozzi:2020}. In particular, \nwe start with the combination of solar, KamLAND and LBL accelerator neutrino data, which represent \nthe minimal dataset sensitive to all the oscillation parameters $(\\delta m^2,\\,\\Delta m^2,\\,\\theta_{ij},\\,\\delta)$.\nWe then add SBL reactor neutrino data, which\nsharpen the constraints on $(|\\Delta m^2|,\\theta_{13})$ and indirectly affect the parameters $(\\theta_{23},\\,\\delta)$\nand sign$(\\Delta m^2)$ via correlations. We add atmospheric neutrino data at the end, for two reasons: (a) they\n provide rich but\nrather entangled information on the parameters $(\\Delta m^2,\\theta_{23},\\,\\theta_{13},\\,\\delta)$; (b) \ntheir $\\chi^2(\\Delta m^2,\\theta_{23},\\,\\delta)$ maps\nassume an input on $(\\delta m^2,\\,\\theta_{12},\\,\\theta_{13})$ from the combination of solar, KamLAND and SBL reactor data.\nFinally, a frequentist approach based on $\\chi^2$ functions is used for all datasets. Best fits\nare obtained by $\\chi^2$ minimization, while allowed regions around best fits are expanded \nin terms of ``number of standard deviations'' $N_\\sigma=\\sqrt{\\Delta \\chi^2}$. \n\\textcolor{black}{In particular, two-dimensional contours\nare shown for $N_\\sigma=1$, 2 and 3, which, for a $\\chi^2$ distribution with two degrees of freedom, correspond\nto C.L.\\ of 39.35\\%, 86.47\\% and 98.89\\%, respectively. Their one-dimensional \nprojections provide the $N_\\sigma$ ranges for each parameter, corresponding to C.L.\\ \nof 68.27\\%, 95.45\\% and 99.73\\%, respectively.}\nThe difference $\\Delta \\chi^2_{\\mathrm{IO}-\\mathrm{NO}}$ between the minima in IO and NO may---or may not---be accounted\nfor, when reporting fit results; these two options will be clearly distinguished in each context. \n\nWe briefly discuss some issues arising in global data analyses, in \nthe era of increasingly precise measurements and of growing sensitivity to subleading effects. 
\nData fits usually\ninvolve the comparison of experimental event rates $R_\\mathrm{expt}$ \nwith their theoretical predictions $R_\\mathrm{theo}$\n\\begin{equation}\n\\label{Rtheo} \nR_\\mathrm{theo} = \\int \\Phi_\\alpha \\otimes P_{\\alpha\\beta} \\otimes \\sigma_\\beta \\otimes r_\\beta \\otimes \\varepsilon_\\beta\\ ,\n\\end{equation}\nwhere, from left to right, the integrands represent the source flux of $\\nu_\\alpha$, the probability of\n$\\nu_\\alpha\\to\\nu_\\beta$ oscillations, and the interaction cross section, detector resolution and efficiency for \n$\\nu_\\beta$ events. Some \nfactors may be differential functions that need multiple integrations or convolutions, \nas alluded by the cross product $\\otimes$. Integrands are endowed with various uncertainties, that may be shared\nby (i.e., correlated among) various rates $R$ in the same or different experiment(s).\nIn solar neutrino searches, all these features can be accurately implemented to a large extent\n\\cite{Fogli:2002pt}. Also short-baseline reactor experiments (Daya Bay, RENO, Double Chooz) \ngenerally provide enough public information to allow reproducible analyses, although \na more precise joint analysis by the different collaborations (accounting for minor correlated uncertainties)\nwould be desiderable \\cite{ESCAPE}. \nMore relevant issues arise in the context of long-baseline accelerator searches, currently carried out\nby T2K \\cite{T2K2020} and NOvA \\cite{NOvA2020}. \nTheir event spectra are usually given in terms of a ``reconstructed'' (unobservable) neutrino energy $E_\\nu^\\mathrm{rec}$, that\nis processed from observable event energies at far detectors, through models (for some $R_\\mathrm{theo}$ integrands) \nconstrained by near-detector data. \nIn principle, both T2K and NOvA should share a common theoretical model for $\\sigma$ (and to some extent for $\\Phi$), \nleading to possible correlations among their uncertainties for $E_\\nu^\\mathrm{rec}$ (affecting $\\Delta m^2$) and for\nthe event\nrates $R_\\mathrm{theo}(E_\\nu^\\mathrm{rec})$ (affecting $\\theta_{23}$ and $\\theta_{13}$). In practice, the adopted models are different, and possible covariances are ignored. A joint analysis planned\nby the T2K and NOvA \ncollaborations \\cite{JOINT} might shed light on these issues.\nFinally, in current atmospheric neutrino searches at SK and IC-DC, \nthe data processing and analysis are too complex to be reproducible by external users with an acceptable accuracy. Oscillation results are\ngiven in terms of public $\\chi^2$ maps that, when summed up, cannot account for known covariances, \nsuch as those related to the (common, in principle) input models for $\\Phi$ and $\\sigma$. Once again, joint \nanalyses or in-depth comparisons of data by different atmospheric $\\nu$ experiments would be desirable \\cite{ATMOS}. We surmise that\nglobal oscillation analyses could \nbenefit from a better control of those systematics that are shared by different experiments \n(such as model uncertainties for $\\Phi_\\alpha$ and $\\sigma_\\beta$), but whose correlations are not yet\nproperly accounted for. See also the remarks in Sec.~\\ref{sec:remarks}.\n \n\\vspace*{-1mm}\n\\subsection{Results on single oscillation parameters}\n\nIn this Section we present the constraints on the six oscillation parameters $(\\delta m^2,\\,\\Delta m^2,\\,\\sin^2\\theta_{ij},\\,\\delta)$ for increasingly rich data sets. We explicitly account for the $\\chi^2$ difference between NO and IO, in order to show its variations. 
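\n\nAs a practical aside, the correspondence between $N_\\sigma=\\sqrt{\\Delta\\chi^2}$ and the confidence levels quoted in the previous subsection can be reproduced with a few lines of code (an illustrative Python snippet based on the standard $\\chi^2$ cumulative distribution; it is not part of our analysis chain).\n\\begin{verbatim}\nfrom scipy.stats import chi2\n# C.L. for N_sigma = 1, 2, 3, i.e. Delta chi^2 = 1, 4, 9\nfor dof in (1, 2):\n    print(dof, [round(100 * chi2.cdf(n ** 2, dof), 2) for n in (1, 2, 3)])\n# dof = 1: [68.27, 95.45, 99.73]   (one-dimensional projections)\n# dof = 2: [39.35, 86.47, 98.89]   (two-dimensional contours)\n\\end{verbatim}\n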
\n\nFigure~\\ref{Fig_01} shows the results for the combination of solar and KamLAND data \n(sensitive to $\\delta m^2$, $\\sin^2\\theta_{12}$, and $\\sin^2\\theta_{13}$) with LBL accelerator data \n(mainly sensitive to $\\Delta m^2$, $\\sin^2\\theta_{23}$, $\\sin^2\\theta_{13}$ and $\\delta$), for\nboth NO (blue) and IO (red). The latter mass ordering is slightly favored (by accelerator data) at the level of $\\sim\\!1\\sigma$, as also discussed later.\nThe parameters $\\delta m^2$ and $\\sin^2\\theta_{12}$ are rather precisely measured, with nearly linear and symmetrical \n(i.e., almost gaussian) uncertainties, and no significant difference between constraints in NO and IO.\nThe parameters $\\sin^2\\theta_{13}$ and $\\sin^2\\theta_{23}$ are less \naccurately constrained. In particular, the two minima in $\\sin^2\\theta_{23}$ reflect the $\\theta_{23}$ \noctant ambiguity in the $\\nu_\\mu\\to\\nu_\\mu$ disappearance searches at LBL accelerators, \ninducing two correlated minima in $\\sin^2\\theta_{13}$ via the leading amplitude\nof $\\nu_\\mu\\to\\nu_e$ appearance ($\\propto \\sin^2\\theta_{23}\\sin^2\\theta_{13}$) \\cite{Fogli:1996pv}.\nThe phase $\\delta$ is poorly constrained, although it appears to be \nslightly favored around $\\pi$ in NO and around $3\\pi\/2$ in IO, while it is disfavored \naround $\\pi\/2$ in both cases. \n\nFigure~\\ref{Fig_02} shows the effect of adding SBL reactor data, which are sensitive to \n$|\\Delta m^2|$ and $\\sin^2\\theta_{13}$. One can notice the strong reduction of the \n$\\sin^2\\theta_{13}$ uncertainty, inducing also correlated changes on the relative likelihood \nof the lower and upper octant of $\\theta_{23}$ via $\\nu_\\mu\\to\\nu_e$ appearance in LBL accelerators\n\\cite{Fogli:1996pv}. The synergy of SBL reactor and LBL accelerator data \nalso helps to break mass-ordering degeneracies via independent measurements of $\\Delta m^2$\n(see, e.g., \\cite{Huber:2003pm}) and currently flips the fit preference from IO to NO (at \nthe level of $\\sim\\!1.3\\sigma$), together with an increase of the best-fit value of $\\Delta m^2$\nwith respect to Fig.~\\ref{Fig_01}. The preference for $\\delta\\sim \\pi$ ($\\sim 3\\pi\/2$) in NO (IO) remains\nunaltered. \n\n \n\n\n\n\\begin{figure}[t!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.82\\textwidth]{Fig_01.pdf}\n\\caption{\\label{Fig_01}\n\\footnotesize Global $3\\nu$ oscillation analysis of long-baseline accelerator, solar and KamLAND $\\nu$ data. \nBounds on the parameters $\\delta m^2$, $|\\Delta m^2|$, $\\sin^2\\theta_{ij}$, and $\\delta$, for NO (blue) and IO (red), in terms of $N_\\sigma=\\sqrt{\\Delta \\chi^2}$ from the global best fit. The offset between separate minima in IO and NO, \n$\\Delta \\chi^2_\\mathrm{IO-NO}=-1.1$, favors the IO case by $\\sim\\! 1.0\\sigma$. \n} \\end{minipage}\n\\end{figure}\n\n\\vspace*{10mm}\n\n\\begin{figure}[h!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.82\\textwidth]{Fig_02.pdf}\n\\caption{\\label{Fig_02}\n\\footnotesize As in Fig.~\\protect\\ref{Fig_01}, but adding short-baseline reactor $\\nu$ data. \nThe offset $\\Delta \\chi^2_\\mathrm{IO-NO}=+1.8$ favors the NO case by $\\sim\\!1.3\\sigma$. \n} \\end{minipage}\n\\end{figure}\n\n\n\n\\newpage\n\nFigure~\\ref{Fig_03} shows the effect of adding atmospheric $\\nu$ data, which add further sensitivity\nto $\\Delta m^2$ (and to its sign), as well as to $\\sin^2\\theta_{23}$ and $\\delta$. 
In particular, the inclusion of SK-IV \ndata \\cite{SK2020,SKmap} corroborates the preference in favor of NO (at an overall level of $\\sim\\!2.5\\sigma$),\nflips the $\\theta_{23}$ preference from the upper to the lower octant in NO (at $\\sim 1.6\\sigma$) \nand also moves the best fit of $\\delta$ slightly above the CP-conserving value $\\pi$ (disfavored at $\\sim 1.6\\sigma$). \nThe latter hints in favor of $\\theta_{23}<\\pi\/4$ and on $\\delta> \\pi$, \ncurrently emerging in NO at the statistical ``threshold of interest'' \nof 90\\% C.L., represent interesting updates with respect to previous global analyses not including SK-IV\natmospheric data \\cite{deSalas:2020pgw,Esteban:2020cvm,Marrone2021}. \n\n\\begin{figure}[t!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.82\\textwidth]{Fig_03.pdf}\n\\caption{\\label{Fig_03}\n\\footnotesize As in Fig.~\\protect\\ref{Fig_02}, but adding atmospheric $\\nu$ data (i.e., with\nall oscillation data included). \nThe offset $\\Delta \\chi^2_\\mathrm{IO-NO}=+6.5$ favors the NO case by $\\sim\\!2.5\\sigma$. \n} \n\\end{minipage}\n\\end{figure}\n\nTable~\\ref{Tab:Synopsis} reports a numerical summary of the same information shown in Fig.~3, for \nthe separate cases of NO and IO (whose $\\chi^2$ difference is reminded in the last row). \nThe two squared mass splittings $\\Delta m^2$ and $\\delta m^2$ are measured with a formal $1\\sigma$ accuracy\nof $1.1\\%$ and $2.3\\%$, respectively. The mixing parameters $\\sin^2\\theta_{13}$, $\\sin^2\\theta_{12}$ and $\\sin^2\\theta_{23}$\nare measured with an accuracy of $\\sim\\! 3\\%$, $4.5\\%$, and $\\sim 6\\%$, respectively.\nThe latter uncertainty is largely affected by the $\\theta_{23}$ octant ambiguity; if one of the two quasi-degenerate $\\theta_{23}$ options could be removed, such uncertainty would be reduced by factor of $\\sim\\!2$ in both NO and IO.\n\nSummarizing, five oscillation parameters are known with (few) percent accuracy, while \nonly some hints emerge about the remaining three oscillation ``unknowns''. In particular, \nwe find a preference for NO at $\\sim\\!2.5\\sigma$ and, in such ordering, we also find a preference at 90\\% C.L.\\ for \n$\\theta_{23}$ in the lower octant (with respect to the secondary best fit\n in the upper octant) and for $\\delta\\simeq 1.24\\pi$ (with respect to the CP-conserving value $\\delta=\\pi$). Conversely,\nmaximal $\\theta_{23}$ mixing is disfavored at $\\sim 1.8\\sigma$ and \nthe range $\\delta \\in [0,\\, 0.77\\pi]$ is disfavored at $>3\\sigma$ in NO. \n\n\n\\begin{table}[h!]\n\\centering\n\\resizebox{.82\\textwidth}{!}{\\begin{minipage}{\\textwidth}\n\\caption{\\label{Tab:Synopsis}\nGlobal $3\\nu$ analysis of oscillation parameters: best-fit values and allowed ranges at $N_\\sigma=1$, 2 and 3, for either NO or IO, including all data. The latter column shows the formal ``$1\\sigma$ fractional accuracy'' for each parameter, defined as 1\/6 of the $3\\sigma$ range, divided by the best-fit value and expressed in percent. We recall that \n$\\Delta m^2=m^2_3-{(m^2_1+m^2_2})\/2$ and that $\\delta \\in [0,\\,2\\pi]$ (cyclic). 
The last row reports the difference\nbetween the $\\chi^2$ minima in IO and NO.\n}\n\\begin{ruledtabular}\n\\begin{tabular}{lcccccc}\nParameter & Ordering & Best fit & $1\\sigma$ range & $2\\sigma$ range & $3\\sigma$ range & ``$1\\sigma$'' (\\%) \\\\\n\\hlin\n$\\delta m^2\/10^{-5}~\\mathrm{eV}^2 $ & NO, IO & 7.36 & 7.21 -- 7.52 & 7.06 -- 7.71 & 6.93 -- 7.93 & 2.3 \\\\\n\\hlin\n$\\sin^2 \\theta_{12}\/10^{-1}$ & NO, IO & 3.03 & 2.90 -- 3.16 & 2.77 -- 3.30 & 2.63 -- 3.45 & 4.5 \\\\\n\\hlin\n$|\\Delta m^2|\/10^{-3}~\\mathrm{eV}^2 $ & NO & 2.485 & 2.454 -- 2.508 & 2.427 -- 2.537 & 2.401 -- 2.565 & 1.1 \\\\\n & IO & 2.455 & 2.430 -- 2.485 & 2.403 -- 2.513 & 2.376 -- 2.541 & 1.1 \\\\\n\\hlin\n$\\sin^2 \\theta_{13}\/10^{-2}$ & NO & 2.23 & 2.17 -- 2.30 & 2.11 -- 2.37 & 2.04 -- 2.44 & 3.0 \\\\\n & IO & 2.23 & 2.17 -- 2.29 & 2.10 -- 2.38 & 2.03 -- 2.45 & 3.1 \\\\\n\\hlin\n$\\sin^2 \\theta_{23}\/10^{-1}$ & NO & 4.55 & 4.40 -- 4.73 & 4.27 -- 5.81 & 4.16 -- 5.99 & 6.7 \\\\\n & IO & 5.69 & 5.48 -- 5.82 & 4.30 -- 5.94 & 4.17 -- 6.06 & 5.5 \\\\\n\\hlin\n$\\delta\/\\pi$ & NO & 1.24 & 1.11 -- 1.42 & 0.94 -- 1.74 & 0.77 -- 1.97 & 16 \\\\\n & IO & 1.52 & 1.37 -- 1.66 & 1.22 -- 1.78 & 1.07 -- 1.90 & 9 \\\\\n\\hlin\n$\\Delta \\chi^2_{\\mathrm{{IO}-{NO}}}$ & IO$-$NO & +6.5 \\\\ [1pt]\n\\end{tabular}\n\\end{ruledtabular}\n\\end{minipage}}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.34\\textwidth]{Fig_04.pdf}\n\\vspace*{-2.5mm}\n\\caption{\\label{Fig_04}\n\\footnotesize \nRegions separately allowed by solar and KamLAND data in the plane $(\\sin^2\\theta_{12},\\,\\delta m^2)$ for \n$\\sin^2\\theta_{13}=0.02$ and NO. (The case of IO, not shown, would be almost identical). \nThe solar $\\nu$ fit includes SK-IV 2970-day data \\cite{SK2020}.}\n\\end{minipage}\n\\vspace*{-3mm}\n\\end{figure}\n\n\n \n \n\\subsection{Results on selected pairs of oscillation variables}\n\\vspace*{-2mm}\n\nBy studying selected pairs of variables we can gain \nfurther insights about current unknowns (the mass ordering, the octant of $\\theta_{23}$ and the CP phase $\\delta$), \nand appreciate their interplay with known features of $3\\nu$ oscillations. We discuss the pairs \n $(\\sin^2\\theta_{12},\\,\\delta m^2)$, $(\\sin^2\\theta_{23},\\,\\sin^2\\theta_{13})$, \n\\textcolor{black}{$(\\sin^2\\theta_{23},\\,|\\Delta m^2|)$}, \n$(\\sin^2\\theta_{23},\\,\\delta)$, as well as pairs of total \n$\\nu_e$ and $\\overline\\nu_e$ events (bi-event plots) as observed in the appearance channel by T2K and NOvA.\n\n\n\n\n\n\n\nFigure~\\ref{Fig_04} shows the regions separately allowed by solar and KamLAND neutrino data \nin the plane charted by $(\\sin^2\\theta_{12},\\,\\delta m^2)$, assuming fixed $\\sin^2\\theta_{13}=0.02$ and NO.\nThe two regions were somewhat displaced in the past, leading to a $<2\\sigma$ tension \nbetween the best-fit $\\delta m^2$ values \\cite{PDG1} (see, e.g., the analogous Fig.~4 in \\cite{Capozzi:2018ubv}). The \ncurrent regions in Fig.~\\ref{Fig_04} appear to be in very good agreement, largely as a \nresult of a slightly smaller day-night asymmetry in SK-IV 2970-day solar data, shifting\nthe solar $\\delta m^2$ best fit upwards and closer to the KamLAND one \\cite{SK2020}. \nWe find that this shift does not alter the combined solar and KamLAND constraints on \n$\\theta_{13}$, namely, $\\sin^2\\theta_{13}\\simeq 0.014\\pm 0.015$ \n (see, e.g., Fig.~5 in \\cite{Capozzi:2018ubv}). 
Results for IO (not shown)\nwould be almost identical for all parameters $(\\delta m^2,\\,\\sin^2\\theta_{12},\\,\\sin^2\\theta_{13})$.\nIn conclusion, solar and KamLAND data are\nnot only in very good agreement about the $(\\nu_1,\\,\\nu_2)$ oscillation parameters\n$(\\delta m^2,\\,\\sin^2\\theta_{12})$, but are also consistent with the \nmeasurement $\\sin^2\\theta_{13}\\simeq 0.02$ at SBL reactors. \n\n \n\n\n\n\n\n\nFigure~\\ref{Fig_05} shows the covariance of the pair $(\\sin^2\\theta_{23},\\,\\sin^2\\theta_{13})$\nfor increasingly rich data sets, in both NO (top) and IO (bottom), \n\\textcolor{black}{with the corresponding $\\chi^2$ functions separately minimized for each mass ordering}. \nThe $\\theta_{23}$ octant ambiguity leads to two quasi-degenerate solutions at $1\\sigma$, that generally merge \nat $\\sim \\!2\\sigma$. The leading appearance amplitude in LBL accelerators, scaling as $\\sin^2\\theta_{23}\\sin^2\\theta_{13}$, \ninduces an anticorrelation between the two angles in the left panels: the higher $\\theta_{23}$, the smaller $\\theta_{13}$.\nIn the middle panels, the results from SBL reactors only (represented by $\\pm 2\\sigma$ error bars) tend to prefer\nslightly the upper-octant solution\n(with lower values of $\\theta_{13}$) in both NO and IO, \nas confirmed by the combination with LBL accelerators (continuous curves). In the right panels,\nhowever, adding atmospheric data (that include SK-IV \\cite{SK2020,SKmap}) flips the octant preference in NO, while\nconfirming it in IO. We conclude that current hints about\nthe $\\theta_{23}$ octant are still rather fragile. \n\\vspace*{-4mm}\n\n\\begin{figure}[b]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.6\\textwidth]{Fig_05.pdf}\n\\vspace*{-2.5mm}\n\\caption{\\label{Fig_05}\n\\footnotesize Regions allowed in the plane $(\\sin^2\\theta_{23},\\,\\sin^2\\theta_{13})$ for increasingly rich data sets:\nSolar + KamLAND + LBL accelerator data (left panels), plus SBL reactor data (middle panels), plus Atmospheric data (right panels). Top and bottom panels refer, respectively, to NO and IO as taken separately (i.e., without any relative $\\Delta\\chi^2$ offset). The error bars in the middle panels show the $\\pm2\\sigma$ range for $\\theta_{13}$ arising from SBL reactor data only. \n}\n\\end{minipage}\n\\end{figure}\n\\newpage\n\n\\begin{figure}[t!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.6\\textwidth]{Fig_06.pdf}\n\\vspace*{-2mm}\n\\caption{\\label{Fig_06}\n\\footnotesize As in Fig.~\\ref{Fig_05}, but in the plane \n\\textcolor{black}{$(\\sin^2\\theta_{23},\\,|\\Delta m^2|)$.}\n The error bars in the middle panels show the $\\pm2\\sigma$ range for \n \\textcolor{black}{$|\\Delta m^2|$} \n arising from SBL reactor data only. \n}\n\\end{minipage}\n\\end{figure}\n\n\\begin{figure}[b!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.6\\textwidth]{Fig_07.pdf}\n\\vspace*{-2mm}\n\\caption{\\label{Fig_07}\n\\footnotesize As in Fig.~\\ref{Fig_06}, but in the plane $(\\sin^2\\theta_{23},\\,\\delta)$.\n}\n\\end{minipage}\n\\end{figure}\n\n\n\nFigure~\\ref{Fig_06} shows the covariance of the pair \n\\textcolor{black}{$(\\sin^2\\theta_{23},\\,|\\Delta m^2|)$}. \nIn this case, \nthere is a marked preference of SBL reactor data for relatively ``high'' values of \n\\textcolor{black}{$|\\Delta m^2|$} \n($\\pm 2\\sigma$ error bars), \nas compared with LBL accelerator data. 
A compromise is more easily reached for NO, featuring a smaller \ndifference between the $\\Delta m^2$ values derived from SBL reactor and LBL accelerator data. \nThis explains why the preferred mass ordering flips from inverted to normal, when \npassing from Fig.~\\ref{Fig_01} to Fig.~\\ref{Fig_02}; see also \\cite{deSalas:2020pgw,Esteban:2020cvm}.\nFor the same reason, maximal values of $\\theta_{23}$ (corresponding to the lowest values of \n\\textcolor{black}{$|\\Delta m^2|$}\nallowed by LBL accelerator data) are slightly more disfavored by adding SBL reactor data.\nThe overall preferences for NO and for nonmaximal $\\theta_{23}$ are confirmed by atmospheric data that,\nhowever, move the best fit in NO from the upper to the lower octant. \nWe emphasize that SBL reactor data, despite having no direct sensitivity to sign($\\Delta m^2$) and $\\theta_{23}$,\ncontribute to constrain (via covariances) these two variables, in combination with other datasets.\n\n \n\n\n\n\n\n\n\n\n\nFigure~\\ref{Fig_07} shows the covariance of the pair $(\\sin^2\\theta_{23},\\,\\delta)$. The octant\nambiguity leads to two quasi-degenerate best fits, surrounded by allowed regions that merge at $2\\sigma$ or $3\\sigma$. \nIn IO there is rather stable preference for the CP-violating case $\\delta \\simeq 3\\pi\/2$ in all data combinations, \nwith no significant correlation\nwith $\\theta_{23}$. In NO the allowed $\\delta$ range is always larger, and includes the\nCP-conserving case $\\delta \\simeq \\pi$ at $2\\sigma$; moreover, a slight negative correlation between $\\delta$ and $\\sin^2\\theta_{23}$ emerges when adding SBL\nreactor data. It is difficult to trace the origin of these null or small covariances, since\nthe interplay between $\\delta$ and $\\sin^2\\theta_{23}$ (and with $\\sin^2\\theta_{13}$) is rather subtle, see e.g.\\ \n\\cite{Minakata:2013eoa,Coloma:2014kca}. \nIn any case, the negative correlation emerging in NO slightly amplifies the effect of adding \natmospheric neutrino data, that prefer\nboth $\\delta\\sim 3\\pi\/2$ and the lower octant of $\\theta_{23}$, thus disfavoring $\\delta \\simeq \\pi$ in a synergic way.\nQuantitatively, we find that the CP-conserving value $\\delta=\\pi$ is disfavored \nat 90\\% C.L.\\ (or $\\sim\\!1.6\\sigma$, see Fig.~\\ref{Fig_03}), while recent analyses not\nincluding SK-IV atmospheric data allowed this value at $<1\\sigma$ \\cite{deSalas:2020pgw,Esteban:2020cvm}.\nAlthough these covariance effects are admittedly small in current data, they are expected\nto grow with increasing statistics and accuracy\nin LBL accelerator experiments, whose results we comment in more detail through the so-called bi-event plots, \nderived from bi-probability plots.\n\n\\newpage\n\n\n\\begin{figure}[t]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.83\\textwidth]{Fig_08.pdf}\n\\vspace*{-5.mm}\n\\caption{\\label{Fig_08}\n\\footnotesize \nBi-event plots: Total number of $\\nu$ and $\\overline\\nu$ appearance events \nfor T2K and NOvA, in four possible combinations. The slanted ellipses represent the theoretical expectations for NO (blue) and IO (red),\nand for two representative values of $\\sin^2\\theta_{23}$: 0.45 (lower octant, thin ellipses) and 0.57 (upper octant, thick ellipses). The CP-conserving value $\\delta=\\pi$ and the CP-violating value $\\delta=3\\pi\/2$\nare marked as a circle and a star, respectively. 
Each gray band represents one datum with its $\\pm1\\sigma$ statistical error (from \\cite{T2K2020,NOvA2020}); the combination of any two data provides a (black dashed) $1\\sigma$ error ellipse, whose center is marked by a cross. See the text for details.}\n\\end{minipage}\n\\vspace*{-2mm}\n\\end{figure}\n\n\nBi-probability plots, charted by the $\\nu_\\mu\\to\\nu_e$ and $\\overline\\nu_\\mu\\to\\overline\\nu_e$ appearance probabilities in LBL accelerator experiments at fixed neutrino energy,\ndisplay the cyclic dependence on $\\delta$ through ellipses \n\\cite{Minakata:2001qm} and help to understand parameter degeneracies \\cite{Minakata:2002qi}.\nSee \\cite{Kelly:2020fkv} for a related discussion, in the context of recent T2K and NOvA data. \nAfter integration over energy (weighted by $\\nu$ fluxes and cross sections), the probabilities\ncan be converted into total numbers of appearance events and thus into bi-event plots, preserving elliptic shapes\n \\cite{Mena:2006uw}. Such theoretical ellipses can be directly compared with the measured number of events; \nsee, e.g., the presentations at {\\em Neutrino 2020\\\/} \\cite{Nu2020} by T2K \\cite{T2K2020} and NOvA \\cite{NOvA2020}. \nAlthough we use the full energy spectra (and not their integrals) in our LBL accelerator data analysis, we\nthink that bi-event plots can help to highlight some issues emerging in the comparison of current T2K and NOvA data. \n\n \n \nFigure~\\ref{Fig_08} shows the $\\nu$ and $\\overline\\nu$ appearance events \nfor T2K and NOvA, in four possible combinations. The gray bands and the black ellipses represent the data with their \n$1\\sigma$ statistical errors, while the colored ellipses represent the theoretical expectations for NO (blue) and IO (red),\nfor two representative values of $\\theta_{23}$ in the lower octant (thin) or upper octant (thick). Two representative values\nof $\\delta$ ($\\pi$ and $3\\pi\/2$) are also marked on each ellipse. \n\nWe first consider the two experiments separately, as shown in \nthe upper left panel for T2K ($\\nu$ vs $\\overline\\nu$) and in the lower right panel for NOvA ($\\nu$ vs $\\overline\\nu$). In T2K, the best agreement of theory and data is reached for NO; in this ordering, there is a clear preference for $\\delta=3\\pi\/2$, and a slight preference for the upper\noctant of $\\theta_{23}$. In NOvA, all the four\ntheoretical ellipses are close to the experimental one, but the overlap is larger in NO; in this ordering,\nthere is a preference for $\\delta=\\pi\/2$ over $3\\pi\/2$, with no significant distinction between the $\\theta_{23}$ octants. \n\nWe then rearrange exactly the same information (both data and predictions)\nby combining T2K and NOvA separately in the $\\nu$ and $\\overline\\nu$ channels,\nas shown in the upper right panel for $\\overline\\nu$, and in the lower left panel for $\\nu$. \nIn both plots, the best agreement of data and theory is now reached for IO. In this ordering,\nthere is a clear preference for $\\delta=3\\pi\/2$, as well as for the upper (lower) octant in the $\\nu$ ($\\overline\\nu$) channel. 
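\n\nThe elliptic shapes themselves are easy to reproduce: for fixed mixing parameters and mass ordering, the energy-integrated appearance rates are linear in $\\cos\\delta$ and $\\sin\\delta$, so that the point (total $\\nu$ events, total $\\overline\\nu$ events) traces an ellipse as $\\delta$ spans $[0,\\,2\\pi)$. A minimal numerical sketch of this construction (in Python, with purely illustrative coefficients, not taken from the T2K or NOvA simulations) is:\n\\begin{verbatim}\nimport numpy as np\n\n# Event rates linear in (cos d, sin d); the coefficients below are\n# hypothetical placeholders, not actual T2K or NOvA predictions.\na,  b,  c  = 90.0, 10.0, -15.0   # nu-appearance coefficients\nap, bp, cp = 15.0,  2.0,  -3.0   # nubar-appearance coefficients\n\nd = np.linspace(0.0, 2.0 * np.pi, 361)\nN_nu    = a  + b  * np.cos(d) + c  * np.sin(d)\nN_nubar = ap + bp * np.cos(d) - cp * np.sin(d)\n\n# (N_nu, N_nubar) traces an ellipse; delta = pi and 3*pi/2 are points on it\nfor dd in (np.pi, 1.5 * np.pi):\n    print('delta/pi = %.2f -> (%.1f, %.1f)'\n          % (dd / np.pi,\n             a  + b  * np.cos(dd) + c  * np.sin(dd),\n             ap + bp * np.cos(dd) - cp * np.sin(dd)))\n\\end{verbatim}\nChanging the mass ordering or the $\\theta_{23}$ octant rescales and shifts such coefficients, which is what generates the four colored ellipses in each panel of Fig.~\\ref{Fig_08}.\n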
\n\nIn conclusion, Fig.~\\ref{Fig_08} shows that, as far as the three oscillation unknowns are concerned \n(mass ordering, $\\theta_{23}$ octant, CP symmetry), separate and combined T2K and NOvA data provide us with different indications, signalling a ``tension'' between such results; see also the\ndiscussion in \\cite{deSalas:2020pgw,Esteban:2020cvm,Kelly:2020fkv}.\n\\textcolor{black}{Ultimately, the tension reflects the fact that NOvA (T2K) observes relatively symmetric (asymmetric) rates of $\\nu$ and \n$\\overline\\nu$ in their current appearance data.}\n\n\n\\subsection{Remarks}\n\\label{sec:remarks}\n\nThe current hints about the \nthree oscillation unknowns are less convergent and more fragile than in the recent past \n\\cite{Capozzi:2020}, due to the T2K--NOvA tension. As for its origin,\nFig.~\\ref{Fig_08} shows that possible statistical fluctuations of the data (at the level of one or two standard deviations) might play a role. However, it makes sense to \nspeculate whether there is more than just statistics behind the tension. \nOne possibility is to invoke nonstandard neutrino interactions, which would induce\ndifferent effects along the T2K and NOvA baselines \\cite{Chatterjee:2020kkm,Denton:2020uda}. \nBarring new physics beyond the $3\\nu$ paradigm, we surmise that standard\ninteractions of neutrinos in nuclei might also play a systematic role.\n \nIt is widely recognized that both the total and the differential neutrino cross sections in nuclei are not known \naccurately enough for the purposes of LBL accelerator experiments \\cite{Mosel:2016cwa,Alvarez-Ruso:2017oui}. Roughly speaking,\nnormalization and energy reconstruction uncertainties in $\\nu_\\mu$ cross sections \naffect the measurement of $\\theta_{23}$ and $\\Delta m^2$, respectively, while\nsystematics on the relative $\\nu_\\mu\/\\nu_e$ and $\\nu\/\\overline\\nu$ cross sections affect the LBL experimental sensitivity\nto $\\theta_{13}$, $\\delta$ and the mass ordering. \nNote, e.g., that a 1\\% systematic error on the reconstructed neutrino energy $E^\\mathrm{rec}_\\nu$ is transferred to $\\Delta m^2$ \nvia the leading $\\Delta m^2\/E$ dependence of the oscillation phase. The formal $\\sim\\!1 \\%$ error on $\\Delta m^2$ \nemerging from the global fit (Table~\\ref{Tab:Synopsis}) implicitly posits \nthat energy reconstruction errors are known at the sub-percent level and are independent in different experiments,\nwhich may be an optimistic representation of the current uncertainties. \n\nWith increasing statistics and accuracy, cross-section\nsystematics may start to affect both known and unknown oscillation parameters\nextracted from detailed energy spectra. Possible parameter biases have been shown to arise by swapping \ndifferent cross-section models in simulations of prospective LBL data \n\\cite{Coloma:2013rqa,Coloma:2013tba,Benhar:2015wva}. We remark that\na percent-level bias on $\\Delta m^2$ as measured at LBL\naccelerators, in comparison with the $\\Delta m^2$ measurement at reactors, might alter the current combined preference for NO\n(see Fig.~\\ref{Fig_06} and related discussion).\nAlthough all these effects can be reduced \nby tuning interaction models to cross-section data from near detectors \\cite{T2K2020,NOvA2020}, \nas a matter of fact T2K and NOvA use two different such models, while no single model or neutrino generator\ncan be currently tuned to agree with world cross-section data \\cite{Alvarez-Ruso:2017oui,Barrow:2020gzb}. 
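\n\nTo make the energy-scale argument above explicit: the far-detector oscillation phase scales as $\\Delta m^2 L\/E$, so a fractional bias in the reconstructed energy is reabsorbed by an equal fractional shift of the fitted $\\Delta m^2$. A minimal sketch (in Python, with an illustrative T2K-like baseline and energy, not meant to reproduce any actual fit):\n\\begin{verbatim}\nimport numpy as np\n\n# Leading oscillation phase: phi = 1.267 * dm2[eV^2] * L[km] / E[GeV]\nL, E = 295.0, 0.6              # km, GeV (illustrative values)\ndm2  = 2.485e-3                # eV^2, NO best-fit |Delta m^2|\nphi  = 1.267 * dm2 * L / E     # observed (fixed) phase\nfor eps in (0.005, 0.01, 0.02):\n    E_rec   = E * (1.0 + eps)             # biased energy scale\n    dm2_fit = phi * E_rec / (1.267 * L)   # restores the same phase\n    print('%.1f%% energy bias -> %.1f%% bias on |dm2|'\n          % (100 * eps, 100 * (dm2_fit / dm2 - 1.0)))\n\\end{verbatim}\nA bias of this size, common to the LBL accelerator datasets but absent at reactors, would directly compete with the $\\sim\\!1\\%$ accuracy quoted in Table~\\ref{Tab:Synopsis}.\n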
\n\n\nSummarizing, the global analysis of current data shows a subtle interplay between the known oscillation parameters \n\\textcolor{black}{$(|\\Delta m^2|,\\,\\sin^2\\theta_{23})$} \nand the three unknowns $(\\delta,\\,\\mathrm{sign}(\\theta_{23}-\\pi\/4),\\,\\mathrm{sign}(\\Delta m^2))$, \nas discussed through Figs.~\\ref{Fig_05}--\\ref{Fig_07}. Although there are overall hints in favor of NO\n(at $2.5\\sigma$), CP violation (at $1.6\\sigma$) and the lower $\\theta_{23}$ octant (at $1.6\\sigma$), the T2K--NOvA tension (Fig.~\\ref{Fig_08}) warrants some caution. Neutrino interaction systematics might affect all these parameters, in a way that escapes control in\nglobal fits by external users, since the complexity of the near-to-far analysis chain\ncan be handled only by the experimental collaborations. \nIt is thus encouraging that T2K and NOvA are planning a joint analysis \\cite{JOINT}. In this context, we suggest, as a practical step,\nthat these two experiments try to swap interaction models or neutrino generators in their separate simulations, \nso as to gauge the relative size of cross-section systematics and tuning effects, before attempting a combined fit. \nIn general, we suggest that experiments sharing potentially relevant systematics \n(e.g., neutrino fluxes $\\Phi_\\alpha$ and interactions in water $\\sigma_\\beta$ for atmospheric neutrinos) collaborate on \na detailed comparison of such uncertainties and possibly work towards joint data analyses. \nOf course, experimental developments on $\\Phi_\\alpha$ and $\\sigma_\\beta$ should be accompanied by \ncorresponding advances in nuclear theory.\n\n\n\n\n\n\n\n\n\n\n\\section{Nonoscillation data, analysis methods and results}\n\\label{Sec:Nonosc}\n\n\n\n\nIn this section we introduce recent nonoscillation data that were not included in our previous work \\cite{Capozzi:2020}, together with the methodology used for their analysis in terms of the three absolute mass observables $(m_\\beta,\\,m_{\\beta\\beta},\\Sigma)$. We include the latest $m_\\beta$ upper bounds from the KATRIN experiment \\cite{KATRIN2021} and \nintroduce a method to combine the latest $0\\nu\\beta\\beta$ constraints in terms \nof upper bounds on $m_{\\beta\\beta}$, including correlated uncertainties on their nuclear matrix elements. We also enlarge the ensemble of cosmological datasets presented in \\cite{Capozzi:2020} in the light of the current lively discussion\non tensions in cosmology \\cite{DiValentino:2021izs,DiValentino:Snowmass,Challenge2021}, which might suggest possible inconsistencies among different data (or alterations of the standard $\\Lambda$CDM model). In particular, we consider an ``alternative'' dataset, which is exempt from the Planck lensing anomaly, at the price of being restricted to recent cosmic microwave background (CMB) anisotropy observations from ACTPol-DR4 \\cite{Aiola:2020azj} plus WMAP9 \\cite{Bennett:2012zja} \nand selected Planck data \\cite{Aghanim:2018eyx,Aghanim:2019ame,Aghanim:2018oex}. The alternative option provides a nonzero best fit and\nmore conservative upper bounds on\n$\\Sigma$, as compared with the ``default'' \ndataset described in \\cite{Capozzi:2020}. \nWe shall highlight the different sensitivities to $\\Sigma$ and to the mass ordering in the default and alternative options, as examples of admissible cosmological variants with rather different impact on global neutrino data analyses. 
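\n\nFor orientation, we recall that all three observables are simple functions of the neutrino masses and mixing parameters. The snippet below is a rough illustration only: it uses the NO best-fit oscillation parameters of Table~\\ref{Tab:Synopsis}, approximates $m^2_3\\simeq m^2_1+|\\Delta m^2|$, and scans the two (unknown) Majorana phases numerically to bracket $m_{\\beta\\beta}$:\n\\begin{verbatim}\nimport numpy as np\n\n# NO best-fit oscillation parameters\ndm2_sol, dm2_atm = 7.36e-5, 2.485e-3     # eV^2\ns12, s13 = 0.303, 0.0223                 # sin^2(theta_12), sin^2(theta_13)\nc12, c13 = 1.0 - s12, 1.0 - s13\n\ndef observables_NO(m0):\n    # masses, approximating m3^2 ~ m1^2 + |dm2_atm|\n    m1, m2, m3 = m0, np.sqrt(m0**2 + dm2_sol), np.sqrt(m0**2 + dm2_atm)\n    Sigma = m1 + m2 + m3\n    mbeta = np.sqrt(c13*c12*m1**2 + c13*s12*m2**2 + s13*m3**2)\n    # m_bb: modulus of three terms carrying unknown Majorana phases\n    t1, t2, t3 = c13*c12*m1, c13*s12*m2, s13*m3\n    ph = np.linspace(0.0, 2.0*np.pi, 181)\n    A, B = np.meshgrid(ph, ph)\n    mbb = np.abs(t1 + t2*np.exp(1j*A) + t3*np.exp(1j*B))\n    return Sigma, mbeta, mbb.min(), mbb.max()\n\nS, mb, lo, hi = observables_NO(0.0)      # massless lightest neutrino\nprint('Sigma = %.3f eV, m_beta = %.4f eV, m_bb in [%.4f, %.4f] eV'\n      % (S, mb, lo, hi))\n\\end{verbatim}\nThe correlated bands traced in this way, as the lightest mass and the oscillation parameters vary within their allowed ranges, underlie the combined analysis discussed below.\n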
\n\n\n\n\\subsection{Single beta decay and constraints on $m_\\beta$}\n\n\nThe KATRIN $\\beta$-decay experiment has recently released the results of the second campaign of measurements \\cite{KATRIN2021}. In combination with the results of the first campaign \\cite{Aker:2019uuj}, they constrain\nat $1\\sigma$ the effective squared mass $m^2_\\beta$ as:\n\\begin{equation}\n\\label{eq:KATRIN}\nm_\\beta^2 = 0.1 \\pm 0.3\\ \\mathrm{eV}^2,\n\\end{equation}\nwith an approximately gaussian distribution around the best fit, currently in the physical region\n$m^2_{\\beta}>0$ \\cite{KATRIN2021} (\nwhile it was negative in the first campaign \\cite{Aker:2019uuj}). The upper bound at 90\\% C.L.\\ ($\\sim\\! 1.6\\sigma$) \ncorresponds to $m^2_\\beta<0.6$~eV$^2$ or $m_\\beta<0.8$~eV, representing the first constraint on the effective\n$\\beta$-decay mass in the sub-eV range \\cite{KATRIN2021}. Note that variants of the statistical analysis may lead to small \ndifferences in the second significant digit of $m^2_\\beta$ or $m_\\beta$ \\cite{KATRIN2021},\nnot considered herein. We implement the datum in Eq.~(\\ref{eq:KATRIN}) via a contribution $((x-0.1)\/0.3)^2\\in \\chi^2$, where $x=(m_\\beta\/\\mathrm{eV})^2$. \n\n\\subsection{Neutrinoless double beta decay and constraints on half-lives and $m_{\\beta\\beta}$}\n\n\n\n\nNeutrinoless double beta decay \\cite{PDG6} can be considered as the process of creation of two matter particles (electrons) \\cite{Vissani:2021gdw}, occurring if neutrinos are of Majorana type \\cite{Bilenky:2019gzn}. \nWithin the $3\\nu$ paradigm, the decay half-life $T$ is given by\n\\begin{equation}\n\\label{eq:2beta}\n\\frac{1}{T_i} = G_i |M_i|^2 m^2_{\\beta\\beta} = S_i\n\\end{equation}\nwhere the index $i$ labels the $0\\nu\\beta\\beta$ nuclide, characterized by a phase space $G_i$ and a nuclear matrix element (NME) $M_i$, while $m_{\\beta\\beta}$ is the effective Majorana mass. The inverse half life $S_i=1\/T_i$ represents, up to a constant factor, the observable decay rate or signal strength. \n\n\n\n\nCurrent experiments are consistent with null signal ($S=0$), placing lower limits on $T$ and upper limits on $m_{\\beta\\beta}$ \\cite{Formaggio:2021nfz} via Eq.~(\\ref{eq:2beta}). In deriving separate and combined limits on $m_{\\beta\\beta}$, two issues arise: (1) experimental results are often given in terms of \n90\\% C.L.\\ bounds on $T$ (say, $T>T_{90}$), with little or no information on the probability distribution of $T$ (or of $S$);\n(2) theoretical uncertainties on the NME are rather large (and correlated among nuclides, e.g., via the axial coupling)\n\\cite{Engel:2016xgb}. See e.g.\\ \n\\cite{Biller:2021bqx} for a recent discussion. \nWe describe below our approach to these issues, in order to build first a probability distribution for half-lives $T_i$ and then, including NME's, for the effective mass $m_{\\beta\\beta}$. \n\n\nWe limit our analysis to experiments placing limits $T_{90}> 10^{25}$~y, namely: \nGERDA \\cite{Agostini:2020xta}\nand MAJORANA \\cite{Alvis:2019sil}\nfor $^{76}$Ge;\nCUORE \\cite{Adams:2021rbc}\nfor $^{130}$Te;\nKamLAND-Zen 400 \\cite{KamLAND-Zen:2016pfg}\nand 800 (preliminary) \\cite{Gando:2020cxo} \nand EXO-200 \\cite{Anton:2019wmi}\nfor $^{136}$Xe. For each experiment we need not only a limit ($T_{90}$) but the probability distribution of $T_i$ or, equivalently, a function $\\Delta\\chi^2(S_i)$. 
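\n\nAs an aside, the implementation of the single-$\\beta$ datum of Eq.~(\\ref{eq:KATRIN}) is elementary and already illustrates the rule adopted throughout this section, namely that 90\\% C.L.\\ bounds correspond to the one-parameter crossing $\\Delta\\chi^2=2.706$. A minimal sketch in Python:\n\\begin{verbatim}\nimport numpy as np\n\n# KATRIN I+II datum: m_beta^2 = 0.1 +- 0.3 eV^2 (approximately gaussian)\ndef dchi2_mbeta2(x):               # x = (m_beta / eV)^2\n    return ((x - 0.1) / 0.3) ** 2\n\n# 90% C.L. crossing Delta chi^2 = 2.706, scanned over the physical region\nx = np.linspace(0.0, 1.0, 100001)\nmb2_90 = x[dchi2_mbeta2(x) <= 2.706].max()\nprint('m_beta^2 < %.2f eV^2, m_beta < %.2f eV at 90%% C.L.'\n      % (mb2_90, np.sqrt(mb2_90)))   # ~0.6 eV^2 and ~0.8 eV\n\\end{verbatim}\n\n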
\nUnfortunately, such detailed information is not provided by current\nexperiments in an explicit or user-friendly way; see \\cite{Biller:2021bqx,Caldwell:2017mqu} for recent attempts at \nparametric reconstructions.\n\nWe adopt a $\\Delta\\chi^2(S_i)$ parametrization inspired by \\cite{Caldwell:2017mqu} and based on the following considerations.\nIn $0\\nu\\beta\\beta$ searches with zero background and nearly null results, the likelihood $\\cal L$ of a signal $S>0$ should be a poissonian with a scaling coefficient $\\mu$ ($\\cal L \\sim \\mathrm{exp}(-\\mu S)$), leading to a linear\ndependence on $S$ ($\\Delta \\chi^2 \\sim -2\\ln {\\cal{L}} \\propto S$) \\cite{Biller:2021bqx}. In $0\\nu\\beta\\beta$ searches with nonnegligible background subtraction, the dependence is expected to be nearly gaussian ($\\Delta \\chi^2 \\propto (S-S_0)^2$),\nwhere the best-fit signal $S_0$ may fall either in the physical region $(S_0\\geq 0)$\nor in the unphysical one $(S_0<0)$. All these limiting cases can be covered by a quadratic form\n\\begin{equation}\n\\label{eq:abc}\n\\Delta\\chi^2(S_i) = a_i\\, S_i^2 + b_i\\,S_i + c_i \\ ,\n\\end{equation}\nas previously advocated in \\cite{Caldwell:2017mqu} on\nan empirical basis. Note that the offset $c_i$ is set by the condition that $\\Delta\\chi^2\\geq 0$ \nin the physical region $S_i\\geq0$. In particular, for $a_i>0$, a $\\Delta\\chi^2$ minimum in the physical region implies\n$b_i\\leq 0$ and $c_i=b_i^2\/4a_i$, while in the unphysical one it implies $b_i> 0$ and $c_i=0$. For $a_i=0$ one recovers the linear limit, which implies $b_i>0$ and $c_i=0$. The case $a_i<0$ is never realized.\n\n\n\n\nIn order to assess the coefficients $(a_i,\\,b_i,\\,c_i)$, we have carefully sifted \nthe information contained in the experimental publications \n\\cite{Agostini:2020xta,\nAlvis:2019sil,\nAdams:2021rbc,\nKamLAND-Zen:2016pfg,\nGando:2020cxo,\nAnton:2019wmi} and in\navailable PhD theses conducted within EXO-200\n\\cite{Jewell,Ziegler}, KamLAND-Zen 400 \\cite{Sayuri} and KamLAND-Zen 800 (preliminary) \\cite{Ozaki}.\nWe find that the linear approximation advocated in \\cite{Biller:2021bqx} for GERDA can be \nroughly applied also to MAJORANA (up to subleading corrections at small $S_i$, \nneglected herein). The other experimental bounds on $T_i$ can be reasonably\napproximated by parabolic $\\Delta\\chi^2$ curves, setting the various coefficients $(a_i,\\,b_i,\\,c_i)$. We also\nrequire that our 90\\% C.L.\\ limit $\\Delta \\chi^2(S_{90})=2.706$ reproduces the $T_{90}$ limit reported by each \nexperiment, up to their quoted significant digits. \nWe have further checked that, by shifting the minimum of each $\\Delta\\chi^2$ function from $S=S_0$ to $S=0$,\nthe corresponding 90\\% C.L.\\ limits are in reasonable agreement with the reported sensitivities\nfor the null hypothesis.\nWe are thus confident that current experimental results are fairly well represented by our $\\Delta\\chi^2$'s.\nFinally, we combine different experiments probing the same nuclide,\nby adding up their $\\Delta\\chi^2$ functions. \n\n\n\n\n\n\n\\begin{table}[t!]\n\\centering\n\\resizebox{.8\\textwidth}{!}{\\begin{minipage}{\\textwidth}\n\\caption{\\label{tab:abc}\nNeutrinoless double beta decay: Details of the adopted parametrization $\\Delta\\chi^2(S_i)=a_i\\, S_i^2 + b_i\\,S_i + c_i \n$ for the signal strength $S_i=1\/T_i$, expressed\nin units of 10$^{-26}$~y$^{-1}$. The first two columns report the nuclide and the\nname of the experiment(s). 
The next three columns report our evaluation of the \ncoefficients $(a_i,\\,b_i,\\,c_i)$, for the various experiments, taken either separately (upper six rows) or in combination\nfor the same nuclide (lower three rows). The sixth column reports our 90\\% C.L.\\ \n($\\Delta\\chi^2=2.706$) half-life limits $T_{90}$ in units of $10^{26}$~y, to be compared with\nthe experimentally quoted ones in the seventh column (in the same units). Pertinent references are listed in the last column. \n}\n\\begin{ruledtabular}\n\\begin{tabular}{rlrrrccc}\nNuclide \t& Experiment(s) \t\t\t& $a_i~~~$ \t& $b_i~~~$ \t& $c_i~~~$ \t& $T_{90}\/10^{26}\\,\\mathrm{y}$ & $T_{90}$ (expt.)\n\t\t\t\t\t\t\t\t\t\t& References \\\\ \n\\hlin\n$^{76}$Ge\t& GERDA\t\t\t\t\t\t& 0.000 \t& \\textcolor{black}{4.867} \t& 0.000 \t& 1.800\t& 1.8\n\t\t\t\t\t\t\t\t\t\t& \\cite{Agostini:2020xta}\\\\\n$^{76}$Ge\t& MAJORANA\t\t\t\t\t& 0.000\t\t& \\textcolor{black}{0.731} \t& 0.000\t\t& 0.270 & 0.27 \n\t\t\t\t\t\t\t\t\t\t& \\cite{Alvis:2019sil}\\\\\n$^{130}$Te\t& CUORE\t\t\t\t\t\t& 0.245\t\t& $-0.637$ \t& 0.414\t\t& 0.216 & 0.22 \n\t\t\t\t\t\t\t\t\t\t& \\cite{Adams:2021rbc}\\\\\n$^{136}$Xe\t& KamLAND-Zen 400\t\t\t& 0.540\t\t& 2.374\t\t& 0.000\t\t& 1.065 & 1.07 \n\t\t\t\t\t\t\t\t\t\t&\\cite{KamLAND-Zen:2016pfg,Sayuri} \\\\\n$^{136}$Xe\t& KamLAND-Zen 800 prelim.\t& 1.006\t\t& $-0.169$\t& 0.007 \t& 0.580 & 0.58 \n\t\t\t\t\t\t\t\t\t\t&\\cite{Gando:2020cxo,Ozaki}\\\\\n$^{136}$Xe\t& EXO-200\t\t\t\t\t& 0.440\t\t& $-0.338$ \t& 0.065\t\t& 0.350 & 0.35 \n\t\t\t\t\t\t\t\t\t\t& \\cite{Anton:2019wmi,Jewell,Ziegler}\\\\\n\\hlin\n$^{76}$Ge\t& GERDA + MAJORANA\t\t\t& 0.000\t\t& 5.598\t\t& 0.000\t\t& 2.070 & \\textemdash \n\t\t\t\t\t\t\t\t\t\t& This work \\\\\n$^{130}$Te\t& CUORE\t(same as above)\t\t& 0.245\t\t& $-0.637$ \t& 0.414\t\t& 0.216 & 0.22 \n\t\t\t\t\t\t\t\t\t\t& \\cite{Adams:2021rbc} \\\\\n$^{136}$Xe\t& KamLAND-Zen (400 + 800 prelim.) + EXO-200\t\t\t\n\t\t\t\t\t\t\t\t\t\t& 1.986\t\t& 1.867\t\t& 0.000\t\t& 1.267 & \\textemdash \n\t\t\t\t\t\t\t\t\t\t& This work\n\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{minipage}}\n\\end{table}\n\n\nTable~\\ref{tab:abc} reports our numerical results for the coefficients $(a_i,\\,b_i,\\,c_i)$ used in Eq.~(\\ref{eq:abc}),\nfor both separate and combined bounds. [We formally keep up to four significant digits, \nto avoid accumulation of round-off errors in the analysis.] By combining GERDA and MAJORANA, \nwe evaluate a 90\\% C.L.\\ limit as high as $T>2.07\\times10^{26}$~y for the $^{76}$Ge half life, about 15\\% higher than from GERDA alone. A similar improvement is obtained for $^{136}$Xe, reaching $T>1.27\\times10^{26}$~y in the\ncombination of KamLAND-Zen and EXO data. For $^{130}$Te, CUORE alone sets the bound $T>0.22\\times10^{26}$~y. \n\n\n\n\n\n\n\n\nFigure~\\ref{Fig_09} shows our parametrized $\\Delta\\chi^2$ functions\nin terms of $1\/T=S$ (bottom abscissa) and of $T$ (top abscissa). The left and right panels refer to separate experiments and to combinations for the same nuclide, respectively. The dotted horizontal lines intersect all curves at the 90\\% C.L.\\ limit\n$T_{90}$. Note that the hierarchy of bounds at $90\\%$ C.L.\\ is not necessarily preserved at different statistical levels, since\nsome curves cross each other. 
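\n\nAs a cross-check, the $T_{90}$ values of Table~\\ref{tab:abc} (including the combinations) can be reproduced directly from the $(a_i,\\,b_i,\\,c_i)$ coefficients; a minimal Python sketch, whose only statistical input is the crossing $\\Delta\\chi^2=2.706$, reads:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import brentq\n\n# (a, b, c) of Table (tab:abc); S = 1/T in units of 1e-26 per year\n# KLZ = KamLAND-Zen\nEXPTS = {'GERDA':    (0.000,  4.867, 0.000),\n         'MAJORANA': (0.000,  0.731, 0.000),\n         'CUORE':    (0.245, -0.637, 0.414),\n         'KLZ-400':  (0.540,  2.374, 0.000),\n         'KLZ-800':  (1.006, -0.169, 0.007),\n         'EXO-200':  (0.440, -0.338, 0.065)}\n\ndef t90(names):\n    # add the Delta chi^2 of the listed experiments, re-zero the offset\n    # in the physical region S >= 0, then solve Delta chi^2(S90) = 2.706\n    def dchi2(S):\n        return sum(a*S**2 + b*S + c for a, b, c in (EXPTS[n] for n in names))\n    offset = dchi2(np.linspace(0.0, 20.0, 200001)).min()\n    S90 = brentq(lambda s: dchi2(s) - offset - 2.706, 0.0, 20.0)\n    return 1.0 / S90                # in units of 1e26 y\n\nprint('76Ge : T90 = %.2f' % t90(['GERDA', 'MAJORANA']))              # 2.07\nprint('136Xe: T90 = %.2f' % t90(['KLZ-400', 'KLZ-800', 'EXO-200']))  # 1.27\n\\end{verbatim}\nThe same function, applied to each experiment separately, returns the single-experiment $T_{90}$ values in the sixth column of Table~\\ref{tab:abc}.\n\n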
This is another reason to suggest that the experimental collaborations \nexplicitly provide their probability\nprofiles for $T$, rather than focusing on a single C.L.\\ limit ($T_{90}$), that \nprovides a poor summary of the data and a limited comparison of different results.\n\n\n\n\\begin{figure}[b!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.88\\textwidth]{Fig_09.pdf}\n\\caption{\\label{Fig_09}\n\\footnotesize \nNeutrinoless double beta decay: $\\Delta\\chi^2$ functions as defined in Eq.~(\\ref{eq:abc}),\nin terms of the half life $T$ (top abscissa) and of $S=1\/T$ (bottom abscissa). \nLeft and right panels: separate experiments and their combinations for the same nuclide, respectively.\nDotted horizontal lines intersect the curves at 90\\% C.L.}\n\\end{minipage}\n\\end{figure}\n\nIn order to translate constraints from $T_i$ to $m_{\\beta\\beta}$, one needs to know the phase space factors $G_i$ and\nthe nuclear matrix elements $|M_i|$ in Eq.~(\\ref{eq:2beta}). The $G_i$ can be accurately computed, see\n\\cite{Deppisch:2020ztt,Mirea:2015nsl} for recent calculations. The $|M_i|$ embed complex nuclear physics, currently treated\nin a variety of approaches that, unfortunately, still carry significant uncertainties despite the \ntheoretical progress, see e.g.\\ the recent reviews in \\cite{Engel:2016xgb,Coraggio:2020iht,Cirigliano:2020yhp,Dolinski:2019nrj,Ejiri:2019ezh,Vergados:2016hso,DellOro:2016tmg,Barea:2015kwa}. It is \ncommon practice to select a set of published values $\\{M_i\\}$ in order to obtain a set of upper \nbounds $\\{m_{\\beta\\beta}\\}$, whose \nspread may be taken as indicative of the theoretical uncertainties; see \\cite{Biller:2021bqx} for a very recent application.\nHowever, this procedure overlooks significant correlations among the NME uncertainties of different nuclides \n\\cite{Faessler:2008xj,Faessler:2013hz,Faessler:2011rv,Lisi:2015yma} \nthat, as shown below, are as important as the uncertainties themselves in obtaining conservative bounds.\n\nIn a given nuclear model, estimating NME covariances requires massive numerical experiments to generate many $M_i$ variants. To our knowledge, this task has been performed only in \\cite{Faessler:2008xj} within the quasiparticle random phase approximation (QRPA), by varying the axial coupling $g_A$, the short-range correlations, the model basis, and \nthe renormalization procedure, while requiring consistency with available $2\\nu\\beta\\beta$ data. Apart \nfrom the subsequent papers \\cite{Faessler:2013hz,Faessler:2011rv,Lisi:2015yma}, and\ndespite the potential relevance of NME covariance issues \\cite{Engel:2015wha},\nwe are aware of just one independent\n(but preliminary) correlation matrix estimate in a different model \\cite{Gautam:2017bgq} \nand of a single statistical analysis \\cite{Ge:2017erv} based \non the correlations in \\cite{Faessler:2008xj}.\nIn the absence of novel estimates of NME covariances, we shall use the only available results of \\cite{Faessler:2008xj} at face value. \nWe have checked\nthat, in any case, the NME uncertainties estimated therein are conservative enough to cover within $\\pm2\\sigma$ most\n(and within $\\pm3\\sigma$ all) of the $M_i$ values\ncompiled in \\cite{Engel:2016xgb,Coraggio:2020iht,Cirigliano:2020yhp,Dolinski:2019nrj,Ejiri:2019ezh,Vergados:2016hso,DellOro:2016tmg,Barea:2015kwa} for the three nuclides.\n\nWe summarize and use the results in \\cite{Faessler:2008xj} as follows. 
In order to deal with large $|M_i|$ variations, \npossibly hitting the unphysical range $|M_i|<0$, the (dimensionless) $|M_i|$ values are parametrized in terms of the logarithms $\\eta_i$,\n\\begin{equation}\n |M_i| = e^{\\eta_i} = e^{\\overline \\eta_i+\\Delta \\eta_i} = |\\overline M_i|e^{\\Delta \\eta_i}\\ , \n\\end{equation}\nwhere the overlined symbols represent central values, $|\\overline M_i|=e^{\\overline\\eta_i}$. The index $i=1,\\,2,\\,3$ runs over $^{76}$Ge, $^{130}$Te, $^{136}$Xe.\n The NME variations $\\Delta \\eta_i$ are endowed with variances $\\sigma^2_i$\nand a covariance matrix $\\sigma_{ij}=\\rho_{ij}\\sigma_i\\sigma_j$, whose inverse defines the weight matrix $w_{ij}=(\\sigma_{ij})^{-1}$. For any choice of $m_{\\beta\\beta}$ and of $\\Delta \\eta_i$, the signal strength in the\n$i$-th nuclide is given by\n\\begin{equation}\nS_i = G_i |M_i|^2 m^2_{\\beta\\beta} = q_i m^2_{\\beta\\beta} e^{2\\Delta \\eta_i}\\ , \n\\end{equation}\nwhere $q_i=G_i |\\overline M_i|^2$. The strengths $S_i$ carry two $\\Delta \\chi^2$ contributions: an experimental one coming from $\\Delta \\chi^2(S_i)$, and a theoretical one coming from NME covariances. These contributions are coupled by---and must be minimized over---the three variations $\\{\\Delta\\eta_i\\}$:\n\\begin{eqnarray}\n\\chi^2(m_{\\beta\\beta}) &=& \\min_{\\{\\Delta\\eta_i\\}}\\left(\n\\sum_{i=1}^3 \\Delta\\chi^2(S_i)+\\sum_{i,j=1}^3 w_{ij}\\, \\Delta\\eta_i\\Delta\\eta_j\\right)\\\\\n&=& \\min_{\\{\\Delta\\eta_i\\}}\\left( \\sum_{i=1}^3 \\left( a_i q^2_i m^4_{\\beta\\beta}e^{4\\Delta\\eta_i}\n+b_i q_i m^2_{\\beta\\beta}e^{2\\Delta\\eta_i} + c_i\\right) + \\sum_{i,j=1}^3 w_{ij}\\, \\Delta\\eta_i\\Delta\\eta_j\\right)\\ ,\n\\end{eqnarray}\nwhere the first line holds in general, while the second line refers to the specific parametrization in\nEq.~(\\ref{eq:abc}). \nMinimization yields three coupled equations in the three $\\Delta\\eta_i$ unknowns, to be solved numerically. \nNeglecting NME correlations (or errors)\namounts to setting $\\rho_{ij} = \\delta_{ij}$ (or $\\Delta\\eta_i=0$). Bounds for a single nuclide are obtained by selecting a specific index $i$. Subtraction of a $\\chi^2_{\\min}$ offset (if any) yields\nthe desired $\\Delta\\chi^2(m_{\\beta\\beta})$. Table~\\ref{tab:Old} reports the values of $\\sigma_{ij}$ and $q_i$ used herein. \n\n\n \n \nFigure~\\ref{Fig_10} shows the resulting bounds on $m_{\\beta\\beta}$ for \nthe three nuclides taken separately (left panel) and in combination (right panel). The solid and dotted curves refer to our estimates with and without NME uncertainties, respectively. Currently, the most constraining results\nare obtained by combining $^{76}$Ge data, followed by weaker constraints from $^{136}$Xe and $^{130}$Te data.\nIn the\nright panel, the case with uncorrelated NME uncertainties is also shown (dashed line).\nThe effect of correlations is noticeable and leads to more conservative bounds;\nin fact, when the NME of different nuclides are positively correlated,\nthey are more likely to become all smaller at the same time (with respect to the uncorrelated case), allowing larger \nvalues of $m_{\\beta\\beta}$. Including correlated errors, we obtain the overall bound\n$m_{\\beta\\beta}<0.11$~eV at $2\\sigma$; the same bound was previously estimated (although in a less refined way) as $m_{\\beta\\beta}<0.14$~eV in \\cite{Capozzi:2020} and as $m_{\\beta\\beta}<0.18$~eV in \\cite{Capozzi:2017ipn}, reflecting the steady experimental progress in the last few years. 
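\n\nThe numerical minimization itself is straightforward. A minimal Python sketch, using the combined $(a_i,\\,b_i,\\,c_i)$ of Table~\\ref{tab:abc} and the $q_i$ of Table~\\ref{tab:Old}, with the NME weight matrix $w=\\sigma^{-1}$ left as an external input to be built from the $\\sigma_{ij}$ of the same table, is:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize, brentq\n\nABC = np.array([[0.000,  5.598, 0.000],    # 76Ge  (combined)\n                [0.245, -0.637, 0.414],    # 130Te\n                [1.986,  1.867, 0.000]])   # 136Xe (combined)\nQ = np.array([56.6, 210.0, 73.1])          # q_i in 1e-26 / (y eV^2)\n\ndef dchi2_expt(S):\n    a, b, c = ABC.T\n    return np.sum(a*S**2 + b*S + c)\n\ndef chi2_mbb(mbb, w=None):\n    # w = inverse NME covariance matrix; w=None corresponds to\n    # Delta eta_i = 0, i.e. no NME uncertainties (dotted curves)\n    if w is None:\n        return dchi2_expt(Q * mbb**2)\n    cost = lambda deta: (dchi2_expt(Q * mbb**2 * np.exp(2.0*deta))\n                         + deta @ w @ deta)\n    return minimize(cost, np.zeros(3), method='Nelder-Mead').fun\n\ndef bound_2sigma(w=None):                  # Delta chi^2 = 4 crossing\n    offset = chi2_mbb(0.0, w)\n    return brentq(lambda m: chi2_mbb(m, w) - offset - 4.0, 1e-4, 1.0)\n\nprint('m_bb < %.3f eV at 2 sigma (no NME errors)' % bound_2sigma())\n\\end{verbatim}\nSupplying $w=\\sigma^{-1}$ with the correlated entries of Table~\\ref{tab:Old} corresponds to the solid curves of Fig.~\\ref{Fig_10}, while a diagonal $\\sigma_{ij}$ corresponds to the dashed, uncorrelated case.\n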
\n\n \n\n\n\n\\begin{table}[b!]\n\\centering\n\\begin{minipage}{.5\\textwidth}\n\\caption{\\label{tab:Old} \\footnotesize \nNeutrinoless double beta decay: NME covariance matrix $\\sigma_{ij}$ \nand auxiliary coefficients $q_i=G_i|\\overline M_i|^2$, as derived from the results in\n\\cite{Faessler:2008xj}; see the text for details. The $q_i$\nare given in units of $10^{-26}\\,\\mathrm{y}^{-1}\\, \\mathrm{eV}^{-2}$, for $m_{\\beta\\beta}$ expressed in eV. }\n\\begin{ruledtabular}\n\\begin{tabular}{cc|ccc|c}\n$i$ & Nuclide & & $\\sigma_{ij}$ & & $q_i$ \\\\\n\\hline\n1 & $^{76}$Ge & 0.0790 & & & 56.6 \\\\\n2 & $^{130}$Te & 0.0920 & 0.0135 & & 210.0 \\\\\n3 & $^{136}$Xe & 0.0975 & 0.1437 & 0.1858 & 73.1 \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{minipage}\n\\end{table}\n\n\n \n\\newpage\n\n\nWe emphasize that the above methodology can be applied\n to $0\\nu\\beta\\beta$ data consistent with either a null signal or a positive detection.\nIt may include generic likelihoods for the half-lives $T_i$ (hopefully provided by the experimental collaborations) \n and alternative evaluations of the NME covariances (possibly computed in different nuclear models). \n\n\n\n\n \n\n\n\\begin{figure}[t!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.88\\textwidth]{Fig_10.pdf}\n\\caption{\\label{Fig_10}\n\\footnotesize \nNeutrinoless double beta decay: Our estimated bounds on $m_{\\beta\\beta}$, expressed in terms of $N_\\sigma=\\sqrt \\Delta\\chi^2$. \nThe left and right panels refer, respectively, to separate and combined bounds from the three nuclides, \nwith (solid) or without (dotted) NME uncertainties. In the right panel, \nthe case with uncorrelated uncertainties is also shown (dashed).}\n\\end{minipage}\n\\end{figure}\n\n\n\n\n\n\n\n \n\n\n\n \n\n\n\n\n\\vspace*{-2mm}\n\\subsection{Cosmology and constraints on $\\Sigma$}\n\\vspace*{-1mm}\n\nIn this section we discuss various choices for cosmological data \ncombinations, enlarging the set of cases previously considered in \\cite{Capozzi:2020}, in order to deal\nwith known tensions about lensing data. We then focus \non two specific cases, dubbed as default and alternative options, leading to different implications for $\\Sigma$ and \nthe sensitivity to mass ordering. \n\nWe remind that in \\cite{Capozzi:2020} we\nanalyzed the following data in various combinations: the complete Planck 2018 data (Planck) on the\nangular CMB temperature power spectrum (TT) \nplus polarization power spectra (TE, EE) \\cite{Aghanim:2018eyx,Aghanim:2019ame}\nand lensing reconstruction power spectrum (lensing) \\cite{Aghanim:2018oex}; \na compilation of Baryon Acoustic Oscillation (BAO) measurements, given by data from the 6dFGS~\\cite{6dFGS}, SDSS MGS~\\cite{mgs}, and BOSS DR12~\\cite{bossdr12} surveys; and the Hubble constant datum [$H_0$({\\tt R19})] from HST observations of Cepheids in the Large Magellanic Cloud measurements \\cite{Riess:2019cxk}. 
We assume the standard 6-parameter $\\Lambda$CDM model augmented with nonzero neutrino masses ($\\Lambda$CDM+$\\Sigma$) and, in some cases, we add\nan extra empirical parameter $A_{\\mathrm{lens}}$ to marginalize over the excess amount of gravitational lensing \nemerging in the fits of the Planck data \\cite{Aghanim:2018eyx};\n\\textcolor{black}{see also \\cite{RoyChoudhury:2019hls} for a similar approach.}\n\nHere we also consider an alternative way to deal with the Planck lensing problem, in the light of the wider debate\nabout data tensions in the standard $\\Lambda$CDM model \\cite{DiValentino:2021izs,Challenge2021}. In particular,\nrather than introducing an additional parameter to account for unknown systematics, one may consider\nrestricting the analysis to alternative CMB datasets not affected by internal or mutual tensions. \nA possibility is offered by the combination of the recent CMB anisotropy data from ACTPol-DR4\n \\cite{Aiola:2020azj} with WMAP9 data \\cite{Bennett:2012zja}, together with a \nPlanck-derived conservative gaussian prior on the optical depth to reionization $\\tau$\n[$\\tau_\\mathrm{prior}=0.065 \\pm 0.015$]. The ACT Collaboration explored this combination in \\cite{Aiola:2020azj},\nfinding no evidence for a lensing anomaly, and obtaining a relaxed\nupper bound on the total neutrino mass, $\\Sigma < 1.2$~eV at $2\\sigma$. \nUsing the same data combination, we get an upper bound in excellent agreement, $\\Sigma<1.21$~eV; interestingly, we also find that the best fit is at $\\Sigma\\simeq 0.70$~eV, with no significant difference between NO and IO. If we replace the gaussian prior on $\\tau$\nwith the actual Planck polarization data at large angular scale (dubbed Planck LowE) \n\\cite{Aghanim:2019ame}, that directly constrain the value of $\\tau$, we obtain a slightly stronger upper bound $\\Sigma<1.12$~eV (with a best fit at \n$\\Sigma\\simeq 0.58$~eV). Finally, in order to achieve a more complete combination of CMB data not in tension with each other,\n we further add to ACT and WMAP the independent Planck lensing reconstruction results \n\\cite{Aghanim:2018oex}, that do not suffer of the lensing anomaly. In this case\nthe upper bound becomes $\\Sigma<0.96$~eV, with a best fit at $\\Sigma\\simeq 0.54$~eV, as obtained with CMB-only data.\nOf course, by including additional data, such as BAO measurements, the upper bound on $\\Sigma$ would \nbe pushed back to $\\sim 10^{-1}$~eV (or even below as shown in \\cite{DiValentino:2021hoh}), \nat the price of a significant mutual tension among different datasets;\nthese fits, that would be more comprehensive but also more discordant, are not further discussed herein. \n\n\n\\newpage \n\nTable~\\ref{Tab:Cosmo} reports, for convenience, the bounds on $\\Sigma$ for both the cosmological inputs\nconsidered in \\cite{Capozzi:2020} (cases 0--9) and the new ones discussed above (cases 10--12). These\ninputs can be roughly divided into two categories: a first group (cases 0--6) where one includes at face value Planck CMB results (plus\nother data) despite the Planck lensing anomaly, obtaining\nrelatively strong upper bounds on $\\Sigma$ and a noticeable sensitivity to mass ordering; \nand a second group (cases 7--12), where one ``solves'' the lensing anomaly (by either adding an extra model parameter\nor by considering alternative CMB data), with weaker upper bounds on $\\Sigma$ and no significant sensitivity\nto NO vs IO. 
Hereafter we focus on just two representative cases for these two different categories, namely, case \\#3 (dubbed ``default'' as in \\cite{Capozzi:2020}) and case \\#12 (dubbed ``alternative''). \n\n\n\n\n\n\\begin{table}[t]\n\\resizebox{.78\\textwidth}{!}{\\begin{minipage}{\\textwidth}\n\\caption{\\label{Tab:Cosmo} \n\\footnotesize Results of the global $3\\nu$ analysis of cosmological data \nwithin the standard $\\Lambda\\mathrm{CDM}+\\Sigma$ model (possibly augmented with the $A_\\mathrm{lens}$ parameter). \nThe inputs numbered from 0 to 9 are the same as in \\cite{Capozzi:2020}, and refer to various combinations\nof the Planck 2018 angular CMB temperature power spectrum (TT) \nplus polarization power spectra (TE, EE), lensing potential power spectrum (lensing), Barion Acoustic Oscillations (BAO), and the Hubble constant from HST observations of Cepheids in the Large Magellanic Cloud, $H_0$({\\tt R19}).\nThe inputs numbered from 10 to 12 are new and refer to ACTPol-DR4 and WMAP9 data, in combination with \na prior on optical depth to reionization ($\\tau_\\mathrm{prior}$), \nPlanck polarization data at large angular scale (lowE), and lensing data. \nFor each case we report the $2\\sigma$ upper bound on the sum of $\\nu$ masses $\\Sigma$ (marginalized over NO and IO), together\nwith the $\\Delta\\chi^2$ difference between IO and NO, using cosmology only. In the last two columns, we report the \nsame information as in the previous two columns, but using cosmological data plus $ m_\\beta$ and $m_{\\beta\\beta}$ constraints. The specific cases numbered 3 and 12 are dubbed as default\nand alternative, see the text for details.}\n\\vspace*{0mm}\n\\centering\n\\begin{ruledtabular}\n\\begin{tabular}{cllcc|cc}\n\\multicolumn{3}{l}{Cosmological inputs for nonoscillation data analysis} & \\multicolumn{2}{c|}{Results: Cosmo only} & \\multicolumn{2}{c}{Cosmo + $m_\\beta$ + $m_{\\beta\\beta}$} \\\\[1mm]\n\\# & Model & Data set & $\\Sigma$ ($2\\sigma$) & $\\Delta\\chi^2_\\mathrm{IO-NO}$ & $\\Sigma$ ($2\\sigma$) & $\\Delta\\chi^2_\\mathrm{IO-NO}$ \\\\[1mm]\n\\hlin\n 0 & $\\Lambda\\mathrm{CDM}+\\Sigma$\t\t\t\t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} \t\t\t\t\t\t\t\t\t& $<0.34$ eV & $ 0.9$ & $<0.32$ eV & $ 1.0$ \\\\\n\\hlin\n 1 & $\\Lambda\\mathrm{CDM}+\\Sigma$\t\t\t\t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + lensing\t\t\t\t\t\t\t& $<0.30$ eV & $ 0.8$ & $<0.28$ eV & $ 0.9$ \\\\\n 2 & $\\Lambda\\mathrm{CDM}+\\Sigma$ \t\t\t\t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + BAO\t\t\t\t\t\t\t\t& $<0.17$ eV & $ 1.6$ & $<0.17$ eV & $ 1.8$ \\\\\n 3 & $\\Lambda\\mathrm{CDM}+\\Sigma$ \t\t\t\t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + BAO + lensing\t\t\t\t\t\t& $<0.15$ eV & $ 2.0$ & $<0.15$ eV & $ 2.2$ \\\\\n\\hlin\n 4 & $\\Lambda\\mathrm{CDM}+\\Sigma$\t\t\t\t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + lensing + $H_0$({\\tt R19})\t& $<0.13$ eV & $ 3.9$ & $<0.13$ eV & $ 4.0$ \\\\\n 5 & $\\Lambda\\mathrm{CDM}+\\Sigma$ \t\t\t\t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + BAO + $H_0$({\\tt R19}) \t\t& $<0.13$ eV & $ 3.1$ & $<0.13$ eV & $ 3.2$ \\\\\n 6 & $\\Lambda\\mathrm{CDM}+\\Sigma$ \t\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + BAO + lensing + $H_0$({\\tt R19}) \\ \\\t& $<0.12$ eV & $ 3.7$ & $<0.12$ eV & $ 3.8$ \\\\\n\\hlin\n 7 & $\\Lambda\\mathrm{CDM}+\\Sigma+A_\\mathrm{lens}$\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + lensing\t\t\t\t\t\t\t& $<0.77$ eV & $ 0.1$ & $<0.66$ eV & $ 0.1$ \\\\\n 8 & $\\Lambda\\mathrm{CDM}+\\Sigma+A_\\mathrm{lens}$\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + BAO\t\t\t\t\t\t\t\t& $<0.31$ eV & $ 0.2$ & 
$<0.30$ eV & $ 0.3$ \\\\\n 9 & $\\Lambda\\mathrm{CDM}+\\Sigma+A_\\mathrm{lens}$\t& Planck {\\scriptsize TT,\\,TE,\\,EE} + BAO + lensing\t\t\t\t\t\t& $<0.31$ eV & $ 0.1$ & $<0.30$ eV & $ 0.2$ \\\\\n\\hlin\n 10 & $\\Lambda\\mathrm{CDM}+\\Sigma$\t& ACT + WMAP + $\\tau_\\mathrm{prior}$\t\t\t\t\t\t\t\t\t\t\t\t\t& $<1.21$ eV & $-0.1$ & $<1.00$ eV & $ 0.1$ \\\\\n 11 & $\\Lambda\\mathrm{CDM}+\\Sigma$\t& ACT + WMAP + Planck lowE\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& $<1.12$ eV & $-0.1$ & $<0.87$ eV & $ 0.1$ \\\\\n 12 & $\\Lambda\\mathrm{CDM}+\\Sigma$\t& ACT + WMAP + Planck lowE + lensing\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t& $<0.96$ eV & $0.0$ & $<0.85$ eV & $ 0.1$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{minipage}}\n\\end{table}\n\n\n\nFigure~\\ref{Fig_11} shows the $\\Delta\\chi^2$ curves for the default and alternative cases. In the default case, cosmological data\npush $\\Sigma$ towards its lowest physical values in both IO and NO, and favors the latter at the level of $\\sim\\! 1.5\\sigma$. In the alternative case, there\nis a preference $(<2\\sigma)$ for higher $\\Sigma$ values, with a best fit at $\\Sigma\\simeq 0.54$~eV and an upper \nlimit $\\Sigma< 0.96$~eV at $2\\sigma$, while there are only minor differences between NO and IO.\nRoughly speaking, the alternative case corresponds to a putative cosmological ``signal'' for\nneutrino masses, equivalent to $\\Sigma \\simeq 0.54 \\pm 0.22$~eV (with symmetrized $1\\sigma$ errors). \n\n\n\n\n\n\\begin{figure}[b!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.48\\textwidth]{Fig_11.pdf}\n\\vspace*{-3mm}\n\\caption{\\label{Fig_11}\n\\footnotesize \nCosmology: $\\Delta\\chi^2$ curves for NO (blue) and IO (red) for the cases numbered in Table~\\ref{Tab:Cosmo} as \\#3 (left, solid) and \\#12 (right, dashed), taken as representative of default and alternative options, respectively. The cases \n\\#10 and \\#11 (not shown) would be qualitatively similar to case \\#12.}\n\\end{minipage}\n\\end{figure}\n\n\\newpage\n\\textcolor{black}{\nWe remark that, at fixed $\\Sigma$, the small differences between NO and IO fits are due to: a slight sensitivity\nof cosmological data to the different ordering of $\\nu$ masses at small $\\Sigma$; \nthe conversion from fit probability densities $P(\\Sigma)$ to $\\chi^2(\\Sigma)$ functions, as the $P$ normalization\ncovers different physical $\\Sigma$ ranges in NO and IO; and, to a lesser extent ($\\Delta\\chi^2<0.1$), to \nnumerical fit inaccuracies for $P\\to 0$ (i.e., for high $\\chi^2)$. See also the comments in Sec.~{II~C} of \\cite{Capozzi:2017ipn}.\n} \n \nSummarizing, the two cases in Fig.~\\ref{Fig_11} correspond to two qualitatively different outcomes, \nthat might persist in future cosmological data analyses, as opposite examples of the tradeoff between \ncompleteness and consistency of inputs.\n On the one hand, by \ncombining various datasets at face value (regardless of possible tensions), one may obtain strong upper limits,\n$\\Sigma0.06$~eV, see Fig.~\\ref{Fig_12}); however, such values \nare disfavored by $0\\nu\\beta\\beta$ data at $>1\\sigma$ in Fig.~\\ref{Fig_10}. A best-fit \ncompromise is reached for intermediate values, $\\Sigma\\sim 0.4$~eV and $m_{\\beta\\beta}\\simeq 0.05$~eV,\nsurrounded by large $2\\sigma$ allowed regions. 
In the right panel, note that both \ncosmology and $0\\nu\\beta\\beta$ data constrain the correlations bands from above,\nleading to a joint $2\\sigma$ bound $\\Sigma<0.85$~eV, stronger than the bound from cosmology only \n($\\Sigma<0.96$~eV, see also Table~\\ref{Tab:Cosmo}). In all cases,\ncurrent $\\beta$-decay data play a minor role in the overall fit.\n\nThe implications of Fig.~\\ref{Fig_13} can be summarized as follows. In the default case, \nit appears that the current KATRIN experiment (probing $m_\\beta>0.2$~eV) is\nnot expected to find any signal, while planned $0\\nu\\beta\\beta$ experiments are expected to probe at\nleast the region covered by both NO and IO ($m_{\\beta\\beta}>0.02$~eV). The region covered \nonly by NO ($m_{\\beta\\beta}<0.02$~eV) is more difficult to probe, and becomes eventually \nprohibitive as $m_{\\beta\\beta}$ vanishes, see e.g. \\cite{Dolinski:2019nrj,Pascoli:2007qh,Penedo:2018kpc}.\nIn the alternative case, a much larger phase space is amenable to $\\beta$ decay and $0\\nu\\beta\\beta$ decay\nsearches. Cosmological searches may find a signal for $\\Sigma$ in a wide sub-eV range. \nNeutrinoless double beta decay data might find a signal for $m_{\\beta\\beta}$ anywhere \nbelow the current bounds. \nThe KATRIN experiment might find a signal in its sensitivity region $(m_\\beta>0.2)$~eV,\nor at least strengthen significantly the upper bounds on $m_\\beta$. \n\n\\newpage\n\n\n\n\n\n\n\\begin{figure}[t!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.95\\textwidth]{Fig_14.pdf}\n\\caption{\\label{Fig_14}\n\\footnotesize \n$N_\\sigma$ bounds on the single nonoscillation parameters \n$\\Sigma$ (left), $m_{\\beta\\beta}$ (center) and $m_\\beta$ (right), assuming default cosmological inputs. \nThe combination of nonoscillation data induces the offset\nbetween the absolute minima in IO (red) and NO (blue).\n}\n\\end{minipage}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.95\\textwidth]{Fig_15.pdf}\n\\caption{\\label{Fig_15}\n\\footnotesize \nAs in Fig.~\\ref{Fig_14}, but assuming alternative cosmological inputs. \n}\n\\end{minipage}\n\\end{figure}\n\n\\subsection{Results on single nonoscillation observables}\n\nFigures~\\ref{Fig_14} and \\ref{Fig_15} show the projected bounds on single nonoscillation parameters for the \ndefault and alternative cases, respectively. In both cases we account for the NO--IO offset coming\nfrom the combination of all nonoscillation data (i.e., the rightmost numbers in rows \\#3 and \\#12 of Table~\\ref{Tab:Cosmo}\n(that were omitted in the previous Figs.~\\ref{Fig_12} and \\ref{Fig_13}). A vertical rise of $N_\\sigma$ occurs\nwhen the lower physical limits are reached.\nThese two figures quantify \nprevious considerations about the default and alternative options: the first exemplifies the case \nof strong upper bounds on $\\Sigma$ from cosmology, accompanied by some sensitivity to the mass ordering, \nand by hard-to-probe phase spaces for $m_\\beta$ and $m_{\\beta\\beta}$; the second represents the case of \nweaker upper bounds (and a possible signal) for $\\Sigma$, with scarce sensitivity to the mass ordering but \nmore optimistic expectations for $m_\\beta$ and $m_{\\beta\\beta}$ signals. 
Together with \n\\textcolor{black}{Fig.~\\ref{Fig_03}}, \nthe above Figs.~\\ref{Fig_14} and \\ref{Fig_15} provide a neat summary \nof what we (do not) know in the standard $3\\nu$ paradigm.%\n\\footnote{\n\\textcolor{black}{\nAt present, we stick to the viewpoint expressed in \\cite{Fogli:2004as} and prefer to project away \nunobservable quantities, such as the lightest neutrino mass $m_0$ and the two Majorana phases $\\eta_{1,2}$ (as defined in \\cite{PDG1}). Of course,\nwhen significant (and convergent) signals will emerge among the three observables\n$(m_\\beta,\\,m_{\\beta\\beta},\\,\\Sigma)$, meaningful bounds on $m_0$ (and possibly weak hints on $\\eta_{1,2}$) \nmay also be derived.}\n}\n\n\n\n\\section{Synthesis}\n\nWe conclude our work by merging the information coming from the analysis of oscillation and nonoscillation data,\nthat have one important observable in common: the mass ordering. Figure~\\ref{Fig_16} shows a histogram with separate and combined contributions to the \n$\\Delta\\chi^2_{\\mathrm{IO-NO}}$. The first bin adds up the contributions from oscillation data, starting from\nthe negative one in the combination of LBL accelerator, solar and KamLAND data, that becomes positive by adding SBL reactor data, and further increases with atmospheric data. The second bin shows the range spanned by all the cases considered in Table~\\ref{Tab:Cosmo}, for the fit to cosmological data only.\n The third bin shows the slight change induced by adding \ncurrent constraints on $m_\\beta$ and $m_{\\beta\\beta}$, that provide an extra upward shift (see Tab.~\\ref{Tab:Cosmo}).\nThe fourth bin adds up the contents of the first and third bins, providing an overall indication\nin favor of NO in the range $\\sim 2.5$--$3.2\\sigma$. \n Although none of the single oscillation or nonoscillation data sets provides compelling evidence for \n normal ordering yet, their current combination clearly favors this option at a global $\\sim 3\\sigma$ level. \n\n\\begin{figure}[t!]\n\\begin{minipage}[c]{0.85\\textwidth}\n\\includegraphics[width=0.8\\textwidth]{Fig_16.pdf}\n\\vspace*{-7mm}\n\\caption{\\label{Fig_16}\n\\footnotesize \nBreakdown of contributions to the IO-NO $\\chi^2$ difference from oscillation and nonoscillation data. 
The latter span the range \nof all the cosmological input variants reported in Table~\\ref{Tab:Cosmo}, as indicated by the horizontal lines (the thick one \ncorresponding to the \ndefault case).}\n\\end{minipage}\n\\end{figure}\n\n\nIn conclusion, the main results of our global analysis can be essentially summarized in terms of bounds $N_\\sigma=N_\\sigma(p)$,\nas shown in the following figures: \nFig.~\\ref{Fig_03} for the neutrino oscillation parameters \n$p=\\delta m^2,\\,\n\\textcolor{black}{|\\Delta m^2|},\n\\,\\theta_{12},\\,\\theta_{23},\\,\\theta_{13},\\,\\delta$; \nFigs.~\\ref{Fig_14} and \\ref{Fig_15} for the nonoscillation observables $p=m_\\beta,\\,m_{\\beta\\beta},\\,\\Sigma$,\nin two representative cases for cosmological inputs; and Fig.~\\ref{Fig_16} for the discrete mass ordering parameter, $p=\\mathrm{sign}(\\Delta m^2)=\\mathrm{NO(+)\/IO}(-)$.\nFinishing the fabric of the $3\\nu$ paradigm amounts to having convergent, \nnarrow and linear $N_\\sigma$ bounds, for one surviving mass\nordering, in terms of any continuous $3\\nu$ oscillation parameter and nonoscillation observable $p$ (with \nthe possible exception of $m_{\\beta\\beta}$, if neutrinos have a Dirac nature).\nAt present, this goal has been reached for $p=\\delta m^2,\\,\n\\textcolor{black}{|\\Delta m^2|},\n\\,\\theta_{12},\\,\\theta_{13}$\nand, to some extent, for $p=\\theta_{23}$ (up to an octant ambiguity). The current \nresults for $p=\\delta,\\,m_\\beta,\\,m_{\\beta\\beta},\\,\\Sigma$ and NO\/IO\nmay instead be considered as initial, shaky steps of a long march towards the characterization \nof the neutrino-antineutrino differences and of the absolute neutrino mass spectrum. \nOn the way, we shall learn a lot about neutrino properties in many different contexts, \nclarify the origin of old and new data tensions, and possibly find obstacles\nthat, tearing away the fabric of the $3\\nu$ paradigm, may reveal hidden new physics. \n\n\n\n\n\n\\acknowledgments\n\n \n\nWe are grateful to M.\\ Nakahata for informing us about the latest public release\nof the Super-Kamiokande atmospheric neutrino (preliminary) analysis \\cite{SKmap}.\nWe thank L.\\ Pandola and M. Sisti for useful discussions about\n$0\\nu\\beta\\beta$ decay results. This work is partly supported by the Italian Ministero dell'Universit\\`a e Ricerca (MUR) through\nthe research grant number 2017W4HA7S ``NAT-NET: Neutrino and Astroparticle Theory Network'' under the program PRIN 2017,\nand by the Istituto Nazionale di Fisica \nNucleare (INFN) through the ``Theoretical Astroparticle Physics'' (TAsP) project. \nThe work of F.C. 
is supported by the U.S.\\ Department of\nEnergy under the award number DE-SC0020250.\nE.D.V.\\ acknowledges the support of the Addison-Wheeler Fellowship awarded by the Institute of Advanced Study at Durham University.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDissipative nonlinear wave equations are well-posed in $H^k(\\Real^n)\\times H^{k-1}(\\Real^n)$\nfor all $k\\in [1,2]$ and $n\\geq 1$ under the monotonicity condition of Lions and Strauss~\\cite{LiSt}.\nThis global result makes no exception for nonlinear dissipations of supercritical power, as determined by invariant\nscaling, and gets around the stringent conditions for well-posedness of general nonlinear wave equations; see\nPonce and Sideris~\\cite{PS}, Lindblad~\\cite{Lind}, Wang and Fang~\\cite{WaFa} and the references therein.\n\nThe monotonicity method has been less effective in studying higher regularity.\nIt is still an open question whether\nsupercritical problems are globally well-posed in Sobolev spaces with index $k>2$.\nThe purpose of this paper is to give an affirmative answer when $n=3$ and initial data have radial symmetry.\nTo state the result, let $\\Box u =u_{tt}-\\Delta u$ be the d'Alembertian in $\\Real^{3+1}$ and $Du=(\\nabla u,\\partial_t)$\nbe the space-time gradient of $u$. Standard notations are also $\\|\\cdot\\|_q$, for the norm in $L^q(\\Real^3)$ with $q\\in[1,\\infty]$,\nand $D^\\alpha$, for the partial derivative of integer order $\\alpha=(\\alpha_1,\\alpha_2,\\alpha_3,\\alpha_4)$.\n\nWe consider the dissipative nonlinear wave equation\n\\begin{eqnarray}\n\\label{main}\n \\Box u+|u_t|^{p-1}u_t=0,& & x\\in \\Real^3,\\quad t>0,\n \\end{eqnarray}\nwith the Cauchy data\n\\begin{eqnarray}\n\\label{data}\n u|_{t=0}=u_0,\\qquad u_t|_{t=0}=u_1,& & x\\in \\Real^3.\n \\end{eqnarray}\nOur main assumptions are $p\\geq 3$ and $(u_0,u_1)\\in H^{k}_{\\rm rad}(\\Real^3)\\times H^{k-1}_{\\rm rad}(\\Real^3)$, where\nthe radially symmetric Sobolev spaces are defined as\n\\begin{eqnarray*}\nH^k_{\\rm rad}(\\Real^3)=\\{u \\in H^k(\\Real^3): (x_i\\partial_{x_j}-x_j\\partial_{x_i})u(x)=0\\ \\hbox{ for } \\ 1\\leq i0$. 
Then\n\\begin{equation}\n\\label{scaling}\n\\|u_\\lambda(t)\\|_{\\dot{H}^k}=\\lambda^{(2-p)\/(p-1)+k-3\/2}\\|u(\\lambda t)\\|_{\\dot{H}^k},\n\\end{equation}\nso the critical space for well-posedness is $\\dot{H}^{k_c}(\\Real^3)\\times \\dot{H}^{k_c-1}(\\Real^3)$ with\n\\[\nk_c=3\/2+(p-2)\/(p-1).\n\\]\nHowever, the result of \\cite{LiSt} shows that $k\\in [1,2]$ is sufficient for any $p>1$.\nHere the monotone dissipation plays a decisive role, since (\\ref{main}) remains dissipative after applying $D^\\alpha$ with $|\\alpha|=1$:\n\\begin{equation}\n\\label{diff.main}\n\\Box D^\\alpha u+p|u_t|^{p-1}D^\\alpha u_t=0.\n\\end{equation}\nWe can use $u_t$ and $D^\\alpha u_t$ as multipliers for (\\ref{main}) and (\\ref{diff.main}), respectively, to obtain\n\\begin{eqnarray*}\n \\|Du(t)\\|^2_{2} + 2\\int_0^t \\|u_s(s)\\|^{p+1}_{p+1}\\: ds & = & \\|Du(0)\\|^2_{2},\\\\\n \\|DD^\\alpha u(t)\\|^2_{2} + 2p\\int_0^t \\||u_s(s)|^{(p-1)\/2}D^\\alpha u_s(s)\\|^2_{2}\\: ds & = & \\|DD^\\alpha u(0)\\|^2_{2}.\n\\end{eqnarray*}\nAs derivatives of order $k\\leq 2$ turn out to be {\\em a priori}\nbounded, the corresponding well-posedness results readily follow from the monotonicity method\nof \\cite{LiSt}.\n\nTwo differentiations of (\\ref{main}) yield an equation that is no longer dissipative.\nIt becomes a nontrivial task to derive uniform estimates for derivatives of order three and higher.\nSome information is provided by the invariant scaling, which predicts the non-concentration\nof second-order norms if $k=2$ and $(2-p)\/(p-1)+1\/2>0$ in (\\ref{scaling}). Thus, the range of subcritical exponents is $p<3$.\nFor the complementary range $p\\geq 3$ and $k>2$, the global well-posedness of (\\ref{main})\nin $H^k(\\Real^3)\\times H^{k-1}(\\Real^3)$ is a difficult problem.\nUntil recently, the proof has been known only in the critical case $p=3$ with radially symmetric data~\\cite{TUY}.\n\nThis paper shows that no critical exponent exists for the regularity of dissipative nonlinear wave equations with radial symmetry.\nThe development of singularities is prevented by the monotonicity of seminorms involving\nsecond-order derivatives. We actually obtain, after setting $r=|x|$ and\n$X_{\\pm}(r,t)=(\\partial_t \\pm \\partial_r)(r\\p_tu)$, that\n\\[\n\\max_{\\pm}\\sup_{r>0}|X_{\\pm}(r,t)| \\leq \\max_{\\pm}\\sup_{r>0}|X_{\\pm}(r,0)|, \\quad t\\geq 0.\n\\]\nSuch decreasing quantities exist only for linear and dissipative nonlinear wave equations\nin $\\Real^3$ with radial data. The proof is based on differentiating (\\ref{main}) in $t$ and rewriting the equation for $\\partial_tu$ as\n\\[\n(\\partial_t \\mp \\partial_r)X_{\\pm}+\\frac{p}{2}|\\partial_t u|^{p-1}(X_++X_-)=0.\n\\]\nA. Haraux \\cite{Ha} has applied the same idea to derive $W^{2,\\infty}(\\Real)\\times W^{1,\\infty}(\\Real)$ estimates in the one-dimensional case.\nIn higher dimensions with radial symmetry, similar $W^{1,\\infty}(\\Real)\\times L^{\\infty}(\\Real)$ estimates away from $r=0$ are used by\nJoly, Metivier and Rauch~\\cite{JoMeRa}, Liang~\\cite{Li} and Carles and Rauch~\\cite{CaRa}.\nSince the one-dimensional reduction does not work for general data, the global well-posedness in $H^{k}(\\Real^3)\\times H^{k-1}(\\Real^3)$, with $k>2,$\nremains completely open.\n\n\nThe rest of this paper is organized as follows. Section~2 contains\nseveral basic facts and estimates for the wave equation with radially symmetric data in\n$\\Real^3$. 
The regularity problem in $H^3_{\\rm rad}(\\Real^3)\\times H^{2}_{\\rm rad}(\\Real^3)$\nis studied in Section~3.\nIn the final Section~4, we establish Theorem~1.3 about well-posedness in $H^k_{\\rm\nrad}(\\Real^3)\\times H^{k-1}_{\\rm rad}(\\Real^3)$ with $k\\geq 4$ and\n$C^\\infty_{\\rm rad}(\\Real^3)\\times C^\\infty_{\\rm rad}(\\Real^3)$ with\ncompact support.\n\n\n\n\\section{Basic Estimates}\n\n\nFirst of all, we state the energy estimates and\nStrichartz estimates for the radial wave equation in $\\Real^3\\times \\Real$.\n\n\\begin{lem}\n\\label{lem02} Let $u$ be a solution of the Cauchy problem\n in $\\Real^3\\times \\Real$\n\\[\n\\Box u=F,\\qquad u|_{t=0}=u_0,\\qquad u_t|_{t=0}=u_1.\n\\]\n\n(a) For any source and initial data, $u$ satisfies the energy\nestimate\n\\begin{eqnarray*}\n\\|Du(t)\\|_{2} \\leq C(\\|\\nabla u_0\\|_{2}+\\|u_1\\|_{2})+C\\int_0^t\n\\|F(s)\\|_{2}\\: ds\n\\end{eqnarray*}\nwith an absolute constant $C$ for all $t\\geq 0.$\n\n\n(b) For radial source and initial data, $u$ is also a radial\nfunction which satisfies\n\\begin{eqnarray*}\n\\left(\\int_0^t \\|u(s)\\|_{\\infty}^2 \\: ds \\right)^{1\/2} \\leq\nC(\\|\\nabla u_0\\|_{2}+\\|u_1\\|_{2})+C\\int_0^t \\|F(s)\\|_2\\: ds\n\\end{eqnarray*}\nfor all $t\\geq 0.$\n\\end{lem}\n\nPart (a) can be found in Strauss~\\cite{St}, H\\\"{o}rmander~\\cite{Ho}, and Shatah and Struwe~\\cite{ShSt2}.\nEstimate (b) is the so-called ``radial Strichartz estimate\" in $3D$.\nKlainerman and Machedon~\\cite{KlMa} have found the\nhomogeneous version of (b) which implies the non- homogeneous estimate stated here.\n\nThe following is a collection of useful facts concerning local solvability and other\nproperties of problem (\\ref{main}), (\\ref{data}).\nThese results can be found in \\cite{St}, \\cite{Ho} and \\cite{ShSt2}.\n\n\\begin{lem}\n\\label{lem00} Let $k\\geq 3$ and\n $(u_0,u_1)\\in H^k(\\Real^3)\\times H^{k-1}(\\Real^3).$\n\n(a) There exists $T>0,$ such that problem (\\ref{main}), (\\ref{data})\nhas a unique solution $u$ satisfying\n\\[\nD^\\alpha u \\in C([0,T], H^{k-|\\alpha|}(\\Real^3)),\\qquad |\\alpha|\\leq\nk.\n\\]\nMoreover, we have\n\\[\n\\sup_{t\\in [0,T]}\\|D^\\alpha u(t)\\|_2\\leq C_k,\n\\]\nwhere $T$ and $C_k$ can be chosen to depend continuously on\n$\\|u_0\\|_{H^k}+\\|u_1\\|_{H^{k-1}}.$\n\n(b) The continuation principle holds: if $T_*=T_*(u_0,u_1)$ is the\nsupremum of all numbers $T$ for which (a) holds, then either\n$T_*=\\infty$ or\n\\[\n\\sup_{t\\in [0,T_*)}\\|D^\\alpha u(t)\\|_2=\\infty\n\\]\nfor some $\\alpha$ with $|\\alpha|\\leq k.$\n\n(c) If the data $(u_0,u_1)$ are radially symmetric, the solution\n$(u,u_t)$ is also radially symmetric.\n\n\\end{lem}\n\nNext, we state two preliminary estimates for problem (\\ref{main}),\n(\\ref{data}). These results, called the energy dissipation laws, are already discussed in the\nintroduction.\n\\begin{lem}\n\\label{lem01} Assume that\n $(u_0,u_1)\\in H^3(\\Real^3)\\times H^{2}(\\Real^3)$ and $|\\alpha|=1.$\nLet $u$ be the solution of problem (\\ref{main}), (\\ref{data})\nfor $t\\in [0,T]$, given by Lemma~\\ref{lem00}. 
Then\n\\begin{eqnarray*}\n\\frac{1}{2}\\|Du(t)\\|_2^2+\\int_0^t \\|u_s(s)\\|^{p+1}_{p+1}\\: ds & = & \\frac{1}{2}\\|Du(0)\\|_2^2,\\\\\n\\frac{1}{2}\\|DD^\\alpha u(t)\\|_2^2+p\\int_0^t \\||u_s(s)|^{(p-1)\/2}D^\\alpha\nu_s(s)\\|^2_{2}\\: ds & = & \\frac{1}{2}\\|DD^\\alpha u(0)\\|_2^2,\n\\end{eqnarray*}\nfor all $t\\in[0,T].$ Thus, the following norms of $u$ are uniformly bounded:\n\\begin{eqnarray*}\n& & \\|Du(t)\\|_2\\leq \\|Du(0)\\|_2,\\quad \\ \\|DD^\\alpha u(t)\\|_2\\leq\n\\|DD^\\alpha u(0)\\|_2,\\quad |\\alpha|\\leq 1,\\\\\n& & \\int_0^T(\\|u_s(s)\\|^{p+1}_{p+1}+\\||u_s(s)|^{(p-1)\/2}D^\\alpha u_s(s)\\|^{2}_{2})\\: ds\\leq \\|Du(0)\\|_2^2+\\|DD^\\alpha u(0)\\|_2^2.\n\\end{eqnarray*}\n\\end{lem}\n\n{\\bf Proof.}\nMultiplying equation (\\ref{main}) by $u_t$, we get\n\\begin{eqnarray*}\n0=(\\Box u+|u_t|^{p-1}u_t)u_t = \\left(\\frac{|Du|^2}{2}\\right)_t\n-\\hbox{div}(u_t\\nabla u) +|u_t|^{p+1}.\n\\end{eqnarray*}\nThe first-order energy identity follows from the integration on $\\Real^3$ and divergence theorem\nif $u(x,t)$ has compact support with respect to $x.$\nMore generally, we can\napproximate $(u_0(x),u_1(x))$ with compactly supported $C^\\infty$\nfunctions and use the finite propagation speed to show that the\nboundary integral of $\\hbox{div}(u_t\\nabla u)$ is zero. Property (a)\nin Lemma~\\ref{lem00} implies that the approximations will converge\nto the actual solution.\n\nSimilarly, we can differentiate equation (\\ref{main}) and multiply\nwith $D^\\alpha u_t(x,t)$ to show the second-order energy identity. Let us recall\nthat we work with solutions whose third-order derivatives belong to\n$L^2(\\Real^3).$\n \\qed\n\nFinally, we state the strong version of Strauss inequality. The original version\ncan be found as Radial Lemma 1 of Strauss \\cite{St1}.\n\n\\begin{lem}\n\\label{lem07}\n(Strong version of Strauss Inequality \\cite{St1}) Let $U\\in H^{1}_{\\rm rad}(\\Real^3)$. There exists a constant $C>0$,\nsuch that for every $R>0$\n\\[\nR |U(R)|\\le C\\|U\\|_{H^1(|x|>R)}\\rightarrow 0 \\ \\hbox{as}\\ R\\rightarrow\\infty.\n\\]\n\\end{lem}\n\n\\section{Global existence of radial solutions in $H^{3}\\times H^2$}\n\nThe following lemma is essential for the proof of Theorem~\\ref{th1}.\nThis approach to $L^\\infty$ estimates of second-order derivatives is borrowed from Haraux~\\cite{Ha}.\n\n\\begin{lem}\n\\label{lem:Haraux} Let $(u_0,u_1)\\in H^3_{\\rm rad}(\\Real^3)\\times H^{2}_{\\rm rad}(\\Real^3)$ and\nlet $u$ be the local solution of problem (\\ref{main}), (\\ref{data})\nconstructed in Lemma \\ref{lem00}. There exists a positive constant\n$C=C(\\|u_0\\|_{H^3},\\|u_1\\|_{H^2})$,\nsuch that\n\\begin{eqnarray}\n& & \\sup_{r\\in[0,\\infty)}r|\\p_t^2u(r,t)|\\le C,\\label{H_Est1}\\\\\n& & \\sup_{r\\in[0,\\infty)} |\\p_t u(r,t)|\\le C,\\quad\n\\sup_{r\\in[0,\\infty)}r|\\p_r\\p_{t}u(r,t)|\\le C\\label{H_Est2},\n\\end{eqnarray}\nfor all $t\\in[0,T_*)$.\n\\end{lem}\n{\\bf Proof.} Since the solution of (\\ref{main}) is radially symmetric, the equation becomes\n\\begin{eqnarray}\n\\label{main:rad}\n(\\p^2_{t}-\\partial^2_{r})(ru)+r|\\p_tu|^{p-1}\\p_tu=0,\n\\end{eqnarray}\nwhere $r=|x|$. 
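\nTo make this reduction explicit, recall that a radial function on $\\Real^3$ satisfies $\\Delta u=\\partial^2_{r}u+\\frac{2}{r}\\partial_ru=\\frac{1}{r}\\partial^2_{r}(ru)$, so multiplying (\\ref{main}) by $r$ and using\n\\[\nr\\Box u=\\partial^2_{t}(ru)-\\partial^2_{r}(ru)\n\\]\nyields (\\ref{main:rad}).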
Let us define\n\\[\nX_{\\pm}(r,t)=(\\partial_t \\pm \\partial_r)(r\\p_tu).\n\\]\nWe differentiate (\\ref{main:rad}) in $t$ to obtain an equation for $\\partial_t u$.\nThen we multiply through by $X_{\\pm}^{2k-1}$ with $k\\in\\N$ to obtain\n\\[\n\\frac{(\\partial_t-\\partial_r)}{2k}\\{X_+(r,t)\\}^{2k}\n+pr|\\partial_tu|^{p-1}\\partial^2_t u\\{X_{+}(r,t)\\}^{2k-1}=0\n\\]\nand\n\\[\n\\frac{(\\partial_t+\\partial_r)}{2k}\\{X_-(r,t)\\}^{2k}\n+pr|\\partial_tu|^{p-1}\\partial^2_tu\\{X_{-}(r,t)\\}^{2k-1}=0.\n\\]\nAdding the above two equations and using $X_+(r,t)+X_-(r,t)=2r\\partial^2_tu$, we have\n\\[\n\\frac{(\\partial_t-\\partial_r)}{2k}X_+^{2k}+\\frac{(\\partial_t+\\partial_r)}{2k}X_-^{2k}\n+p|\\partial_tu|^{p-1}\\frac{X_++X_-}{2}(X_+^{2k-1}+X_-^{2k-1})=0.\n\\]\nHere we remark that the following inequality holds for $a,b\\in \\R$ and $k \\in \\N$:\n\\[\n(a+b)(a^{2k-1}+b^{2k-1})\\ge0.\n\\]\nFrom this inequality with $a=X_+$ and $b=X_-$, we derive\n\\begin{eqnarray*}\n\\partial_t(X_+^{2k}+X_-^{2k})+\\partial_r(X_-^{2k}-X_+^{2k})\\le 0.\n\\end{eqnarray*}\nIt is easy to see that integration on $[0,R]\\times[0,t]$ implies\n\\begin{eqnarray*}\n\\int_{0}^{R}\\{X_+^{2k}(r,t)+X_-^{2k}(r,t)\\}dr & \\le &\\int_{0}^{R}\\{X_+^{2k}(r,0)+X_-^{2k}(r,0)\\}dr\\\\\n& & +\\int_{0}^{t}\\{X_+^{2k}(R,s)-X_-^{2k}(R,s)\\}ds,\n\\end{eqnarray*}\nsince $X_-^{2k}(0,t)-X_+^{2k}(0,t)=0$. We also notice that $|X_{\\pm}(R,t)|\\rightarrow 0$ as\n$R\\rightarrow \\infty$, which is a consequence of Lemma \\ref{lem07}. Thus, we get\n\\[\n\\|X_{+}(\\cdot,t)\\|_{L^{2k}(\\Real_+)}^{2k}+\\|X_{-}(\\cdot,t)\\|_{L^{2k}(\\Real_+)}^{2k}\n\\le\\|X_{+}(\\cdot,0)\\|_{L^{2k}(\\Real_+)}^{2k}+\\|X_{-}(\\cdot,0)\\|_{L^{2k}(\\Real_+)}^{2k}.\n\\]\nThe two terms on the right hand side satisfy\n\\[\n\\|X_{\\pm}(\\cdot,0)\\|_{L^{2k}(\\Real_+)}^{2k}\\leq \\|X_{\\pm}(\\cdot,0)\\|_{L^{\\infty}(\\Real_+)}^{2k-2}\\|X_{\\pm}(\\cdot,0)\\|_{L^2(\\Real_+)}^2,\n\\]\nwhere $\\|X_{\\pm}(\\cdot,0)\\|_{L^2(\\Real_+)}<\\infty$ by Lemma~\\ref{lem07}\nand $(u_0,u_1)\\in H^3_{\\rm rad}(\\Real^3)\\times H^2_{\\rm rad}(\\Real^3)$.\nThis observation yields a more convenient estimate:\n\\begin{eqnarray*}\n\\|X_{+}(\\cdot,t)\\|_{L^{2k}(\\Real_+)}^{2k}+\\|X_{-}(\\cdot,t)\\|_{L^{2k}(\\Real_+)}^{2k}&\\le& \\|X_+(\\cdot,0)\\|_{L^{\\infty}(\\Real_+)}^{2k-2}\\|X_+(\\cdot,0)\\|_{L^2(\\Real_+)}^2\\\\\n&&+\\|X_-(\\cdot,0)\\|_{L^{\\infty}(\\Real_+)}^{2k-2}\\|X_-(\\cdot,0)\\|_{L^2(\\Real_+)}^2.\n\\end{eqnarray*}\nLetting $k\\rightarrow\\infty$, we have that\n\\begin{eqnarray}\n\\label{boundedness}\n& & \\max\\{\\|X_+(\\cdot,t)\\|_{L^{\\infty}(\\Real_+)},\\|X_-(\\cdot,t)\\|_{L^{\\infty}(\\Real_+)}\\}\\nonumber\\\\\n& & \\le \\max\\{\\|X_+(\\cdot,0)\\|_{L^{\\infty}(\\Real_+)},\\|X_-(\\cdot,0)\\|_{L^{\\infty}(\\Real_+)}\\}.\n\\end{eqnarray}\n\nThe remaining part of the proof is standard.\nSince $X_{+}(r,t)+X_{-}(r,t)=2r\\p^2_{t}u$, claim (\\ref{H_Est1})\nfollows from (\\ref{boundedness}).\nWe can write $2\\p_r(r\\p_{t}u)=X_{+}(r,t)-X_{-}(r,t)$ and\n\\[\n2r|\\p_tu(r,t)|\\le \\int_{0}^{r}|X_{+}(\\rho,t)-X_{-}(\\rho,t)|d\\rho\\le Cr.\n\\]\nHence $\\|\\p_tu(\\cdot,t)\\|_{L^{\\infty}(\\Real_+)}\\leq C\/2<\\infty$,\nwhich is the first claim in (\\ref{H_Est2}).\nWe finally observe that $2r\\p_r\\p_t u(r,t)=X_{+}(r,t)-X_{-}(r,t)-2\\p_tu(r,t)$, so the second claim in\n(\\ref{H_Est2}) follows from the first one and (\\ref{boundedness}). 
The proof is complete.\n\\qed\n\n\\vskip 0.5truecm\n\n{\\bf Proof of Theorem~\\ref{th1}.} It is sufficient to show that $\\|u(t)\\|_{H^3}+\\|u_t(t)\\|_{H^2}$\ndoes not blow up as $t\\rightarrow T_*$, where $u$ is the local solution of (\\ref{main}) given by\nLemma~\\ref{lem00}. The first and second order norms of $u$ are {\\it a priori} bounded from Lemma~\\ref{lem01},\nso the global existence of $u$ is guaranteed by the next result about third order norms.\n\n\\begin{prop}\n\\label{pr05} Assume that\n $(u_0,u_1)\\in H^3_{\\rm rad}(\\Real^3)\\times H^{2}_{\\rm rad}(\\Real^3)$ and\nlet $u$ be the local solution of problem (\\ref{main}), (\\ref{data})\nconstructed in Lemma~\\ref{lem00}. There exists a positive constant $C=C(u_0,u_1,T_*)$\nsuch that\n\\begin{eqnarray*}\n\\sum_{|\\alpha|=3}\\|D^{\\alpha}u(t)\\|_2 \\le C(u_0,u_1,T_*)\n\\end{eqnarray*}\nfor all $t\\in [0,T_*).$\n\\end{prop}\n\n {\\bf Proof.} Differentiating (\\ref{main}) twice, we find that\n $D^\\alpha u$ is a weak solution of\n\\[\n \\Box D^\\alpha u+p|u_t|^{p-1}D^\\alpha u_t+p(p-1)|u_t|^{p-3}u_tD^{\\alpha_1}u_tD^{\\alpha_2}u_t=0,\n\\]\nwhere $\\alpha=\\alpha_1+\\alpha_2$ with $|\\alpha_1|=1$ and $|\\alpha_2|=1$.\nThus, $D^\\alpha u$ satisfies the energy\nestimate in Lemma~\\ref{lem02} (a):\n\\begin{eqnarray*}\n \\|DD^\\alpha u(t)\\|_2 & \\leq & C\\|DD^\\alpha u(0)\\|_2\\\\\n & & + C\\int_0^t \\|u_s^{p-1}(s)D^\\alpha u_s(s)\\|_2 \\: ds\\\\\n & &+C\\int_0^t \\|u_s^{p-2}(s) D^{\\alpha_1}u_s(s)D^{\\alpha_2}u_s(s)\\|_2\\: ds\n\\end{eqnarray*}\nfor $t\\in [0,T_*)$. We notice that\n\\[\n\\|u_s^{p-1}(s)D^\\alpha u_s(s)\\|_2\\le C \\|D^{\\alpha}u_s(s)\\|_2,\n\\]\nby (\\ref{H_Est1}) in Lemma~\\ref{lem:Haraux}, and\n\\[\n\\|u_s^{p-2}(s) D^{\\alpha_1}u_s(s)D^{\\alpha_2}u_s(s)\\|_2\\le C \\left\\|\\frac{D^{\\alpha_1}u_s(s)}{|\\cdot|}\\right\\|_2,\n\\]\nby (\\ref{H_Est1}) and (\\ref{H_Est2}) in Lemma~\\ref{lem:Haraux}. 
Thus,\n\\begin{eqnarray*}\n \\|DD^\\alpha u(t)\\|_2 & \\leq & C\\|DD^\\alpha u(0)\\|_2\\\\\n & & + C\\int_0^t \\|D^\\alpha u_s(s)\\|_2 \\: ds\\\\\n & &+C\\int_0^t \\left\\|\\frac{D^{\\alpha_1}u_s(s)}{|\\cdot|}\\right\\|_2 \\: ds.\n\\end{eqnarray*}\nThe third term on the right-hand side can be estimated by Hardy's inequality:\n\\begin{eqnarray*}\n \\|DD^\\alpha u(t)\\|_2 & \\leq & C\\|DD^\\alpha u(0)\\|_2\\\\\n & & + C\\int_0^t \\|D^\\alpha u_s(s)\\|_2 \\: ds\\\\\n & &+C\\int_0^t\\|DD^{\\alpha_1}u_s(s)\\|_2 \\: ds.\n\\end{eqnarray*}\nWe add these estimates for all $|\\alpha|=2$ to get\n\\[\n\\sum_{|\\alpha|=3}\\|D^{\\alpha}u(t)\\|_2 \\: \\leq C\\sum_{|\\alpha|=3}\\|D^{\\alpha}u(0)\\|_2\n+C\\int_{0}^{t}\\sum_{|\\alpha|=3}\\|D^\\alpha u(s)\\|_2ds.\n\\]\nMaking use of Gronwall's inequality, we finally have\n\\[\n\\sum_{|\\alpha|=3}\\|D^{\\alpha}u(t)\\|_2 \\le C\\sum_{|\\alpha|=3}\\|D^{\\alpha}u(0)\\|_2\ne^{Ct}.\n\\]\nThis proves Proposition~\\ref{pr05} and completes the proof of global existence.\n\\qed\n\n\n\n{\\bf Proof of Corollary~\\ref{cor011}.} We can now verify the uniform estimates of $L^2$ norms and square integrability of\n$L^{\\infty}$ norms on $[0,\\infty)$.\nNotice that inequality (b) in Lemma~\\ref{lem02} holds on any interval $[t_0,t]$ for all $|\\alpha|=1$.\nHence we get\n\\begin{eqnarray*}\n \\sum_{|\\alpha|=1}\\left(\\int_{t_0}^t \\|D^\\alpha u(s)\\|_\\infty^2\\: ds\\right)^{1\/2}\n & \\leq & C\\sum_{|\\alpha|=1}\\|DD^\\alpha u(t_0)\\|_2\\\\\n & & +C\\sum_{|\\alpha|=1}\\int_{t_0}^t \\|u_s^{p-1}(s)D^\\alpha u_s(s)\\|_2\\: ds.\n\\end{eqnarray*}\nIt is convenient to abbreviate the left-hand side as\n\\[\nN_1(t)=\\sum_{|\\alpha|=1}\\left(\\int_{t_0}^t \\|D^\\alpha u(s)\\|_\\infty^2\\: ds\\right)^{1\/2},\\qquad t\\geq t_0,\n\\]\nand estimate the integrand on the right-hand side as\n$$\\|u_s^{p-1}(s)D^\\alpha u_s(s)\\|_2\\le C\\|D^{\\alpha}u(s)\\|_{\\infty}\\|u_s^{(p-1)\/2}(s)D^\\alpha u_s(s)\\|_2,$$\nfrom (\\ref{H_Est2}). Applying the Cauchy inequality on $[t_0,t]$ to the initial estimate,\n\\begin{eqnarray*}\nN_1(t) & \\leq & C\\sum_{|\\alpha|=1}\\|DD^\\alpha u(t_0)\\|_2\\\\\n & & +CN_1(t)\\left(\\int_{t_0}^t \\|u_s^{(p-1)\/2}(s)Du_s(s)\\|_2^2\\: ds\\right)^{1\/2}.\n\\end{eqnarray*}\nThe above integral converges on $[0,\\infty)$ by Lemma~\\ref{lem01}. We can find $t_0$ such that\n\\begin{equation}\n\\label{conv}\n C\\left(\\int_{t_0}^t \\|u_s^{(p-1)\/2}(s)Du_s(s)\\|_2^2\\:\nds\\right)^{1\/2}\\leq \\frac{1}{2},\\qquad t\\geq t_0.\n\\end{equation}\nThus, $N_1(t)\\leq 2C\\sum_{|\\alpha|= 1}\\|DD^\\alpha u(t_0)\\|_2$ for\nall $t\\geq t_0.$ If $t<t_0$, the corresponding quantities are finite as well, since the same estimates apply on the bounded interval $[0,t_0]$, where Lemma~\\ref{lem01} and Lemma~\\ref{lem:Haraux} provide uniform bounds. This proves the corollary.\n\\qed\n\n\\section{Global existence of radial solutions in $H^{k}\\times H^{k-1}$, $k>3$}\n\nHere we consider equation (\\ref{main}) with a supercritical integer $p=2m+1$, $m\\in \\N$.\nFor $k\\geq 3$ and $t\\in[0,T_*)$, let us define\n\\[\nE_{k}(t)=\\sum_{|\\alpha|=k-1}\\|DD^{\\alpha}u(t)\\|_2,\\qquad\nN_{k}(t)=\\sum_{1\\le|\\beta|\\le k-1}\\left(\\int_{0}^{t}\\|D^{\\beta}u(s)\\|_{\\infty}^2 ds\\right)^{1\/2},\n\\]\nwhere $u$ is the local solution of (\\ref{main}), (\\ref{data}) constructed in Lemma \\ref{lem00}.\n\n{\\bf Proof of Theorem \\ref{th01}.} We use induction in $k$.\nFrom Theorem \\ref{th1}, we know that the following hold when $k=3$:\n\\begin{equation}\n\\label{ind}\nT_*=\\infty,\\quad \\sup_{t\\in[0,\\infty)}E_k(t)<\\infty,\\quad \\sup_{t\\in[0,\\infty)}N_k(t)<\\infty.\n\\end{equation}\nAssuming these claims for $k$, we will verify them for $k+1$.
The proof starts from applying $D^{\\alpha}$ to (\\ref{main})\nwith $p=2m+1$ and solving for the highest derivatives:\n\\begin{eqnarray}\n\\Box D^{\\alpha} u &= &c_{m,1}u_t^{2m}D^{\\alpha}u_t\\nonumber\\\\\n & &+c_{m,2}u_t^{2m-1}\\sum_{\\alpha_1+\\alpha_2=\\alpha}D^{\\alpha_1}u_tD^{\\alpha_2}u_t \\nonumber\\\\\n & &+c_{m,3}u_t^{2m-2}\\sum_{\\alpha_1+\\alpha_2+\\alpha_3=\\alpha}D^{\\alpha_1}u_tD^{\\alpha_2}u_tD^{\\alpha_3}u_t\\label{main:k-th}\\\\\n & & +\\cdots\\nonumber\\\\\n& &+c_{m,l}u_t^{2m+1-l}\\sum_{\\alpha_1+\\alpha_2+\\cdots+\\alpha_{l}=\\alpha}D^{\\alpha_1}u_t\\cdots D^{\\alpha_{l}}u_t=0, \\nonumber\n\\end{eqnarray}\nwhere $l=\\min\\{2m+1,k\\}$, $c_{m,i}$ are constants and $|\\alpha_i|\\geq 1$ for $i=1,\\cdots, l.$\nMaking use of Lemma \\ref{lem02} (a), we obtain\n\\begin{eqnarray}\n\\|DD^{\\alpha} u(t)\\|_2 &\\le& C\\|DD^{\\alpha}(0)\\|_2 +C\\int_{0}^{t} \\|u_s^{2m}D^{\\alpha}u_s\\|_2ds\\nonumber\\\\\n&&+\\sum_{\\alpha_1+\\alpha_2=\\alpha}C\\int_0^t \\|u_s^{2m-1}D^{\\alpha_1}u_sD^{\\alpha_2}u_s\\|_2ds\\nonumber\\\\\n&&+\\sum_{\\alpha_1+\\alpha_2+\\alpha_3=\\alpha}C\\int_0^t \\|u_s^{2m-2}D^{\\alpha_1}u_sD^{\\alpha_2}u_sD^{\\alpha_3}u_s\\|_2ds\\label{k-deriv1}\\\\\n&& +\\cdots\\nonumber\\\\\n&&+\\sum_{\\alpha_1+\\alpha_2+\\cdots+\\alpha_{l}=\\alpha}C\\int_0^t \\|u_s^{2m+1-l}D^{\\alpha_1}u_s\\cdots D^{\\alpha_{l}}u_s\\|_2\\}ds.\n\\nonumber\n\\end{eqnarray}\n\nOnly the first integrand involves a derivatives of order $k+1$. From (\\ref{H_Est2}), we get\n\\begin{equation}\n\\label{k-deriv2}\n\\|u_s^{2m}D^{\\alpha}u_s\\|_2\\leq C\\|u_s\\|_{\\infty}^2E_{k+1}(s).\n\\end{equation}\n\nIn the second integrand, there are two cases: $\\max\\{|\\alpha_1|,|\\alpha_2|\\}=k-1$\nand $\\max\\{|\\alpha_1|,|\\alpha_2|\\}\\leq k-2$. We rely on (\\ref{H_Est2}) to derive\n\\begin{eqnarray}\n\\|u_s^{2m-1}D^{\\alpha_1}u_sD^{\\alpha_2}u_s\\|_2\n&\\le& \\|u_s\\|_{\\infty}^{2m-2}\\|u_s\\|_{\\infty} \\|Du_s\\|_{\\infty}E_k(s)\\nonumber\\\\\n&\\le& C\\left(\\sum_{1\\le|\\beta|\\le2}\\|D^{\\beta}u(s)\\|_{\\infty}^2\\right)E_k(s)\\label{k-deriv3}\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\|u_s^{2m-1}D^{\\alpha_1}u_sD^{\\alpha_2}u_s\\|_2\n&\\le& \\|u_s\\|_{\\infty}^{2m-2}\\|u_s\\|_{\\infty}\\|D^{\\gamma}u_s\\|_{\\infty}E_{k-|\\gamma|+1}(s)\\nonumber\\\\\n&\\le& C\\left(\\sum_{1\\le|\\beta|\\le k-1}\\|D^{\\beta}u(s)\\|_{\\infty}^2\\right)E_{k-|\\gamma|+1}(s),\\label{k-deriv4}\n\\end{eqnarray}\nrespectively, where $\\gamma$ is such that $|\\gamma|=\\max\\{|\\alpha_1|,|\\alpha_2|\\}$.\n\\par\\noindent\n\nThe third integrand also admits $\\max\\{|\\alpha_1|,|\\alpha_2|,|\\alpha_3|\\}=k-2$.\nLet $\\gamma$ be the multiindex satisfying $|\\gamma|=k-2.$ Applying (\\ref{H_Est2}), we have\n\\begin{eqnarray}\n\\|u_s^{2m-2}D^{\\alpha_1}u_sD^{\\alpha_2}u_sD^{\\alpha_3}u_s\\|_2\n&\\le& \\|u_s\\|_{\\infty}^{2m-2}\\|D^{\\gamma}u_s\\|_{\\infty}\\|Du_s\\|_{\\infty}E_2(s)\\nonumber\\\\\n&\\le&C\\left(\\sum_{1\\le|\\beta|\\le k-1}\\|D^{\\beta}u(s)\\|_{\\infty}^2\\right).\\label{k-deriv5}\n\\end{eqnarray}\n\nFinally, we consider all integrands with $\\max\\{|\\alpha_{j}|: j=1,2,\\ldots, l\\}\\leq k-3.$\nSince $k\\ge4$, we can use the Sobolev embedding and (\\ref{H_Est2}) to obtain\n\\begin{eqnarray}\n&&\\|u_s^{2m+1-l}D^{\\alpha_1}u_sD^{\\alpha_2}u_s\\cdots D^{\\alpha_{j}}u_s\\|_2\\nonumber\\\\\n&\\le& \\|u_s\\|_{\\infty}^{2m+1-l}\\left(\\sum_{1\\le|\\beta|\\le k-2}\\|D^{\\beta}u(s)\\|_{\\infty}^2\\right)\n\\left(\\sum_{j=1}^{k}E_{j}(s)\\right)^{l-2}\\label{k-deriv6}\n\\\\\n&\\le& C\\left(\\sum_{1\\le|\\beta|\\le 
k-2}\\|D^{\\beta}u(s)\\|_{\\infty}^2\\right).\\nonumber\n\\end{eqnarray}\n\nWe combine the basic estimate (\\ref{k-deriv1}) with estimates (\\ref{k-deriv2})--(\\ref{k-deriv6}). This yields\n\\begin{eqnarray*}\n \\|DD^\\alpha u(t)\\|_2 & \\leq & C\\|DD^\\alpha u(0)\\|_2\n+ C\\int_0^t \\|u_s(s)\\|_{\\infty}^2 E_{k+1}(s) + CN_{k}(t)+C_k\n\\end{eqnarray*}\nfor all $t\\in [0,T_*)$, where the constant $C_k=C_k(\\|u_0\\|_{H^{k}},\\|u_1\\|_{H^{k-1}})$.\nAdding all such estimates with $|\\alpha|=k$ and using (\\ref{ind}), we have\n\\begin{eqnarray*}\n E_{k+1}(t)\\leq C_{k+1}+C\\int_0^t \\|u_s(s)\\|_\\infty^2 E_{k+1}(s) \\: ds,\n\\end{eqnarray*}\nfor some constant $C_{k+1}=C_{k+1}(\\|u_0\\|_{H^{k+1}},\\|u_1\\|_{H^{k}}).$ From the Gronwall inequality,\n\\[\nE_{k+1}(t)\\leq C_{k+1}\\exp\\left(\\int_{0}^{t}\\|u_s(s)\\|_\\infty^2ds\\right).\n\\]\nThis shows that $E_{k+1}(t)$ is uniformly bounded on every finite subinterval of\n$[0,T_\\ast).$ Hence, $u$ can be continued to a global solution, such that\n$\\sup_{t\\in [0,\\infty)}E_{k+1}(t)<\\infty.$\n\nIt remains to check the uniform estimates in $L_t^2L_x^{\\infty}$.\nWe apply Lemma \\ref{lem02} (b) to the solution of (\\ref{main:k-th}). The calculations are very similar to the above ones, so we\ngive only the final estimate:\n\\begin{eqnarray*}\n \\sum_{|\\alpha|=k}\\left(\\int_{0}^t \\|D^\\alpha u(s)\\|_\\infty^2\\: ds\\right)^{1\/2}\n & \\leq & C\\int_0^t \\|u_s(s)\\|_\\infty^2 E_{k+1}(s) \\: ds+ C_{k+1}.\n\\end{eqnarray*}\nNoticing that $\\sup_{t\\in[0,\\infty)}E_{k+1}(t)<\\infty$ and $\\sup_{t\\in[0,\\infty)}N_1(t)<\\infty$, we obtain\nthe final estimate in (\\ref{ind}) with $k+1$, i.e., $\\sup_{t\\in[0,\\infty)}N_{k+1}(t)<\\infty$.\nThe proof of higher regularity by induction is complete.\n\nSimilarly, we can show that $C^\\infty$ regularity is preserved during the evolution of compactly supported radial data.\n \\qed\n\n\n\n\n\n\n\n\n\n\\begin{center}\nACKNOWLEDGMENTS\n\\end{center}\nThe authors are very grateful to Professor Hideo Kubo for his useful comments.\n\n\\vskip 0.5truecm\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeebu b/data_all_eng_slimpj/shuffled/split2/finalzzeebu new file mode 100644 index 0000000000000000000000000000000000000000..9129fb59bdae72b5623810b8d1c0c764cc6e2c6f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeebu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIt is rarely possible to model realistic physical systems with exact\nsolutions to the equations of some general underlying theory.\nDespite this, many interesting problems deviate only slightly from a\nmodel problem that can be understood exactly. Such solutions are\nusually tractable only because of symmetry assumptions. Once they're\nunderstood, perturbation theory may be used to understand many\nsystems that ``almost'' satisfy the appropriate symmetry principle.\nWhile this statement has a clear intuitive meaning, quantifying it\ncan be difficult. It is also not necessarily obvious how -- or if it\nis meaningful -- to uniquely propagate a symmetry from solutions\nwhere it is exact into the perturbations that break it. These issues\nare particularly important in the context of conservation laws. As\nan example, one might want to know how to construct approximately\nconserved quantities that are unique generalizations of some exact\ncounterpart in a similar system. 
There would hopefully be a sense in\nwhich such quantities varied slowly for some class of small\nperturbations.\n\nSome steps towards understanding problems like these are explored\nhere in the context of affine collineations (of which Killing\nvectors are special cases) associated with curved spacetimes. While\nthese kinds of symmetries rarely exist, there are various senses in\nwhich approximate replacements can usually be introduced. One method\nfor finding vector fields that are ``almost Killing'' is to write\ndown an action whose value provides some sense for how nearly a\nparticular flow preserves the metric \\cite{Matzner}. There are\nimportant caveats to this interpretation, although the final result\nis that any vector field extremizing such an action satisfies a\nfairly simple generalization of Killing's equation. Various reasons\nhave been given for suggesting other extensions as well\n\\cite{AlmostStat,YanoBochner,Komar1,Komar2,YorkSym}. While any\ngenuine Killing vectors that might exist are solutions to all of\nthese equations, it is not usually clear how the remaining fields\nshould be interpreted. This problem arises even in flat spacetime.\n\nWhat form an approximate symmetry should take is highly dependent on\nits intended use. One application is in the estimation of a black\nhole's angular momentum. This requires finding rotational Killing\nfields on certain 2-spheres foliating a horizon. Various methods\nhave therefore been developed for defining such objects using only\nthe intrinsic geometry of these surfaces\n\\cite{S2Conformal,S2KT,S2Killing}. The concept of approximate\nKilling fields has also been adapted for use on initial data sets\nused in $3+1$ splits of Einstein's equation \\cite{DainInitData}.\n\nThe approach taken here is to define a set of vector fields in a\nfour dimensional volume that can all be viewed as analogs of known\nsymmetries in Minkowski spacetime. The physical interpretation is\nthat these fields may be viewed as generators of approximate\nsymmetries by a specified observer. Any sufficiently small region\nnear a particular point can be made to look nearly flat. Some\nstructures from Minkowski spacetime may be therefore be introduced\nvery near this point. Approximate symmetries that take advantage of\nthis fact are proposed in Sect. \\ref{Sect:Symmetries}. It is then\nshown in Sect. \\ref{Sect:GAC} that analogous objects can also be\nintroduced near an observer's worldline. These sorts of vector\nfields can actually be extended in a non-perturbative way to finite\nregions around the point or worldline from which they were\nconstructed. A well-defined subset provides a precise analog of the\nPoincar\\'{e} group. Translations, rotations, and boosts very near an\nobserver extend in a useful way to cover large portions of the\nspacetime. Any exact symmetries that may exist are included as\nspecial cases. Some connections to conservation laws are discussed\nin Sect. \\ref{Sect:ConsLaws}, and a simple example involving\ngravitational plane wave is finally presented in Sect.\n\\ref{Sect:Example}.\n\n\n\\subsection*{Exact symmetries}\n\nGiven some spacetime $(\\mathcal{M},g_{ab})$, there are several types\nof exact symmetries that may be discussed. The most common of these\ntake the form of vector fields whose associated diffeomorphisms\npreserve some geometric structure. The most ubiquitous examples are\nthe Killing fields. Their flows preserve the metric. 
Vector fields\n$Y^a_{\\mathrm{K}}$ with this property satisfy\n\\begin{equation}\n \\Lie_{Y_{\\mathrm{K}}} g_{ab} = 0 . \\label{KillingDef}\n\\end{equation}\nAny solutions that may exist can be used to find conserved\nquantities associated with geodesics or matter distributions\n\\cite{Wald}, identify mass centers \\cite{SchattStreub1}, simplify\nEinstein's equation \\cite{SimplifyEinstWSyms, ExactSolns}, classify\nits solutions \\cite{ExactSolns, Hall}, and so on.\n\nThis utility has (among other reasons) motivated various\ngeneralizations of \\eqref{KillingDef}. Perhaps the simplest of these\narises from considering flows that preserve the metric only up to\nsome constant factor:\n\\begin{equation}\n \\Lie_{Y_{\\mathrm{H}} } g_{ab} = 2 c g_{ab} .\n \\label{HomotheticDef}\n\\end{equation}\nAny $Y^a_{\\mathrm{H}}$ satisfying this equation with constant $c$ is\nknown as a homothetic vector field or homothety. Allowing the\ndilation factor $c$ to vary would define a conformal Killing vector.\nThese objects preserve the metric up to an arbitrary multiplicative\nfactor. The standard Killing vectors are special cases of either\nclass. Like them, conformal and homothetic vector fields usually do\nnot exist. Their presence can be very useful, however. The existence\nof a proper (non-Killing) homothetic vector is often used to define\na notion of geometric self-similarity, for example. Such objects\ntherefore appear in certain models of gravitational collapse and\ncosmology. They are also related to the appearance of critical\nphenomena in general relativity \\cite{SelfSimilar}.\n\nA simple generalization of the homotheties can be found by\nconsidering vector fields that satisfy\n\\begin{equation}\n \\nabla_a \\Lie_{Y_{\\mathrm{A}}} g_{bc} = 0 .\n \\label{AffineDef}\n\\end{equation}\nSolutions to this equation are known as affine collineations. They\nare the generators of infinitesimal affine transformations. Killing\nand homothetic vector fields are special cases. All of the affine\ncollineations may be interpreted geometrically as preserving the\nLevi-Civita connection. This means that $\\Lie_{Y_{\\mathrm{A}}}$ and\n$\\nabla_a$ commute when acting on arbitrary tensor fields. Geodesics\nand their affine parameters are also preserved under the action of\nany $Y_{\\mathrm{A}}^a$. Although this might seem to be a significant\ngeneralization of the Killing vector concept, solutions rarely\nexist. The only non-flat vacuum spacetimes that admit non-homothetic\naffine collineations are the \\textit{pp}-waves \\cite{RareAffine}.\nSimilarly, it has been shown that proper homotheties cannot exist in\nany asymptotically flat vacuum spacetime with positive Bondi mass\n\\cite{RareHomothety}. Despite these results, interesting affine\ncollineations can occasionally be identified in geometries that are\nnot Ricci-flat. Doing so provides a number of simplifications for\nvarious problems. Some of these derive from the fact that\n\\begin{equation}\n K_{ab} = \\Lie_{Y_{\\mathrm{A}}} g_{ab} \\label{AffineKT}\n\\end{equation}\nis a second-rank Killing tensor; i.e.\n\\begin{equation}\n \\nabla_{(a} K_{bc)} = 0. \\label{KTDef}\n\\end{equation}\nIt should be noted that not all symmetric tensors satisfying this\nequation can be derived from affine collineations. One\ncounterexample is the Killing tensor associated with Carter\nconstants in Kerr.\n\nTransformations generated by affine collineations can be viewed as\nmapping geodesics into geodesics. They preserve the affine\nparameters of each curve. 
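\nThis can be made concrete through an associated first integral of the geodesic equation. If $u^a$ is the tangent of a geodesic with affine parameter $\\lambda$, then\n\\[\n\\frac{\\rmd}{\\rmd\\lambda}\\big(u_aY_{\\mathrm{A}}^a\\big)=\\frac{1}{2}\\big(\\Lie_{Y_{\\mathrm{A}}}g_{ab}\\big)u^au^b=\\frac{1}{2}K_{ab}u^au^b,\\qquad\\frac{\\rmd}{\\rmd\\lambda}\\big(K_{ab}u^au^b\\big)=0,\n\\]\nso $u_aY_{\\mathrm{A}}^a-\\frac{\\lambda}{2}K_{ab}u^au^b$ is conserved. Obtaining this constant requires both that geodesics be mapped into geodesics and that $\\lambda$ remain affine.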
Dropping this latter requirement recovers\nthe so-called projective collineations. A precise definition may be\nfound in \\cite{Hall}, although it will not be needed here. One of\ntheir interesting consequences is that they leave invariant the\nprojective curvature tensor:\n\\begin{equation}\n \\Lie_{Y_{\\mathrm{P}}} \\big( R^{a}{}_{bcd} - \\frac{2}{3} \\delta^{a}_{[c} R_{d]b} \\big) = 0.\n\\end{equation}\nVector fields satisfying this equation are not always projective,\nhowever.\n\nThe list of definitions here could keep growing as new fields are\nadded that preserve more and more geometric structures.\nInterestingly, the quantities introduced so far all share a very\nuseful characteristic that does not easily generalize: the space of\nvector fields in each of the mentioned classes has finite dimension.\nFurthermore, any single element is uniquely determined by its value\nand the values of its first one or two derivatives at a single\npoint. These properties are well-known for Killing fields. Four\ndimensional spacetimes (which is all that will be considered here)\nadmit a maximum of 10 linearly independent Killing vectors. At most\none homothety can exist that is not itself Killing. The maximum\nnumber of (not necessarily proper) conformal Killing vectors is 15,\nand the affine collineations total no more than 20. Finally, the\nspace of projective collineations has a maximum of 24 dimensions\n\\cite{Hall}. Properties like these do not hold for vector fields\nwhose flows leave invariant the Riemann, Ricci, or Einstein\ncurvature tensors of a given spacetime. Despite this, the class of\napproximate symmetries introduced below is constructed so as to have\nfinite dimension. Any given member is fixed by its value together\nwith the values of its first derivatives at a point. Unlike the\nexact symmetries, these objects always exist at least in some finite\nregion. After fixing a reference frame, the space of approximate\nsymmetries has exactly 20 dimensions. Ten of these will be\nidentifiable as generalized Killing vectors, while the remaining ten\nwill be related to more general affine collineations.\n\nIt has already been remarked that the presence of Killing fields\nimplies the existence of various conserved quantities. The same can\nalso be said for more general collineations. Extensive discussions\nof exact symmetries and associated integrals of the geodesic\nequation may be found in \\cite{GeodesicConsts}. Many of these\nsymmetries are non-Noetherian in the sense that they preserve the\nequations of motion, but not the action. Despite this, their\npresence allows constants of motion to also be assigned to arbitrary\nstress-energy distributions satisfying Einstein's equation\n\\cite{KatzinLevine, Collinson}. As will be discussed in Sect.\n\\ref{Sect:ConsLaws}, generalizations of these quantities can be\nassociated with any approximate affine collineations that are\nidentified.\n\n\\section{Symmetries near a point}\\label{Sect:Symmetries}\n\nGeneric symmetries in general relativity are usually discussed in\nthe context of asymptotically flat spacetimes. There then exist\napproximate notions of isometry that improve as one approaches\ninfinity \\cite{Wald}. Generalizations of these ideas also exist for\ngeometries with somewhat more complicated (but still highly\nsymmetric) asymptotic behavior like that of anti-de Sitter\n\\cite{AsymptAdS}. 
The assumption of a simple limiting form for the\ngeometry makes it convenient to invariantly describe certain\nproperties of a spacetime in terms of ``measurements at infinity.''\nQuantities that may be identified as a spacetime's total energy or\nangular momentum appear naturally, for example.\n\nWhile useful in many contexts, these ideas do not always translate\ninto observations made by physical observers. Measurements like\nthose expected from gravitational wave detectors do come very close\nto fitting into this formalism. Others can require a more local\ndescription. In particular, it is sometimes important to understand\nwhat given observers would experience inside strongly curved regions\nof spacetime. Abstracting the concept of an observer to a timelike\nworldline $\\Gamma$, vector fields may be introduced in (say) some\nconvex neighborhood $W$ of $\\Gamma$ that act like approximations to\nKilling fields or more general collineations. This is always\npossible, and the symmetries these vectors generalize become exact\non $\\Gamma$ itself. Limiting collineations can evidently be useful\non scales that are either very large or very small. It is much less\nclear how to easily describe systems at intermediate distances.\n\n\\subsection{Motivation}\n\nThe idea of a local symmetry just outlined is best introduced by\nfirst considering vector fields $\\psi^a(x,\\gamma)$ that generalize\nthe affine collineations in some reasonable way near a fixed\nreference point $\\gamma$. Let these objects be defined inside a\nnormal neighborhood $N$ of this point. It is intuitively obvious\nthat vector fields may always be chosen such that $\\nabla_a \\LieS\ng_{bc}$ vanishes at $\\gamma$. While this condition is reasonable to\nrequire, it is not very interesting by itself. Much more can be said\nif each vector field in this class is uniquely fixed in $N$ by\nknowledge of\n\\begin{equation}\n \\psi^\\sfa(\\gamma,\\gamma) , \\qquad \\nabla_\n \\sfa \\psi^{\\mathsf{b}}(\\gamma,\\gamma) . \\label{InitDataFirst}\n\\end{equation}\nIt will be assumed that each $\\psi^a$ depends linearly on this\ninitial data with no degeneracy. This implies that there are always\n$4+16=20$ linearly independent vector fields defined about any given\npoint in a four dimensional spacetime. Note that indices in\n\\eqref{InitDataFirst} have been written in a sans-serif font to\nemphasize that they are associated with the preferred point\n$\\gamma$.\n\nApproximate affine collineations with the appropriate properties may\nbe constructed by projecting symmetries of the tangent space\n$T_\\gamma N$ into $N$ using the exponential map. Consider the linear\ntransformations\n\\begin{equation}\n X^\\sfa \\rightarrow X^\\sfa + \\epsilon B_{{\\mathsf{b}}}{}^{a} X^{\\mathsf{b}}\n \\label{XFormVect}\n\\end{equation}\nof vectors $X^\\sfa$ in this space parameterized by an arbitrary\ntensor $B_{{\\mathsf{b}}}{}^{\\sfa}$. Being a vector space, $T_\\gamma N$ has a\npreferred origin. Adding another term to \\eqref{XFormVect} to shift\nthat origin would be awkward. The translational symmetries that such\na procedure might produce are certainly important, although\ngenerating them requires a more subtle treatment described below.\nFor now, consider only the vector fields\n\\begin{equation}\n \\Psi^\\sfa = X^{\\mathsf{b}} B_{{\\mathsf{b}}}{}^{\\sfa}\n \\label{AffineTangent}\n\\end{equation}\nassociated with homogeneous transformations of the given form. 
These\nclearly satisfy\n\\begin{equation}\n \\frac{\\partial}{\\partial X^\\sfa} \\Lie_\\Psi g_{{\\mathsf{b}} {\\mathsf{c}}} = 2 \\frac{\\partial}{\\partial\n X^\\sfa} \\left( g_{{\\mathsf{d}}({\\mathsf{c}}}(\\gamma) \\frac{\\partial}{\\partial X^{{\\mathsf{b}})}} \\Psi^{{\\mathsf{d}}} \\right)\n =0,\n\\end{equation}\nso they are affine within $T_\\gamma N$ in the sense of\n\\eqref{AffineDef}. Such transformations can be made to induce shifts\n$x \\rightarrow x + \\epsilon \\psi$ in spacetime points associated\nwith vectors $X^\\sfa$ via\n\\begin{equation}\n x = \\exp_\\gamma X .\n \\label{ExpMap}\n\\end{equation}\n\nA simple relation between $\\psi^a$ and $\\Psi^\\sfa$ is found by\nintroducing Synge's world function $\\sigma(x,y) = \\sigma(y,x)$. This\ntwo-point scalar returns one-half of the squared geodesic distance\nbetween its arguments. The assumption that $N$ be a normal\nneighborhood of $\\gamma$ ensures that $\\sigma(x,\\gamma)$ is uniquely\ndefined for all points $x$ in this region. Many of its properties\nare reviewed in \\cite{Synge, PoissonRev}. Most importantly for the\nproblem at hand, the first derivative of the world function\neffectively inverts the exponential map. Any set $\\{ \\gamma, x, X^a\n\\}$ satisfying \\eqref{ExpMap} is related via\n\\begin{equation}\n X_{\\sfa} = - \\sigma_{\\sfa}(x,\\gamma),\n \\label{XDef}\n\\end{equation}\nwhere the common shorthand $\\sigma_\\sfa = \\nabla_\\sfa \\sigma =\n\\partial \\sigma\/ \\partial \\gamma^\\sfa$ has been used. The\nright-hand side of (\\ref{XDef}) generalizes the concept of a\nseparation vector between two points. It is useful in that a\nstraightforward expansion shows that linear transformations of the\nform (\\ref{XFormVect}) effectively shift spacetime points by an\namount parameterized with a vector $\\psi^{a}(x,\\gamma)$ satisfying\n\\begin{equation}\n\\Psi^{\\sfa} = - \\sigma^{\\sfa}{}_{a} \\psi^{a} . \\label{PsiTopsi}\n\\end{equation}\nIf the various components of $X^\\sfa$ as defined in (\\ref{XDef}) are\nused as coordinates, the bitensor $-\\sigma^{\\sfa}{}_{a} = - g^{\\sfa\n{\\mathsf{b}}}\n\\partial^2 \\sigma\/ \\partial x^a \\partial \\gamma^{{\\mathsf{b}}}$\nreduces to the identity. Components of $\\psi^a$ and $\\Psi^{\\sfa}$\nare therefore identical in normal coordinate systems of this type.\nIn general, it is useful to introduce\n\\begin{equation}\n H^{a}{}_{\\sfa} = [ - \\sigma^{\\sfa}{}_{a} ]^{-1} \\label{HDef}\n\\end{equation}\nas the matrix inverse of the operator appearing in \\eqref{PsiTopsi}.\nThis always exists in the regions considered here. Using\n\\eqref{AffineTangent} now shows that\n\\begin{equation}\n \\psi^{a}(x,\\gamma) = - H^{a}{}_{\\sfa} \\sigma_{{\\mathsf{b}}} B^{{\\mathsf{b}} \\sfa} .\n \\label{PsiPart}\n\\end{equation}\nHolding $\\gamma$ fixed, this equation defines a 16-parameter family\nof vector fields generated by $B_{\\sfa {\\mathsf{b}}}=\\nabla_\\sfa \\psi_{\\mathsf{b}}\n(\\gamma,\\gamma)$. Every such $\\psi^a$ vanishes at $\\gamma$. It also\nsatisfies \\eqref{AffineDef} at this point. In flat spacetime, these\nvector fields coincide everywhere with exact affine collineations.\n\nNot all such symmetries are included in (\\ref{PsiPart}), however.\nThe four translational Killing fields are missing. These can be\nobtained by considering transformations that directly shift the base\npoint $\\gamma$. Perturbations of this form cannot leave $X^\\sfa$\nfixed, as the initial and final vectors must be elements of\ndifferent spaces. 
Introducing some $A^\\sfa$, we therefore demand\nthat $X^\\sfa$ be parallel-transported along the curve that $\\gamma$\nfollows under the one-parameter family of transformations\n\\begin{equation}\n \\gamma \\rightarrow \\gamma + \\epsilon A .\n\\end{equation}\nUsing this together with \\eqref{XDef} and the homogeneous\ntransformation \\eqref{XFormVect} generates the full 20-parameter\nfamily of approximate affine collineations\n\\begin{equation}\n \\psi^{a} = H^{a}{}_{\\sfa} ( \\sigma^{\\sfa}{}_{{\\mathsf{b}}} A^{\\mathsf{b}} - \\sigma_{\\mathsf{b}} B^{{\\mathsf{b}} \\sfa}\n ) . \\label{JacobiFirstDer}\n\\end{equation}\nGiven any $A^{\\sfa}$ and $B^{\\sfa {\\mathsf{b}}}$, these objects all satisfy\n\\begin{equation}\n \\nabla_\\sfa \\LieS g_{{\\mathsf{b}} {\\mathsf{c}}} (\\gamma) = 0 . \\label{JacobiAffine}\n\\end{equation}\nThe initial data\n\\begin{equation}\n A^\\sfa = \\psi^\\sfa(\\gamma,\\gamma) , \\qquad B^{\\sfa {\\mathsf{b}}} =\n \\nabla^\\sfa\n \\psi^{\\mathsf{b}}(\\gamma,\\gamma) \\label{InitialData}\n\\end{equation}\ndetermine $\\psi^a(x,\\gamma)$ throughout $N$. In Minkowski spacetime,\none finds that\n\\begin{equation}\n \\psi^\\alpha = A^\\alpha + (x-\\gamma)^\\beta B_{\\beta}{}^{\\alpha}\n\\end{equation}\nin the usual coordinates. These coincide exactly with all of the\naffine collineations in this geometry.\n\nIn general, vector fields satisfying (\\ref{AffineDef}) in a curved\nspacetime also have the form (\\ref{JacobiFirstDer}) for some\n$A^\\sfa$ and $B^{\\sfa {\\mathsf{b}}}$. This is most easily seen by noting\nthat vector fields with the given form have been obtained before as\ngeneral solutions to the equation of geodesic deviation (also known\nas the Jacobi equation) \\cite{Dix70a}\n\\begin{equation}\n \\sigma^{b} \\sigma^{c} ( \\nabla_{b} \\nabla_{c} \\psi_{a} -\n R_{abc}{}^{d} \\psi_{d} ) = 0. \\label{Jacobi}\n\\end{equation}\nFor any fixed $x$, this is an ordinary differential equation along\nthe geodesic connecting that point to $\\gamma$. Solving it\nrepeatedly for all geodesics in $N$ passing through this origin\nreproduces the vector fields (\\ref{JacobiFirstDer}). It is clear\nthat such solutions always exist as long as the geometry is\nreasonably smooth. These are the spacetime's Jacobi fields about\n$\\gamma$. The bitensors $H^{a}{}_{\\sfa} \\sigma^{\\sfa}{}_{{\\mathsf{b}}}$ and\n$H^{a}{}_{\\sfa} \\sigma_{\\mathsf{b}}$ are known as Jacobi propagators.\n\nSolutions to the geodesic deviation equation effectively map one\ngeodesic into another while preserving the affine parameters of both\ncurves. It was noted above that this is the defining characteristic\nof affine collineations. The difference is that such vector fields\nmust map \\textit{every} geodesic into another geodesic. This\nintuitive argument makes it clear that all affine collineations --\nor Killing fields as special cases -- must be solutions of\n(\\ref{Jacobi}). The proof follows from noting that second\nderivatives of any exact affine collineation $Y_{\\mathrm{A}}^a(x)$\nmust satisfy\n\\begin{equation}\n \\nabla_{b} \\nabla_{c} Y^a_{\\mathrm{A}} = - R_{bdc}{}^{a} Y^d_{\\mathrm{A}} .\n \\label{Del2Affine}\n\\end{equation}\nThis result is clear from \\eqref{Del2General}, and is actually\nequivalent to \\eqref{AffineDef}. Substituting it into (\\ref{Jacobi})\nshows that all affine collineations are indeed special cases of\nJacobi fields. As expected, $Y^a_{\\mathrm{A}}$ satisfies the\ngeodesic deviation equation along all geodesics; even those that do\nnot pass through $\\gamma$. 
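\nThe substitution is worth displaying once: for an affine collineation, \\eqref{Del2Affine} gives\n\\[\n\\sigma^{b}\\sigma^{c}\\big(\\nabla_{b}\\nabla_{c}Y_{a}-R_{abc}{}^{d}Y_{d}\\big)=-\\sigma^{b}\\sigma^{c}\\big(R_{bdca}+R_{abcd}\\big)Y^{d}=0,\n\\]\nwhere the final equality follows from the pair-interchange and antisymmetries of the Riemann tensor once the contraction with $\\sigma^{b}\\sigma^{c}$ is carried out.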
This point illustrates precisely how\n\\eqref{Jacobi} generalizes the equation defining an affine\ncollineation. It is simply \\eqref{Del2Affine} contracted into\n$\\sigma^b \\sigma^c$. Alternatively, the Jacobi equation is\nequivalent to \\eqref{LieSig7}.\n\nTo summarize, the following is now evident:\n\\begin{theorem}\\label{Thm:JacobiBasic}\n Let $N$ be a normal neighborhood of some point $\\gamma$. Define a Jacobi\n field $\\psi^a(x,\\gamma)$ to be a solution of \\eqref{Jacobi} throughout this region. It is explicitly given by\n \\eqref{JacobiFirstDer} for some initial data with the form\n \\eqref{InitialData}. The set of all Jacobi fields about a fixed $\\gamma$ forms a 20-dimensional group in\n four spacetime dimensions. Each element satisfies $\\mathcal{L}_\\psi \\nabla =0$ at\n $\\gamma$. Furthermore, all affine collineations are members of\n this group.\n\\end{theorem}\nThese properties motivate our identification of the Jacobi fields as\ngeneralizations of affine collineations near $\\gamma$.\n\nFurther results that strengthen this decision are derived in the\nappendix. Even though few Jacobi fields are genuine affine\ncollineations, all can be interpreted as exact symmetries of certain\nquantities connected with the spacetime's geometric structure:\n\\begin{theorem}\\label{Thm:JacobiExactSyms}\n Given a Jacobi field defined as in theorem \\ref{Thm:JacobiBasic},\n it is always true that\n \\begin{equation}\n \\LieS \\sigma^\\sfa = \\LieS \\sigma^a = \\LieS \\sigma^{\\sfa}{}_{a} =\n \\LieS H^{a}{}_\\sfa =0 ,\n \\end{equation}\n where one of the arguments in each of these equations is taken to be the\n origin $\\gamma$.\n\\end{theorem}\nLie derivatives on two-point tensor fields are defined to act\nindependently on each argument. See \\eqref{LieSig1}, for example.\nQuantities appearing in this theorem are all important in Riemann\nnormal coordinate systems parameterizing arbitrary points $x$ by the\ncomponents of $X^\\sfa = -\\sigma^\\sfa (x,\\gamma)$. In terms of a more\ndirect interpretation of the Jacobi fields as approximately\nsatisfying \\eqref{AffineDef}, Lie derivatives of the metric with\nrespect to an arbitrary $\\psi^a$ are strongly constrained by the\nidentities \\eqref{LieSig6}-\\eqref{LieSig9}.\n\nStatements of this sort do not exhaust the connections between the\nJacobi equation and a spacetime's symmetries. There is a sense in\nwhich higher-rank Killing tensors that may exist are also solutions\nto the geodesic deviation equation \\cite{CavigliaBasic}.\nFurthermore, projective collineations can be shown solve an\ninhomogeneous form of (\\ref{Jacobi}) proportional to $\\sigma_a$\n\\cite{Caviglia}. These observations will not be discussed any\nfurther here, although it is possible that they could be used to\ngeneralize the present framework.\n\n\\subsection{Special cases} \\label{Sect:SpecialJacobi}\n\nIt is often useful to single out a subset of the Jacobi fields\ndistinguished by antisymmetric $B_{\\sfa {\\mathsf{b}}} = \\nabla_\\sfa\n\\psi_{\\mathsf{b}}$. These may be said to generalize only the Killing fields\nof a given spacetime. Distinguishing them with a subscript ``K,''\nthey clearly satisfy $\\Lie_{\\psi_{\\mathrm{K}}} g_{\\sfa {\\mathsf{b}}}(\\gamma)\n= 0$ as well as (\\ref{JacobiAffine}). Such objects form a\n10-dimensional group that may be thought of as a generalization of\nthe Poincar\\'{e} group. They have been suggested before as useful\ngenerators for the linear and angular momenta of extended matter\ndistributions \\cite{Dix70a, Dix74, Dix79}. 
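\nIn Minkowski spacetime, for example, \\eqref{JacobiFirstDer} with antisymmetric $B_{\\sfa {\\mathsf{b}}}$ reduces in the usual coordinates to\n\\[\n\\psi^{\\alpha}_{\\mathrm{K}}=A^{\\alpha}+(x-\\gamma)^{\\beta}B_{\\beta}{}^{\\alpha},\\qquad B_{(\\alpha\\beta)}=0,\n\\]\nwhich are precisely the generators of translations, rotations, and boosts about $\\gamma$.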
Fixing a hypersurface\n$\\Sigma$ that passes through $\\gamma$ and the worldtube of some\nwell-behaved spatially-compact stress-energy distribution $T^{ab}$,\nlet\n\\begin{equation}\n p_\\sfa(\\gamma,\\Sigma) A^\\sfa + \\frac{1}{2} S_{\\sfa{\\mathsf{b}}}(\\gamma,\\Sigma) B^{[\\sfa{\\mathsf{b}}]} = \\int_\\Sigma T^{a}{}_{b}\n \\psi^b_{\\mathrm{K}} \\rmd S_a .\n \\label{DixMomenta}\n\\end{equation}\nVarying the 10 free parameters here determines the four linear\nmomenta $p^\\sfa$ and six angular momenta $S^{\\sfa{\\mathsf{b}}} =\nS^{[\\sfa{\\mathsf{b}}]}$. Explicit formulae are easily found using\n(\\ref{JacobiFirstDer}). They coincide with standard definitions in\nflat spacetime (where all $\\psi^a_{\\mathrm{K}}$ are Killing).\n\nTheorem \\ref{Thm:JacobiExactSyms} is easily expanded for these\nvector fields:\n\\begin{corollary}\\label{Thm:JacobiExactSymsKilling}\n Given any Jacobi field $\\psi^a_{\\mathrm{K}}$ satisfying\n $\\mathcal{L}_{\\psi_{\\mathrm{K}}} g_{\\sfa {\\mathsf{b}}} =0$,\n \\begin{equation}\n \\mathcal{L}_{\\psi_{\\mathrm{K}}} \\sigma =\n \\mathcal{L}_{\\psi_{\\mathrm{K}}} \\sigma_\\sfa =\n \\mathcal{L}_{\\psi_{\\mathrm{K}}} \\sigma_a =\n \\mathcal{L}_{\\psi_{\\mathrm{K}}} \\sigma_{\\sfa a} =\n \\mathcal{L}_{\\psi_{\\mathrm{K}}} H^{a \\sfa} = 0.\n \\end{equation}\n Again, one argument in each of these equations is assumed to be $\\gamma$.\n\\end{corollary}\nThis follows from the well-known identity \\cite{Synge, PoissonRev}\n\\begin{equation}\n \\sigma^\\sfa \\sigma_\\sfa = \\sigma^{a} \\sigma_{a} = 2 \\sigma ,\n \\label{SigIdent2}\n\\end{equation}\nand its first derivative\n\\begin{equation}\n \\sigma^{\\sfa}{}_{a} \\sigma^{a} = \\sigma^\\sfa .\n \\label{SigIdent1}\n\\end{equation}\nBy definition, $\\sigma = g_{\\sfa {\\mathsf{b}}}(\\gamma) X^\\sfa X^{\\mathsf{b}}\/2$ is\none-half of the squared geodesic distance between $\\gamma$ and $x$.\nKilling-type Jacobi fields based at $\\gamma$ therefore drag both\narguments of $\\sigma(x,\\gamma)$ in such a way that distances are\npreserved.\n\nIt is also possible to identify Jacobi fields that act like\nhomotheties near $\\gamma$. These are distinguished by letting\n\\begin{equation}\n B_{(\\sfa {\\mathsf{b}})} = \\frac{1}{2} \\Lie_{\\psi_{\\mathrm{H}}} g_{\\sfa {\\mathsf{b}}}(\\gamma) = c g_{\\sfa\n {\\mathsf{b}}} (\\gamma). \\label{HomB}\n\\end{equation}\nAs in \\eqref{HomotheticDef}, $c$ is an arbitrary constant. For\nsimplicity, the purely Killing components of some prospective\n$\\psi^a_{\\mathrm{H}}$ can be removed by setting $A_\\sfa = B_{[\\sfa\n{\\mathsf{b}}]} = 0$ and $c \\neq 0$. Substitution into (\\ref{JacobiFirstDer})\nthen shows that\n\\begin{equation}\n \\psi^{a}_{\\mathrm{H}} = - c H^{a}{}_{\\sfa} \\sigma^\\sfa = c\n \\sigma^{a}. \\label{JacobiHom}\n\\end{equation}\nThis second equality follows from contracting $\\delta^{a}_{b} =\n-H^{a}{}_{\\sfa} \\sigma^{\\sfa}{}_{b}$ with $\\sigma^{b}$ and using\n\\eqref{SigIdent1}.\n\nThe simplicity of (\\ref{JacobiHom}) is interesting, although perhaps\nnot surprising. It is consistent with the interpretation of\n$-\\sigma^\\sfa$ as a ``separation vector'' between $x$ and $\\gamma$.\nAs has been noted before, $\\sigma^{\\sfa}{}_{b} \\rightarrow -\n\\delta^{\\alpha}_{\\beta}$ in a normal coordinate system. The\ncomponents of $\\sigma^{a}$ would therefore be equal to $X^\\alpha$.\nThe normal coordinate functions themselves act as components of an\napproximately homothetic vector field. This is unique up to a\nconstant factor and the addition of Killing-type Jacobi fields. 
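\nThis behavior at the origin can be checked directly from standard coincidence limits: as $x\\rightarrow\\gamma$ one has $\\nabla_{a}\\sigma_{b}\\rightarrow g_{ab}$ and $\\nabla_{a}\\nabla_{b}\\sigma_{c}\\rightarrow 0$, so that\n\\[\n\\Lie_{\\psi_{\\mathrm{H}}}g_{ab}=2c\\nabla_{(a}\\sigma_{b)}\\rightarrow 2c\\,g_{ab},\\qquad \\nabla_{c}\\Lie_{\\psi_{\\mathrm{H}}}g_{ab}\\rightarrow 0,\n\\]\nand the field $\\psi^{a}_{\\mathrm{H}}=c\\sigma^{a}$ is indeed homothetic at $\\gamma$ in the sense of \\eqref{HomB} and \\eqref{JacobiAffine}.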
It\ngeneralizes the dilations of flat spacetime.\n\nGeneralized Killing tensors of various types can also be generated\nfrom Jacobi fields. In analogy to \\eqref{AffineKT}, let\n\\begin{equation}\n \\mathcal{K}_{ab} = \\LieS g_{ab} .\n \\label{KTApprox1}\n\\end{equation}\nThese objects exactly satisfy \\eqref{KTDef} at $\\gamma$, and\npresumably approximate it near this point. It is straightforward to\nwrite down other objects which also have this property. For example,\ntwo (possibly identical) Killing-type Jacobi fields $\\psi^a$ and\n$\\bar{\\psi}^a$ can be used to define\n\\begin{equation}\n \\mathcal{K}'_{ab} = \\psi_{(a}\n \\bar{\\psi}_{b)} .\n \\label{KTApprox2}\n\\end{equation}\nThis expression clearly generalizes to approximate Killing tensors\nof any rank. Exact second-rank Killing tensors probably exist that\ncannot be written in either of these forms, so it is unclear how\nuseful they are.\n\nVery near $\\gamma$, it is possible to approximate the Jacobi fields\nexplicitly. This will be especially useful in Sect.\n\\ref{Sect:ConsLaws} below, where a notion of gravitational current\nis introduced with respect to a given vector field. Consider a\nTaylor expansion of $\\LieS g_{ab}$ in powers of $X^\\sfa$. The first\ntwo terms in this series are trivially obtained from\n\\eqref{JacobiAffine} and \\eqref{InitialData}. Better approximations\ninvolve third and higher derivatives of $\\LieS g_{ab}$ in the limit\n$x \\rightarrow \\gamma$. The lowest order interesting terms can be\nfound from \\eqref{Del2Lieg} and \\eqref{Del3Lieg}. Making use of\n\\eqref{Sig4Coinc}, the final results are that\n\\begin{equation}\n \\fl \\quad \\LieS g_{ab} \\simeq \\sigma^{\\sfa}{}_{a} \\sigma^{{\\mathsf{b}}}{}_{b} \\big[ \\LieS g_{\\sfa {\\mathsf{b}}} - \\frac{1}{3} X^{\\mathsf{c}} X^{\\mathsf{d}} ( \\LieS R_{\\sfa {\\mathsf{c}} {\\mathsf{b}} {\\mathsf{d}}}\n + \\frac{1}{2} X^{\\mathsf{f}} \\LieS \\nabla_{\\mathsf{f}} R_{\\sfa {\\mathsf{c}} {\\mathsf{b}} {\\mathsf{d}}}) \\big] + \\Or(X^4) , \\label{LieGExpandJacobi}\n\\end{equation}\nand\n\\begin{eqnarray}\n \\fl \\quad \\nabla_c \\LieS g_{ab} \\simeq - \\frac{2}{3} \\sigma^{\\sfa}{}_{a} \\sigma^{{\\mathsf{b}}}{}_{b}\n \\sigma^{{\\mathsf{c}}}{}_{c} X^{\\mathsf{d}} \\big[ R_{{\\mathsf{d}} {\\mathsf{c}}\n (\\sfa}{}^{{\\mathsf{f}}} \\LieS g_{{\\mathsf{b}}) {\\mathsf{f}}} + g_{{\\mathsf{f}}(\\sfa} \\LieS R_{{\\mathsf{b}}) {\\mathsf{d}} {\\mathsf{c}}}{}^{\\mathsf{f}} + \\frac{3}{4}\n X^{\\mathsf{f}} \\nonumber\n \\\\\n \\fl \\qquad ~ \\times \\big( \\frac{1}{3} g_{\\mathsf{h} (\\sfa} \\LieS\n \\nabla^{\\mathsf{h}} R_{{\\mathsf{b}}) {\\mathsf{d}} {\\mathsf{f}} {\\mathsf{c}}} - g_{\\mathsf{h} \\sfa } \\LieS \\nabla_{\\mathsf{d}} R_{{\\mathsf{f}} ({\\mathsf{b}}\n {\\mathsf{c}})}{}^{\\mathsf{h}} - g_{\\mathsf{h} {\\mathsf{b}}} \\LieS \\nabla_{\\mathsf{d}}\n R_{{\\mathsf{f}} (\\sfa {\\mathsf{c}})}{}^{\\mathsf{h}} \\big) \\big] + \\Or(X^3).\n \\label{LieGGradJacobi}\n\\end{eqnarray}\nLie derivatives here are evaluated at $\\gamma$, so they only involve\n$A_\\sfa$, $B_{\\sfa {\\mathsf{b}}}$, $g_{\\sfa {\\mathsf{b}}}$, $R_{\\sfa {\\mathsf{b}} {\\mathsf{c}}\n{\\mathsf{d}}}$, and its first two derivatives. The factors of\n$\\sigma^{\\sfa}{}_{a}$ in front of these equations are used as a\nconvenient means for converting tensors at $\\gamma$ into tensors at\n$x$. 
It is perhaps more typical to use parallel propagators\n$g^{\\sfa}{}_{a}$ for this purpose \\cite{PoissonRev}, although the\naforementioned simplicity of $\\sigma^{\\sfa}{}_{a}$ in normal\ncoordinates makes it an attractive alternative. There is very little\ndifference at low orders regardless. $\\sigma^{\\sfa}{}_{a}$ can be\nfreely interchanged with $-g^{\\sfa}{}_{a}$ in\n\\eqref{LieGGradJacobi}. This is also possible in\n\\eqref{LieGExpandJacobi} when $B_{(\\sfa {\\mathsf{b}})} = 0$.\n\nApproximations like these are not useful over regions where the\ncurvature changes significantly, or on length scales approaching the\ncurvature radius. An alternative approach is to expand the various\nbitensors built from $\\sigma$ using its definition as an integral\nalong a geodesic. Simplifications can often be introduced by\nignoring all terms nonlinear in the Riemann tensor. A general method\nfor this type of weak-field procedure may be found in \\cite{Synge,\ndeFelice}. Specific details involved with expanding the Jacobi\nfields in this way will not be given here.\n\n\\section{Symmetries near a worldline}\\label{Sect:GAC}\n\nThe Jacobi fields just discussed generalize the idea of a Killing\nfield or more general affine collineation in a normal neighborhood\nof a given point. This is useful for some purposes, although it does\nnot have a very clear physical interpretation. The choice of origin\nshould presumably correspond to a preferred point, although there\nare few of these that might arise in practice. It is often more\nuseful to base the idea of an approximate symmetry off of a given\ntimelike worldline rather than a single point. This could correspond\nto the path of some observer. In some cases, the physical system\npicks out preferred reference frames. A binary star system\nexperiencing no mass transfer can admit three center-of-mass frames\n(rigorously defined in \\cite{Dix70a, EhlRud,CM}), for example. Two\nof these are associated with the individual stars, while the third\ndescribes the system as a whole. There are also preferred observers\nin most cosmological models. Expressing a system's dynamics in terms\nof quantities associated with these frames has an obvious physical\ninterpretation. The distinction between approximate symmetries\ndefined with respect to a point versus a worldline is closely\nanalogous to the one between Riemann and Fermi normal coordinate\nsystems.\n\nThe concept of an observer here will be taken to mean a timelike\nworldline $\\Gamma$ together with a set of hypersurfaces $\\Sigma(s)$\nthat foliate a surrounding worldtube $W$. It will be assumed that\neach of these hypersurfaces is a normal neighborhood of the point\n$\\gamma(s)$ where it intersects the central worldline. Each of them\nis therefore formed by a collection of radially-emanating geodesics\nof (usually) finite length. The most typical examples would be the\npast-directed null geodesics or the spacelike set orthogonal to\n$\\dot{\\gamma}^\\sfa = \\rmd \\gamma^\\sfa\/\\rmd s$ at $\\gamma(s)$. Other\nchoices are possible, however. Regardless, a worldline and foliation\ntogether will be referred to as an observer's reference frame.\n\n\\subsection{A family of Jacobi fields}\n\nSymmetries adapted to a particular frame can be constructed using a\none-parameter family of Jacobi fields $\\psi^a(x,\\gamma(s))$. Any\nsuch family is fixed by specifying $A_\\sfa(s)$ and $B_{\\sfa{\\mathsf{b}}}(s)$\nas defined in \\eqref{InitialData}. 
An optimal way of connecting\ninitial data between different points on $\\Gamma$ therefore must be\nfound. Before considering this problem, it is first useful to\ncollapse the family of Jacobi fields into an ordinary vector field\n$\\xi^a(x)$. Let $\\tau(x)$ be defined so as to identify which leaf of\nthe foliation includes an arbitrary point $x$ in the worldtube $W$.\nMore concisely, it always satisfies $x \\in \\Sigma(\\tau(x))$. The\nassumption that each hypersurface is a normal neighborhood of an\nappropriate point on $\\Gamma$ implies that $\\tau$ is always\nsingle-valued. Now set\n\\begin{equation}\n \\xi^a (x) = \\psi^a(x, \\gamma(\\tau(x))) .\n \\label{XiDef}\n\\end{equation}\nThe generalized affine collineations (GACs) to be defined below will\nbe of this form for a particular class of families\n$\\psi^a(x,\\gamma(s))$.\n\nOne potential application for a generalized symmetry constructed\nusing a particular frame is in the definition of quantities that\nmight be approximately conserved as one moves along $\\Gamma$. As an\nexample, consider integrals of conserved stress-energy tensors\nsimilar to (\\ref{DixMomenta}). One might define the component of\nmomentum generated by a $\\xi^a$ of the form \\eqref{XiDef} to be\n\\begin{equation}\n \\ItP_\\xi(s) = \\int_{\\Sigma(s)} T^{a}{}_{b}\n \\xi^b \\rmd S_a . \\label{PDef}\n\\end{equation}\nThe evolution of this quantity crucially depends on how the\nparameters $A_\\sfa(s)$ and $B_{\\sfa {\\mathsf{b}}}(s)$ in (\\ref{InitialData})\nare connected along $\\Gamma$. It is well-known that $\\ItP_\\xi$ is\nconserved if $\\xi^a$ is Killing and no matter flows across the\nboundary of the worldtube. In this case, initial data for the\none-parameter family of Jacobi fields must satisfy the Killing\ntransport (KT) equations on $\\Gamma$:\n\\numparts\n\\begin{eqnarray}\n \\mathrm{D}A_\\sfa\/\\rmd s &= \\dot{\\gamma}^{\\mathsf{b}} B_{{\\mathsf{b}} \\sfa}\n \\label{KTA}\n \\\\\n \\mathrm{D} B_{\\sfa {\\mathsf{b}}} \/ \\rmd s &= - R_{\\sfa {\\mathsf{b}}\n {\\mathsf{c}}}{}^{\\mathsf{d}} \\dot{\\gamma}^{\\mathsf{c}} A_{\\mathsf{d}} .\n \\label{KTB}\n\\end{eqnarray}\n\\endnumparts\nIf there exists an exact Killing vector $Y^a_{\\mathrm{K}}$ such that\n$A^\\sfa =Y^\\sfa_{\\mathrm{K}}$ and $B^{\\sfa {\\mathsf{b}}} = \\nabla^\\sfa\nY^{\\mathsf{b}}_{\\mathrm{K}}$ at a given $s = s_0$, relations like these will\nhold for all $s$. Furthermore, momenta $p^\\sfa$ and $S^{\\sfa {\\mathsf{b}}}$\nidentified using (\\ref{DixMomenta}) would satisfy\n\\begin{equation}\n 0 = (\\dot{p}_{\\sfa} - \\frac{1}{2} S^{{\\mathsf{b}} {\\mathsf{c}}} R_{{\\mathsf{b}} {\\mathsf{c}} \\mathsf{d} \\sfa}\n \\dot{\\gamma}^{\\mathsf{c}}) Y^{\\sfa}_{\\mathrm{K}} + \\frac{1}{2} (\n \\dot{S}_{\\sfa {\\mathsf{b}}} - 2 p_{[\\sfa} \\dot{\\gamma}_{{\\mathsf{b}}]} )\n \\nabla^{\\sfa} Y^{\\mathsf{b}}_{\\mathrm{K}} . \\label{Papapetrou}\n\\end{equation}\nIf there were a full complement of ten Killing vectors, all possible\nversions of this expression would together be equivalent to the\nPapapetrou equations. More generally, Papapetrou's result is only an\napproximation. Any deviations can be understood using a general\n10-parameter family of possibly approximate isometries. 
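\nThe content of \\eqref{KTA} and \\eqref{KTB} is easiest to see in flat spacetime, where the curvature term is absent: $B_{\\sfa {\\mathsf{b}}}$ is then constant along $\\Gamma$ and\n\\[\nA_{\\sfa}(s)=A_{\\sfa}(s_0)+\\big(\\gamma(s)-\\gamma(s_0)\\big)^{{\\mathsf{b}}}B_{{\\mathsf{b}}\\sfa},\n\\]\nwhich is just the restriction to $\\Gamma$ of the exact affine collineation fixed by the initial data at $\\gamma(s_0)$. In curved spacetime, the curvature term in \\eqref{KTB} is what generally prevents the transported data from corresponding to an exact symmetry.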
One might\nexpect these corrections to be minimized if $A_\\sfa$ and $B_{\\sfa\n{\\mathsf{b}}}$ always satisfy the KT equations even when no exact Killing\nfields exist.\n\\begin{definition}\\label{Def:GAC}\n Let a generalized affine collineation (GAC) $\\xi^a(x)$ associated with a\n reference frame $\\{\\Gamma, \\Sigma\\}$ be derived from a family of\n Jacobi fields via \\eqref{XiDef}. Individual elements of the family and their\n first derivatives satisfy the Killing transport equations \\eqref{KTA} and \\eqref{KTB} on $\\Gamma$.\n\\end{definition}\n\nAlthough this definition was motivated by the properties of\nconserved momenta in very particular spacetimes, it also arises from\nmuch more general (if less physical) arguments. Consider all\npossible initial data for vectors built from Jacobi fields using\n\\eqref{XiDef}. It is reasonable to suppose that any GAC should be\nexactly affine on $\\Gamma$; i.e.\n\\begin{equation}\n \\nabla_\\sfa \\LieX g_{{\\mathsf{b}} {\\mathsf{c}}} |_\\Gamma = 0.\n \\label{GACAffine}\n\\end{equation}\nIt can also be expected that $A_\\sfa$ and $B_{\\sfa {\\mathsf{b}}}$ fix\n$\\xi^a$ and its first derivatives on $\\Gamma$ just as they do for\n$\\psi^a$. Generalizing \\eqref{InitialData}, let\n\\begin{equation}\n A^{\\sfa}(s) = \\xi^\\sfa(\\gamma(s)) , \\qquad B^{\\sfa {\\mathsf{b}}}(s) =\n \\nabla^{\\sfa} \\xi^{\\mathsf{b}}(\\gamma(s)) . \\label{InitialDataXi}\n\\end{equation}\n\nWe start with the second of these constraints. Directly\ndifferentiating (\\ref{XiDef}) implies that\n\\begin{equation}\n \\LieX g_{ab} = \\LieS g_{ab} + 2 \\dot{\\psi}_{(a} \\nabla_{b)} \\tau ,\n \\label{LieXivsPsi}\n\\end{equation}\nwhere the Lie derivative with respect to $\\psi^a(x, \\tau)$ on the\nright-hand side is understood (as usual) to involve only the first\nargument of this vector field. It will be assumed that the foliation\nis always sufficiently smooth that derivatives of $\\tau$ remain\nwell-defined throughout $W$, and on $\\Gamma$ in particular.\nEvaluating \\eqref{LieXivsPsi} on the central worldline requires\nknowledge of $\\dot{\\psi}^a(\\gamma,\\gamma)$. Coincidence limits like\nthese are commonly denoted with brackets. For example,\n\\begin{equation}\n [\\dot{\\psi}^a](\\gamma) = \\lim_{x \\rightarrow \\gamma}\n \\frac{ \\partial}{\\partial s} \\dot{\\psi}^a(x,\\gamma(s)) .\n \\label{CoincDef}\n\\end{equation}\nThe convention of using different fonts for indices referring to $x$\nand $\\gamma$ cannot be consistently applied in expressions like\nthis. No confusion should arise, however. Limits like\n\\eqref{CoincDef} are easily computed using Synge's rule \\cite{Synge,\nPoissonRev}. In this case,\n\\begin{eqnarray}\n [\\dot{\\psi}^a] &= \\dot{\\gamma}^{\\mathsf{b}} [\\nabla_{\\mathsf{b}} \\psi^a] \\nonumber\n \\\\\n &= \\dot{\\gamma}^{\\mathsf{b}} ( \\nabla_{\\mathsf{b}} [\\psi^a] - \\delta^b_{\\mathsf{b}} [\\nabla_b \\psi^a]\n ) \\nonumber\n \\\\\n &= \\mathrm{D} A^a\/\\rmd s - \\dot{\\gamma}^{\\mathsf{b}} B_{{\\mathsf{b}}}{}^{a} .\n \\label{DotPsi}\n\\end{eqnarray}\nIt follows that (\\ref{InitialDataXi}) holds for all initial data iff\n\\eqref{KTA} is satisfied.\n\nThe other KT equation arises from enforcing \\eqref{GACAffine}.\nNoting (\\ref{JacobiAffine}) and (\\ref{KTA}), it must be true that\n\\begin{equation}\n [\\nabla_a \\dot{\\psi}^b] = 0.\n \\label{DelDotPsiReq}\n\\end{equation}\n$[\\ddot{\\psi}^a]$ also has to vanish, although this term is equal to\n$-\\dot{\\gamma}^b [ \\nabla_b \\dot{\\psi}^a]$. 
Requiring\n\\eqref{DelDotPsiReq} is therefore sufficient. Using the same type of\nprocedure as in \\eqref{DotPsi} shows that\n\\begin{equation}\n [\\nabla_a \\dot{\\psi}_b] = \\mathrm{D} B_{ab} \/ \\rmd s+ R_{abc}{}^{d}\n \\dot{\\gamma}^c A_d . \\label{DelDotPsi}\n\\end{equation}\nDeriving this is straightforward other than noting that\n\\eqref{Del2Affine} -- although mentioned for exact affine\ncollineations -- also holds for any Jacobi field at its origin.\nRegardless, the conclusion is that the second Killing transport\nequation \\eqref{KTB} ensures that $\\nabla_\\sfa \\LieX g_{{\\mathsf{b}}{\\mathsf{c}}}$\nvanishes everywhere on $\\Gamma$.\n\nNoting that the KT equations have the same significance for general\naffine collineations as they do for ordinary Killing fields\n\\cite{Hall}, it easily follows that\n\\begin{theorem}\\label{Thm:GACBasic}\n The class of all generalized affine collineations associated with\n a given reference frame forms a 20-dimensional group in four\n spacetime dimensions. Every GAC satisfies $\\LieX \\nabla = 0$ on\n $\\Gamma$, and all exact affine collineations are members of this\n class.\n\\end{theorem}\nThis is closely related to theorem \\ref{Thm:JacobiBasic}. It\nstrongly supports definition \\ref{Def:GAC} and the intuitive\nidentification of GACs with approximate symmetries inside $W$.\n\nAt least in principle, finding GACs associated with a particular\nreference frame is straightforward. Suppose that $A_\\sfa(s_0)$ and\n$B_{\\sfa {\\mathsf{b}}}(s_0)$ are given as initial data at some $\\gamma_0 =\n\\gamma(s_0)$. The goal is then to determine the $\\xi^a(x)$\nsatisfying \\eqref{InitialDataXi} at the appropriate point. This is\ndone by first applying the KT equations to the given parameters\nalong $\\Gamma$ from $\\gamma_0$ to $\\gamma(\\tau(x))$. The geodesic\ndeviation equation (\\ref{Jacobi}) is then integrated between this\nlatter point and $x$ using the initial conditions\n(\\ref{InitialData}). Both of these operations simply require finding\nthe solutions to well-behaved ordinary differential equations.\nAlternatively, $\\xi^a$ could also be obtained using the explicit\nexpression \\eqref{JacobiFirstDer} together with the KT equations and\n\\eqref{XiDef}.\n\nOur prescription for generalizing arbitrary affine collineations may\nappear somewhat awkward. Killing transport equations are being\napplied along $\\Gamma$, while the Jacobi equation is used on\ngeodesics intersecting that worldline. These two procedures are not\nas different as they might appear. Trying to use Killing transport\neverywhere would generically lead to inconsistencies. Derivatives of\nthe field expected from the KT equations would not usually match the\nderivatives computed from $\\xi^a$ itself. Only the tangential\ncomponents of these derivatives can be consistently fixed by\nintegrating ordinary differential equations along a collection of\nradial geodesics. Weakening the Killing transport equations to take\nthis into account exactly reproduces the geodesic deviation\nequation. This may be seen by rewriting \\eqref{Jacobi} as a pair of\nfirst order differential equations on geodesics connecting $x$ to\n$\\gamma(\\tau(x))$. Denote the unit tangent vector to one such\ngeodesic by $u^a(l)$. Also set $\\hat{A}^a = \\psi^a$ and\n$\\hat{B}_{ab} = \\nabla_a \\psi_b$ everywhere. 
It is then\nstraightforward to show that \\numparts\n\\begin{eqnarray}\n \\mathrm{D} \\hat{A}_a\/\\rmd l &= u^b \\hat{B}_{ba}\n \\\\\n u^a \\mathrm{D} \\hat{B}_{ab}\/\\rmd l &= - R_{abc}{}^{d} u^a u^c\n \\hat{A}_d.\n\\end{eqnarray}\n\\endnumparts\nThe first of these equations has exactly the same form as\n\\eqref{KTA}, while the second is essentially \\eqref{KTB} contracted\nwith $u^a$. Killing and Jacobi transport are therefore very closely\nrelated operations. The latter does not uniquely propagate\n$\\hat{B}_{ab}$ from given initial data, so it is weaker. These\nremarks also clarify in what sense Jacobi fields or GACs approximate\n\\eqref{AffineDef} or \\eqref{Del2Affine}.\n\n\n\\subsection{Special cases and properties of GACs}\n\nFrom a physical perspective, momenta like \\eqref{PDef} should be\ndefinable even in the absence of any exact isometries. This is most\nconveniently done with a particular class of GACs that generalize\nonly the Killing fields. In analogy to the Killing-type Jacobi\nfields discussed in Sect. \\ref{Sect:SpecialJacobi}, suppose that\n$B_{(\\sfa {\\mathsf{b}})}$ vanishes on at least one point of $\\Gamma$. It\nimmediately follows from \\eqref{KTB} that it must actually vanish\neverywhere. The Killing-type GACs therefore form a 10-dimensional\ngroup of vector fields satisfying\n\\begin{equation}\n \\Lie_{\\xi_\\mathrm{K}} g_{\\sfa {\\mathsf{b}}} |_\\Gamma = 0,\n \\label{GACKilling}\n\\end{equation}\nas well as \\eqref{GACAffine}. They may be thought of as generalizing\nthe Poincar\\'{e} symmetries of flat spacetime near a given observer.\n\nIt is also possible to single out GACs that are approximately\nhomothetic in the sense that\n\\begin{equation}\n \\Lie_{\\xi_{\\mathrm{H}}} g_{\\sfa {\\mathsf{b}}} |_\\Gamma = 2 c g_{\\sfa\n {\\mathsf{b}}}. \\label{GACHomothetic}\n\\end{equation}\nThis requires setting $B_{(\\sfa {\\mathsf{b}})} = c g_{\\sfa {\\mathsf{b}}}$. While\nalways possible, the remaining components of the initial data cannot\nbe explicitly solved for except in the case when $\\Gamma$ is a\ngeodesic. It is then self-consistent to choose $B_{[\\sfa {\\mathsf{b}}]}=0$. The\nobvious way of doing this is to normalize $\\dot{\\gamma}^\\sfa$ to\nunity and set\n\\begin{equation}\n A_\\sfa = c (s - \\bar{s} ) \\dot{\\gamma}_\\sfa\n\\end{equation}\nfor some constant $\\bar{s}$. It is easily verified that the given\nparameters satisfy the KT equations. One homothetic-type GAC\nassociated with an affinely parameterized geodesic therefore has the\nform\n\\begin{equation}\n \\xi^a_{\\mathrm{H}} = c \\big[ \\sigma^a + (\\tau - \\bar{s}) H^{a}{}_{\\sfa}\n \\sigma^{\\sfa}{}_{{\\mathsf{b}}} \\dot{\\gamma}^{\\mathsf{b}} \\big] .\n\\end{equation}\nAs usual, Killing-type Jacobi fields may be added to this without\nspoiling \\eqref{GACHomothetic}. It should also be emphasized that\nhomothetic-type GACs are not restricted to geodesic frames. This is\njust the case where closed-form solutions of the KT equations can be\nobtained by inspection.\n\nMany of the properties derived for Jacobi fields in Sect.\n\\ref{Sect:Symmetries} and the appendix can be carried over at least\npartially for the GACs. For example,\n\\begin{equation}\n \\LieX \\sigma^\\sfa = \\LieS \\sigma^\\sfa = 0 \\label{GACLie1}\n\\end{equation}\nif the arguments are of the form $(x, \\gamma(\\tau(x)))$ and $\\xi^a$\nand $\\psi^a$ are related via \\eqref{XiDef}. This may be interpreted\nas stating that spatial Fermi coordinates are preserved under flows\ngenerated by $\\xi^a$. 
Since the hypersurfaces $\\Sigma(s)$ can be\ndescribed as a set of geodesics intersecting $\\gamma(\\tau(x))$, it\nwill always be true that $\\sigma^a \\nabla_a \\tau =0$. This means\nthat\n\\begin{theorem}\\label{Thm:GACExactSyms}\n Given a general GAC $\\xi^a$, $\\LieX \\sigma^\\sfa = \\LieX \\sigma^a\n =0$ when the arguments of these equations are as in\n \\eqref{GACLie1}. Killing-type GACs $\\xi^a_{\\mathrm{K}}$ also satisfy $\\Lie_{\\xi_{\\mathrm{K}}} \\sigma = \\Lie_{\\xi_{\\mathrm{K}}} \\sigma_\\sfa =\n 0$ with the same restriction.\n\\end{theorem}\n\nThe identity \\eqref{LieSig6} serves to constrain Lie derivatives of\nthe metric with respect to Jacobi fields. A direct analog of this\nequation for an arbitrary GAC would involve an additional term.\nDespite this, contracting the result with $\\sigma^b$ leads to the\nsimple conclusion\n\\begin{equation}\n \\sigma^a \\sigma^b \\LieX g_{ab} = 2 \\sigma^\\sfa \\sigma^{\\mathsf{b}} B_{(\\sfa\n {\\mathsf{b}})} .\n\\end{equation}\n``Purely radial'' components of $\\LieX g_{ab}$ therefore vanish for\nKilling-type GACs.\n\nMany other results can be carried over in similar ways. One that is\nof particular interest is the behavior of $\\LieX g_{ab}$ or\n$\\nabla_a \\LieX g_{bc}$ near $\\Gamma$. As with the Jacobi fields, it\nis possible to see how close a GAC comes to being affine as its\nreference worldline is approached. Analogs of\n\\eqref{LieGExpandJacobi} and \\eqref{LieGGradJacobi} may be obtained\nusing expansions like \\eqref{LieXivsPsi} and the identity\n\\eqref{Del2General}. Simplifying terms with the Killing transport\nequations, the lowest order correction to \\eqref{LieGExpandJacobi}\nis\n\\begin{equation}\n \\Lie_{(\\xi-\\psi)} g_{ab} \\simeq \\frac{2}{3} \\sigma^{\\sfa}{}_{a}\n \\sigma^{{\\mathsf{b}}}{}_{b} \\nabla_{(\\sfa} \\tau g_{{\\mathsf{b}}) {\\mathsf{f}}}\n \\dot{\\gamma}^{\\mathsf{h}}\n X^{\\mathsf{c}} X^{\\mathsf{d}} \\LieX R_{\\mathsf{h} {\\mathsf{c}} {\\mathsf{d}}}{}^{{\\mathsf{f}}} + \\Or(X^3).\n \\label{LieGExpandGAC}\n\\end{equation}\nSimilarly, the first interesting change to \\eqref{LieGGradJacobi}\nhas the form\n\\begin{eqnarray}\n \\fl \\qquad \\nabla_a \\Lie_{(\\xi-\\psi)} g_{bc} \\simeq - \\frac{4}{3} \\sigma^{\\sfa}{}_a\n \\sigma^{{\\mathsf{b}}}{}_b \\sigma^{{\\mathsf{c}}}{}_c X^{\\mathsf{d}} \\dot{\\gamma}^{\\mathsf{f}}\n \\big[ g_{\\mathsf{h} ({\\mathsf{b}}} \\nabla_{{\\mathsf{c}})} \\tau (\\delta^{\\mathsf{l}}_\\sfa - \\dot{\\gamma}^{\\mathsf{l}} \\nabla_\\sfa \\tau) \\LieX R_{{\\mathsf{f}} (\\mathsf{l} {\\mathsf{d}})}{}^{\\mathsf{h}} \\nonumber\n \\\\\n \\qquad ~ + \\frac{1}{2} \\nabla_\\sfa \\tau (R_{{\\mathsf{d}} {\\mathsf{f}}\n ({\\mathsf{b}}}{}^{\\mathsf{h}} \\LieX g_{{\\mathsf{c}}) \\mathsf{h}} - g_{\\mathsf{h} ({\\mathsf{b}}} \\LieX R_{{\\mathsf{c}}) {\\mathsf{f}} {\\mathsf{d}}}{}^{\\mathsf{h}} ) \\big] + O(X^2).\n \\label{LieGGradGAC}\n\\end{eqnarray}\nAs before, the magnitudes of these terms depend on how close $\\xi^a$\nis to being a symmetry of the Riemann tensor on the observer's\nworldline.\n\n\\section{Mechanics and conservation laws}\n\\label{Sect:ConsLaws}\n\nIt has already been remarked that one of the main applications of\nexact symmetries in physics is to the formulation of conservation\nlaws. These take several forms. Perhaps the most basic are those\nassociated with a spacetime's geodesics. It is sometimes possible\nfor such curves to be at least partially parameterized by a number\nof constants associated with geometric symmetries. 
More\ninterestingly, conservation laws can also be associated with\nextended matter distributions. Assuming only that stress-energy\ntensors satisfy\n\\begin{equation}\n \\nabla_a T^{ab} = 0 \\label{StressCons}\n\\end{equation}\ntends to lead to the definition of slowly-varying parameters like\nthose discussed in connection with \\eqref{DixMomenta} and\n\\eqref{PDef}. The situation becomes much more interesting in full\ngeneral relativity. Einstein's equation implies stress-energy\nconservation, although it also connects symmetries of the geometry\nto those of the matter distribution (and vice versa). This allows\nthe introduction of exact conservation laws in arbitrary spacetimes.\n\n\\subsection{Geodesics}\n\nIt is well-known that any Killing fields that may exist provide\nfirst integrals of the geodesic equation. These can be used both to\nderive and parameterize the geodesics of a given spacetime. While\nless commonly discussed, similar quantities can also be associated\nwith other kinds of symmetries \\cite{GeodesicConsts,KatzinGeo}.\nUnlike in the Killing vector case, the presence of more general\ncollineations sometimes implies the existence of interesting\nconserved quantities that are not linear in the geodesic's\nfour-velocity. The curve's affine parameter can also appear\nexplicitly. As a direct calculation will easily verify, two\nconstants associated with an exact affine collineation\n$Y_{\\mathrm{A}}^a$ are \\numparts\\label{GeoConsts}\n\\begin{eqnarray}\n C_1 &= \\dot{y}^a \\dot{y}^b \\Lie_{Y_{\\mathrm{A}}} g_{ab}\n \\label{GeoConst1}\n \\\\\n C_2 & = \\dot{y}_a Y^a_{\\mathrm{A}} - \\frac{1}{2} l C_1 .\n \\label{GeoConst2}\n\\end{eqnarray}\n\\endnumparts\nThese quantities remain fixed along any affinely-parameterized\ngeodesic $y(l)$. The first becomes degenerate if $Y^a_{\\mathrm{A}}$\nis Killing. $C_2$ then reduces to the standard conserved quantity\nassociated with a Killing field. Other constants can sometimes be\nwritten down by combining $Y^a_{\\mathrm{A}}$ with an exact Killing\ntensor \\cite{GeodesicConsts, GeoExamples}. Such constructions will\nnot be discussed here.\n\nConsider instead expressions like those just given with\n$Y_{\\mathrm{A}}^a(x)$ replaced by some Jacobi field\n$\\psi^a(x,\\gamma)$. We then have\n\\begin{eqnarray}\n \\dot{C}_1 = \\frac{\\rmd C_1}{\\rmd l} = -\\frac{2}{l} \\frac{\\rmd C_2}{ \\rmd l} = \\dot{y}^a \\dot{y}^b \\dot{y}^c \\nabla_{(a} \\LieS\n g_{bc)} .\n\\end{eqnarray}\nThe tangent vectors are proportional to $\\sigma^a(y,\\gamma)$ for the\nspecial case of geodesics passing through $\\gamma$. It then follows\nfrom \\eqref{LieSig9} that both $C_1$ and $C_2$ remain conserved\nalong all such trajectories. Each Jacobi field generates exact\ngeodesic constants in this way. In terms of the initial data\n$A_\\sfa$ and $B_{\\sfa {\\mathsf{b}}}$, these have the values\n\\begin{eqnarray}\n C_1 &= 2 \\dot{y}^\\sfa \\dot{y}^{\\mathsf{b}} B_{(\\sfa {\\mathsf{b}})}, \\qquad\n C_2 &= \\dot{y}^\\sfa A_\\sfa \\label{GeoConstInit}\n\\end{eqnarray}\nwhen the parameter $l$ is chosen to vanish at $\\gamma$. It is clear\nfrom this that $B_{[\\sfa {\\mathsf{b}}]}$ is irrelevant. Multiple Jacobi\nfields may therefore generate the same constants on a particular\ncurve.\n\nExact affine collineations generalize these results by also applying\nto non-radial geodesics. Expansions like \\eqref{LieGGradJacobi} can\nbe used to derive how close general Jacobi fields come to this\nideal. 
To lowest nonvanishing order,\n\\begin{equation}\n \\dot{C}_1 \\simeq - \\frac{4}{3} (\\dot{y}^a \\sigma^{\\sfa}{}_{a}) (\\dot{y}^b \\sigma^{{\\mathsf{b}}}{}_{b}) (\\dot{y}^c\n \\sigma^{{\\mathsf{c}}}{}_{c}) X^{\\mathsf{d}} R_{{\\mathsf{d}} {\\mathsf{b}}\n {\\mathsf{c}}}{}^{{\\mathsf{f}}} \\LieS g_{\\sfa {\\mathsf{f}}} + \\Or(X^2) . \\label{Cdot}\n\\end{equation}\nThis term vanishes if $B_{(\\sfa {\\mathsf{b}})}=0$, so the Killing-type\nJacobi fields usually provide more accurate ``conservation laws''\nfor arbitrary geodesics near $\\gamma$. In these cases,\n$\\dot{C}_1(l)$ scales like $(X\/\\mathcal{R})^2\/\\mathcal{R}$, where\n$\\mathcal{R}$ is a curvature radius. This is a worst-case estimate.\n$C_1$ and $C_2$ will probably vary much more slowly if there is a\nphysical sense in which the system is approximately symmetric. Rates\nof change for $C_1$ or $C_2$ can also be constrained using\nidentities like \\eqref{LieSig7}. This effectively restricts how much\nthese parameters can vary as a geodesic moves away from $\\gamma$.\nMore of their changes tend to occur as $y(l)$ moves across rather\nthan with the radial geodesics.\n\nParameters like $C_1$ and $C_2$ can also be defined with respect to\na GAC $\\xi^a$. These should remain approximately conserved for\ngeodesics near an observer's worldline rather than geodesics near a\npoint. They are exact constants for curves passing through $\\Gamma$\nalong the reference foliation. This can be seen by using\n\\eqref{LieXivsPsi} to show that\n\\begin{equation}\n \\dot{y}^a \\dot{y}^b \\dot{y}^c \\nabla_a \\LieX g_{bc} = \\dot{y}^a \\dot{y}^b \\dot{y}^c \\nabla_{a} \\LieS g_{bc}\n\\end{equation}\nwhen $\\dot{y}^a \\nabla_a \\tau =0$. Values of $C_1$ and $C_2$ here\nare the same as in \\eqref{GeoConstInit} if the quantities in that\nequation are evaluated at the point where $y$ intersects $\\Gamma$.\nThis might be useful in a coordinate system constructed from some\ncollection of GACs. Alternatively, it can be viewed as a\ngeneralization of standard results near a given observer.\n\nFurther methods of parameterizing geodesics may be found by\ngeneralizing the constants associated with higher-rank Killing\ntensors. Given any exact Killing tensor $K_{a_1 \\cdots a_n} =\nK_{(a_1 \\cdots a_n)}$, the scalar\n\\begin{equation}\n C_K = K_{a_1 \\cdots a_n} \\dot{y}^{a_1} \\cdots \\dot{y}^{a_n}\n \\label{CK}\n\\end{equation}\nis conserved along all geodesics $y(l)$. An analog of this quantity\nfor the approximate Killing tensor \\eqref{KTApprox1} is exactly\n$C_1$ defined above. Something more interesting can be generated by\nsubstituting \\eqref{KTApprox2} into \\eqref{CK}. In the second-rank\ncase, one may define\n\\begin{equation}\n C_K = (\\dot{y}_a \\psi^a_{\\mathrm{K}}) (\\dot{y}_b \\bar{\\psi}^b_{\\mathrm{K}}) \\label{CKEx}\n\\end{equation}\nfor some Killing-type Jacobi fields $\\psi^a_{\\mathrm{K}}$ and\n$\\bar{\\psi}^a_{\\mathrm{K}}$. This is easily generalized to involve\nan arbitrary number of products, although it is only interesting to\nconsider linearly independent collections of Jacobi fields. The\nmaximum number of useful products is therefore ten. All of these can\nbe generated just from individual terms of the form $\\dot{y}_a\n\\psi^a_{\\mathrm{K}}$. These are interpreted as approximate constants\nassociated with objects that are nearly first-rank Killing tensors\n(i.e. Killing vectors). 
Everything of interest here can therefore be\nderived from the behavior of\n\\begin{equation}\n C_3 = \\dot{y}_a \\psi^a_{\\mathrm{K}} = C_2 + \\frac{1}{2} l C_1.\n \\label{C3}\n\\end{equation}\nAlthough this depends only on the two approximate constants defined\nbefore, it may be interpreted as an additional useful parameter. It\nhas the interesting property that\n\\begin{equation}\n \\dot{C}_3 = \\frac{1}{2} C_1 . \\label{C3Dot}\n\\end{equation}\nTime derivatives do not appear on the right hand side. Consider the\nspecial case of a geodesic that passes through $\\gamma$. Since the\nJacobi field was assumed to be Killing at its origin, $C_1 = 0$.\nThis is true everywhere, so $C_3$ also remains fixed along the\nentire geodesic. It actually coincides with $C_2$ in this case.\n\nDifferences arise when considering non-radial geodesics. It was\nremarked above that there was a sense in which $C_1$ and $C_2$ only\nvaried due to non-radial components of $\\dot{y}^a$. This type of\nstatement can be made much more precise for $C_3$. Let\n\\begin{equation}\n \\dot{y}^a(l) = u_{||} (l) \\sigma^a(y(l),\\gamma) + u^a_\\bot(l) ,\n\\end{equation}\nwhere $u^{[a}_\\bot \\sigma^{b]} =0$. Now \\eqref{LieSig6},\n\\eqref{GeoConst1}, and \\eqref{C3Dot} show that\n\\begin{equation}\n \\dot{C}_3 = \\frac{1}{2} u_\\bot^a u_\\bot^b \\LieS g_{ab} .\n \\label{C3DotBeta}\n\\end{equation}\nThis result is exact for all geodesics $y(l)$. There are many cases\nwhere $u_\\bot^a$ becomes vanishingly small as $l \\rightarrow \\pm\n\\infty$ (when the geodesic exists for these parameter values), so\n\\eqref{C3DotBeta} provides a strong restriction on how much $C_3$\ncan vary in any given situation. Very near $\\gamma$,\n\\eqref{LieGExpandJacobi} can be used to show that\n\\begin{equation}\n \\dot{C}_3 \\simeq - \\frac{1}{6} (u_\\bot^a \\sigma^{\\sfa}{}_{a}) (u_\\bot^b \\sigma^{{\\mathsf{b}}}{}_{b})\n X^{\\mathsf{c}} X^{\\mathsf{d}} \\LieS R_{\\sfa {\\mathsf{c}} {\\mathsf{b}} {\\mathsf{d}}} + \\Or(X^3) .\n\\end{equation}\nThe lowest order contributions here scale like\n$(X\/\\mathcal{R})^2\/\\mathcal{R}$. This is similar to the result\nexpected for $\\dot{C}_1$ when computed using a Killing-type Jacobi\nfield.\n\n\\subsection{Extended matter\ndistributions}\\label{Sect:ExtendedMatter}\n\nFrom a physical perspective, it is often more interesting to\nconsider possibly approximate integrals of the equations of motion\ndescribing an extended matter distribution rather than a pointlike\ntest particle. Suppose that this matter is modeled by a conserved\nstress-energy tensor $T^{ab}$. Contracting it with any exact Killing\nvectors that may exist yields conserved currents. These are\nequivalent to some subset of the typical laws of linear and angular\nmomentum conservation known in flat spacetime. More generally,\n\\eqref{StressCons} shows that\n\\begin{equation}\n \\nabla_a ( T^{a}{}_{b} Y^b ) = \\frac{1}{2} T^{ab} \\Lie_Y g_{ab} .\n\\end{equation}\nThis holds for any vector field $Y^a$, although it is convenient to\nassume that it is a Killing-type GAC. The source term on the\nright-hand side may then be considered small near $\\Gamma$. This\ntherefore serves as an approximate conservation law. 
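\n\nTo be slightly more explicit (the signs below depend on orientation\nconventions for the surface elements), let $W_{12}$ denote the\nportion of $W$ bounded by $\\Sigma(s_1)$, $\\Sigma(s_2)$, and the\nintervening piece $\\mathcal{T}_{12}$ of the worldtube's boundary.\nIntegrating the previous identity over $W_{12}$ and applying Stokes'\ntheorem gives, schematically,\n\\begin{equation}\n \\ItP_{Y}(s_2) - \\ItP_{Y}(s_1) = \\frac{1}{2} \\int_{W_{12}} T^{ab}\n \\Lie_Y g_{ab} \\rmd V - \\int_{\\mathcal{T}_{12}} T^{a}{}_{b} Y^b \\rmd S_a ,\n\\end{equation}\nwhere $\\ItP_{Y}$ denotes \\eqref{PDef} with $\\xi^a$ replaced by $Y^a$.\nThe first term on the right-hand side is controlled by how nearly\n$Y^a$ generates an isometry inside $W_{12}$, while the second\nmeasures the flow of matter across the side boundary.\n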
As long as\nthere is no matter flow through $\\partial \\Sigma$, quantities like\n\\eqref{PDef} might be expected to vary slowly in time.\n\n\nStress-energy conservation can be seen as a consequence of the\ndiffeomorphism invariance of a system's underlying action.\nConstructions that are based on it are therefore useful in many\ntheories of gravity besides general relativity. They can also hold\nfor test bodies in fixed background geometries. This generality is\nquite restrictive. Much more can be said if the full Einstein\nequation is assumed to hold. Symmetries in the geometry are then\nrelated to symmetries in the matter fields. The presence of an exact\nKilling field $Y^a_{\\mathrm{K}}$ that also satisfies\n$\\Lie_{Y_\\mathrm{K}} T_{ab} = 0$ allows many interesting results to\nbe proven regarding the momenta $p_\\sfa$ and $S_{\\sfa {\\mathsf{b}}}$ defined\nin \\eqref{DixMomenta}. For example, the net force and torque on a\nbody can be written explicitly in terms of the Killing field and its\nfirst derivative at a point. If $Y^a_{\\mathrm{K}}$ is timelike, a\nbody's center-of-mass can be shown to follow one of its orbits.\nFurthermore, mass centers always lie on the central axis of\ncylindrically symmetric spacetimes \\cite{SchattStreub1,\nSchattStreub2}.\n\nMomenta defined in terms of $T^{a}{}_{b} \\xi^b$ are useful for many\npurposes, although they are not conserved in the absence of exact\nKilling fields. Determining how they vary over time requires\ndetailed knowledge of a body's internal structure. Alternative\ndefinitions for the linear and angular momenta of an extended body\narise when using the full Einstein equation rather than just\n\\eqref{StressCons}. Taking the trace of Ricci's identity and\nrearranging terms shows that\n\\begin{equation}\n R^{a}{}_{b} Y^b = \\frac{1}{2} ( g^{ac} g^{bd} - g^{ab} g^{cd} )\n \\nabla_b \\Lie_Y g_{cd} + \\nabla_b \\nabla^{[a} Y^{b]} .\n \\label{RicciId}\n\\end{equation}\nThis holds for any vector field $Y^a$. Note that the second term on\nthe right-hand side is always conserved. It follows that\n\\begin{eqnarray}\n \\nabla_a [ 2 (T^{a}{}_{b} Y^b -\\frac{1}{2} Y^a T^{b}{}_{b}) + j^a_Y] = 0, \\label{AffineConsLaw}\n\\end{eqnarray}\nwhere the ``gravitational current'' $j^a_Y$ associated with $Y^a$\nhas been defined by\n\\begin{equation}\n j^a_Y = \\frac{1}{8 \\pi} ( g^{ab} g^{cd} - g^{ac} g^{bd} )\n \\nabla_b \\Lie_Y g_{cd} .\n \\label{GravCurrent}\n\\end{equation}\nIt clearly vanishes if $Y^a$ is an exact affine collineation. This\nis not the only case where the current's contribution to\n\\eqref{AffineConsLaw} disappears. Using the Bianchi identity,\n\\begin{equation}\n \\nabla_a j^a_Y = - \\frac{1}{8\\pi} g^{ab} \\Lie_Y R_{ab}.\n \\label{divJ}\n\\end{equation}\nAny vector field satisfying $g^{ab} \\Lie_Y R_{ab} =0$ will therefore\ngenerate conserved matter currents involving only $T^{a}{}_{b} Y^b -\nY^a T^{b}{}_b\/2$. That such ``contracted Ricci collineations''\ngenerate conservation laws for matter distributions has been noted\nbefore in \\cite{KatzinLevine, Collinson}.\n\nThe viewpoint here will be to apply \\eqref{RicciId} and\n\\eqref{AffineConsLaw} with $Y^a$ replaced by some approximate\nsymmetry.\n\\begin{definition}\n Fix some family of Jacobi fields $\\psi^a(x,\\gamma(s))$ that generates a GAC\n via \\eqref{XiDef}. Define the generalized Komar momentum\n $\\ItP^*_\\psi$ associated with these fields by\n \\begin{equation}\n \\ItP^*_\\psi(s) = \\frac{1}{8\\pi} \\oint_{\\partial \\Sigma(s)} \\nabla^{[a}\n \\psi^{b]} \\rmd S_{ab} . 
\\label{PStarDef}\n \\end{equation}\n\\end{definition}\nAs the name suggests, this has the same form and interpretation as a\nKomar integral. It is convenient to assume that $s$ is a fixed\nparameter for the purpose of evaluating the derivative in\n\\eqref{PStarDef}. Directly using a GAC in place of $\\psi^a$ would\nadd a dependence on the reference frame. Note that no such\ndistinctions had to be made for the $\\ItP_\\xi$ defined in\n\\eqref{PDef}. Applying Stokes' theorem together with \\eqref{RicciId}\nand \\eqref{GravCurrent} shows that\n\\begin{equation}\n \\ItP^*_\\psi = \\int_\\Sigma \\big[ 2 (T^{a}{}_{b} \\psi^b - \\frac{1}{2}\n \\psi^a T^{b}{}_{b} ) + j^a_\\psi \\big] \\rmd S_a . \\label{PStar2}\n\\end{equation}\nThere is a well-defined sense in which changes in this quantity are\ndetermined by a combination of ``gravitational wave'' and matter\nfluxes across the boundary $\\partial \\Sigma$. In this\ninterpretation, the amount of $\\ItP^*_\\psi$ carried away from a\nsystem via gravitational waves vanishes if the GAC associated with\n$\\psi^a$ is an affine collineation.\n\nThe 20-parameter family of scalars $\\ItP^*_\\psi$ is intended to\ndefine the linear and angular momenta of an extended body. This is\nat least the interpretation for the 10-parameter subset satisfying\n$B_{(\\sfa {\\mathsf{b}})}=0$. As in \\eqref{DixMomenta}, it is possible to\nwrite these momenta in the more conventional form of tensor fields\non $\\Gamma$. Let\n\\begin{equation}\n \\ItP^*_\\psi = p_\\sfa^* A^\\sfa + \\frac{1}{2} S^*_{\\sfa {\\mathsf{b}}}\n B^{\\sfa {\\mathsf{b}}}\\label{KomarMoment}\n\\end{equation}\nfor all $A_\\sfa$ and $B_{\\sfa {\\mathsf{b}}}$. As written, the generalized\nangular momentum $S_{\\sfa{\\mathsf{b}}}^*$ needn't have any particular index\nsymmetries. Non-Killing Jacobi fields generate the symmetric\ncomponents of this tensor, although such generality isn't necessary.\nVarying among all combinations of initial data completely recovers\n$p^*_\\sfa$ and $S^*_{\\sfa {\\mathsf{b}}}$. Direct expressions can also be\nobtained with the use of \\eqref{JacobiFirstDer}. Continuing to work\nwith the less explicit form \\eqref{KomarMoment}, rates of change of\nthe tensor momenta may easily be extracted from $\\dot{\\ItP}^*_\\psi$.\nUsing the KT equations \\eqref{KTA} and \\eqref{KTB},\n\\begin{equation}\n \\dot{\\ItP}^*_\\psi = ( \\dot{p}^*_\\sfa - \\frac{1}{2} S^*_{{\\mathsf{b}} {\\mathsf{c}}}\n R^{{\\mathsf{b}} {\\mathsf{c}}}{}_{{\\mathsf{d}} \\sfa} \\dot{\\gamma}^{\\mathsf{d}} ) A^\\sfa +\n \\frac{1}{2} ( \\dot{S}^*_{\\sfa {\\mathsf{b}}} + 2 \\dot{\\gamma}_\\sfa p^*_{\\mathsf{b}}) B^{\\sfa {\\mathsf{b}}}.\n\\end{equation}\nThis is closely analogous to \\eqref{Papapetrou}. The left-hand side\nis parameterized entirely by $A_\\sfa$ and $B_{\\sfa {\\mathsf{b}}}$, so\nvarying these quantities determines all of the corrections to the\nPapapetrou equations.\n\nTwo definitions have now been suggested for the momenta of an\nextended body. The first -- summarized by \\eqref{DixMomenta} and\n\\eqref{PDef} -- is closely related to the one given by Dixon\n\\cite{Dix70a, Dix74, Dix79}. It is well-adapted to the construction\nof multipole moments for $T^{ab}$ that intrinsically take into\naccount stress-energy conservation. Mass centers defined from these\nmomenta are known to have most of the properties one might expect\n\\cite{SchattStreub1,CM}. The boundary of the worldtube $W$ isn't\nimportant as long as it lies outside of the matter distribution\nunder discussion. 
There is no vacuum momentum under this definition.\nUnfortunately, there does not appear to be any exact analog of\nGauss' law either. The momenta of a matter distribution (and changes\nto it) must be computed by integrating over 3-volumes. The\ngeneralized Komar integrals defined by \\eqref{PStarDef} have\ncomplementary characteristics. Their main advantage is in having a\ndirect interpretation analogous to Gauss' law. Changes in the\ncomponent of an isolated body's momentum generated by $\\psi^a$ only\ndepends on the gravitational flux $j^a_\\psi$ passing through the\nsurface $\\partial W$. The mass and angular momenta expected from\nthis definition also agree with commonly-accepted notions at least\nin appropriately symmetric spacetimes. It is potentially problematic\nthat $\\ItP^*_\\psi$ includes what is effectively a vacuum energy.\nThese scalars usually depend on the spatial extent of $W$ even when\nits boundary lies far outside of any matter distribution. This can\nmake it difficult to neatly separate the properties of disjoint\nmatter distributions, although similar situations are found even in\nordinary electromagnetism. It might be conceptually simpler to\nextend $\\partial W$ to infinity, although it is unlikely that all of\nthe bitensors used here would remain well-defined. There might also\nbe convergence problems. Related concepts presented in\n\\cite{EnergyConference} could be more useful for defining momenta\nover very large regions.\n\nSome insight into the behavior of the $\\ItP^*_\\psi$ defined here can\nbe gained by computing it for very small spheres. To be specific,\nlet $C(r,s)$ be the closed 2-surface on $\\Sigma(s)$ satisfying\n$X^\\sfa X_\\sfa = r^2$ for some $r>0$. This is effectively a sphere\nof proper radius $r$ centered at the point $\\gamma(s)$. It follows\nfrom \\eqref{GACAffine} and \\eqref{GravCurrent} that $j^a_\\psi$ is\nnegligible for small radii in the presence of matter. The momentum\ninside $C$ is approximately\n\\begin{equation}\n P^*_\\xi \\simeq \\frac{8\\pi}{3} r^3 (T^{\\sfa}{}_{{\\mathsf{b}}} A^{\\mathsf{b}} -\n \\frac{1}{2} A^{\\sfa} T^{{\\mathsf{b}}}{}_{{\\mathsf{b}}}) \\nabla_\\sfa \\tau + \\Or(r^4) .\n\\end{equation}\nIt is perhaps more interesting to consider regions that are locally\ndevoid of matter. These can be understood from the behavior of the\ngravitational current. Its approximate behavior near $\\Gamma$ is\neasily calculated from \\eqref{LieGGradJacobi} and\n\\eqref{GravCurrent}. To lowest nontrivial order,\n\\begin{eqnarray}\n \\fl \\qquad \\qquad j^a_\\psi \\simeq - \\frac{1}{8 \\pi} \\sigma^{a}{}_{\\sfa} X^{\\mathsf{b}} \\big[\n \\LieS R^{\\sfa}{}_{{\\mathsf{b}}} + \\frac{1}{3} \\big( g^{\\sfa {\\mathsf{c}}} R^{{\\mathsf{d}}}{}_{{\\mathsf{b}}} + 2 R_{{\\mathsf{b}}}{}^{{\\mathsf{c}} \\sfa {\\mathsf{d}}} \\big) \\LieS\n g_{{\\mathsf{c}} {\\mathsf{d}}} \\big] + \\Or(X^2). \\label{GravCurrentEst}\n\\end{eqnarray}\nThis vanishes in vacuum for Killing-type Jacobi fields. A little\nmore calculation finds the same conclusion at order $X^2$ as well.\nThis contrasts sharply with other quasilocal notions of vacuum\nmomentum in general relativity. An extensive review of these\nconcepts may be found in \\cite{EnergyRev}. As remarked there, the\nenergy contained in small spheres has been calculated using several\ndifferent definitions. The generic result is that it is proportional\nto the Bel-Robinson tensor, and scales like $r^5$. 
If this were true\nfor the definition suggested here, terms quadratic in the curvature\nwould appear at order $X^2$ in the current. These are not found.\nVacuum momenta should really only be associated with Killing-type\nsymmetries, so the relevant currents defined here decrease at least\nas fast as $r^6$ as $r \\rightarrow 0$. Other definitions in the\nliterature find more energy in very small spheres. It is not clear\nhow to interpret this, although it might have interesting\nconsequences for the use of near zones and related concepts\nconnected to the mechanics of compact bodies.\n\n\n\\section{An example: gravitational plane waves}\\label{Sect:Example}\n\nThe discussion so far has mainly focused on the behavior of\ngeneralized symmetries near the point or worldline used to construct\nthem. With the exception of general identities like \\eqref{LieSig6},\nit has not been clear what happens to these vector fields far away\nfrom their origins. It is therefore useful to consider an example.\nGiven \\eqref{JacobiFirstDer}, the Jacobi fields can all be\ncalculated simply by differentiating the world function. This makes\nit convenient to consider spacetimes where $\\sigma$ is known\nexactly. Essentially the only examples of this type are\n\\textit{pp}-waves or assorted cosmological models (see\n\\cite{WorldFunction} and references cited therein).\n\nIn the interest of understanding the generalized Komar momenta\n\\eqref{PStarDef} in a vacuum spacetime, only \\textit{pp}-waves will\nbe considered here. Coordinates may be introduced such that the\nmetric satisfies\n\\begin{equation}\n \\rmd s^2 = - 2 \\rmd u \\rmd v + a(u) \\rmd x^2 + b(u) \\rmd y^2 .\n \\label{ppGeneral}\n\\end{equation}\nIt can then be shown that one-half of the geodesic distance between\npoints with coordinates $(u,v,x,y)$ and $(\\mathsf{u}, \\mathsf{v},\n\\mathsf{x}, \\mathsf{y})$ is given by \\cite{Gunther,Friedlander}\n\\begin{equation}\n \\sigma = \\frac{1}{2} [ \\alpha (u,\\mathsf{u}) (x-\\mathsf{x})^2 + \\beta (u,\\mathsf{u}) (y-\\mathsf{y})^2 ] -\n (u-\\mathsf{u}) (v-\\mathsf{v}), \\label{SigmaExact}\n\\end{equation}\nwhere\n\\begin{equation}\n \\alpha(u,\\mathsf{u}) = \\frac{u-\\mathsf{u}}{\\int_{\\mathsf{u}}^{u} a^{-1}(w) \\rmd w} ; \\quad \\beta(u,\\mathsf{u}) = \\frac{u-\\mathsf{u}}{\\int_{\\mathsf{u}}^{u} b^{-1}(w) \\rmd\n w}.\n\\end{equation}\nIt is clear by inspection that $\\partial\/\\partial x$,\n$\\partial\/\\partial y$, and $\\partial\/\\partial v$ are exact Killing\nvectors. They are not the only ones. All nontrivial spacetimes in\nthis class admit between five and seven linearly-independent Killing\nfields.\n\nMost \\textit{pp}-waves are effectively ``null dust'' solutions of\nEinstein's equation, although there are vacuum examples as well. One\nof these is given by\n\\begin{equation}\n a(u) = \\cos^2 (\\lambda u) ; \\quad b(u) = \\cosh^2 (\\lambda u)\n , \\label{GravWave}\n\\end{equation}\nwhere we assume that $|\\lambda u|< \\pi\/2$ in order to avoid the two\ncoordinate singularities. This represents a simple plane-fronted\ngravitational wave with amplitude $\\lambda$. 
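\n\nAs a quick check of the vacuum property, note that for any metric of\nthe form \\eqref{ppGeneral}, the only component of the Ricci tensor\nthat can fail to vanish is $R_{uu}$, which is proportional to\n$(\\sqrt{a})''\/\\sqrt{a} + (\\sqrt{b})''\/\\sqrt{b}$, with primes denoting\nderivatives with respect to $u$. Substituting \\eqref{GravWave} gives\n$(\\sqrt{a})''\/\\sqrt{a} = -\\lambda^2$ and $(\\sqrt{b})''\/\\sqrt{b} =\n\\lambda^2$, so this combination vanishes and the geometry is indeed\nRicci-flat.\n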
The only non-vanishing\ncomponents of the curvature are\n\\begin{equation}\n C_{uxu}{}^{x} = C_{uyu}{}^{y} = \\lambda^2.\n\\end{equation}\nIt is trivial to modify this spacetime to be flat for (say) $u<0$\n\\cite{PlaneWaves}, although impulsive waves of this type will not be\ndiscussed here.\n\nGiven \\eqref{SigmaExact}, it is straightforward to explicitly\ncompute the Jacobi propagators $H^{a}{}_{{\\mathsf{b}}}\n\\sigma^{{\\mathsf{b}}}{}_{\\sfa}$ and $H^{a}{}_{\\sfa} \\sigma_{\\mathsf{b}}$ using\n\\eqref{HDef}. This can be done for any choice of $a(u)$ and $b(u)$,\nalthough we will specialize to the case defined by \\eqref{GravWave}.\nThe results are not particularly enlightening to write down in\ndetail, although they have some interesting consequences. First, all\nJacobi fields are found to be exactly Killing if their first\nderivatives vanish at the base point (denoted as usual with\nsans-serif font). The Jacobi fields $H^{a}{}_{[\\mathsf{x}}\n\\sigma_{\\mathsf{v}]}$ and $H^{a}{}_{[\\mathsf{y}}\n\\sigma_{\\mathsf{v}]}$ are also Killing. This identifies six\nindependent Killing fields. It also implies that the linear\ngravitational momentum $p^*_\\sfa $ defined in \\eqref{KomarMoment}\nmust vanish. This can be taken to imply (unsurprisingly) that the\ngravitational wave has zero rest mass: $|p^*|^2=0$.\n\nThere remain four non-Killing Jacobi fields with skew-symmetric\n$B_{\\sfa {\\mathsf{b}}}$. The one associated with spatial rotations in the\n$x-y$ plane is relatively simple to write down when $\\mathsf{u}=0$:\n\\begin{eqnarray}\n \\fl \\qquad \\psi_{\\mathsf{xy}}^a = 2 H^{a}{}_{\\mathsf{[x}} \\sigma_{\\mathsf{y}]} =\n -(y-\\mathsf{y})\n \\left( \\frac{ \\tan(\\lambda u) }{ \\tanh(\\lambda u) } \\right)\n \\frac{\\partial}{\\partial x} + (x - \\mathsf{x}) \\left( \\frac{ \\tanh(\\lambda u) }{\n \\tan(\\lambda u) } \\right) \\frac{\\partial}{\\partial y}\n \\nonumber\n \\\\\n \\qquad \\qquad ~ + \\lambda ( x -\\mathsf{x})\n (y - \\mathsf{y}) \\left( \\frac{ \\tanh(\\lambda u) - \\tan (\\lambda u) }{ \\tan(\\lambda u) \\tanh(\\lambda u) } \\right) \\frac{\\partial}{\\partial\n v} .\n\\end{eqnarray}\nThis clearly reduces to its expected form when $u \\rightarrow\n\\mathsf{u}$. The degree to which it succeeds in being a genuine\nKilling field may be estimated by noting that\n\\begin{equation}\n \\fl \\qquad |\\mathcal{L}_{\\psi_{\\mathsf{xy}}} g_{ab}|^2 = \\left( \\frac{\\cos(2 \\lambda u) + \\cosh(2 \\lambda u) -2 }{\\sqrt{2} \\sin(\\lambda u) \\sinh(\\lambda\n u)} \\right)^2 \\simeq \\frac{8}{9} (\\lambda u)^4 + \\Or(u^8)\n \\label{LieEstimate1}\n\\end{equation}\nwhen $\\mathsf{u}=0$. The quadratic growth estimate here is typically\nquite good even near the coordinate singularities. There is little\nqualitative change in the nature of this expression if $\\mathsf{u}\n\\neq 0$. More interestingly, the gravitational current\n\\eqref{GravCurrent} associated with $\\psi^a_{\\mathsf{xy}}$ always\nvanishes. This suggests that an observer would not be compelled to\nascribe any $xy$ component of angular momentum to gravitational\nwaves with the given form. $H^{a}{}_{[\\mathsf{u}}\n\\sigma_{\\mathsf{v}]}$ has similar properties. It satisfies an\nequation almost identical to \\eqref{LieEstimate1}, and the\ngravitational current generated by it always vanishes.\n\nMore interesting are the remaining two Killing-type Jacobi fields\n$\\psi_{\\mathsf{xu}}^a = 2 H^{a}{}_{[\\mathsf{x}}\n\\sigma_{\\mathsf{u}]}$ and $\\psi_{\\mathsf{yu}}^a = 2\nH^{a}{}_{[\\mathsf{y}} \\sigma_{\\mathsf{u}]}$. 
Specializing again to\nthe case $\\mathsf{u}=0$,\n\\begin{eqnarray}\n \\fl \\quad |\\mathcal{L}_{\\psi_{\\mathsf{xu}}} g_{ab}|^2 \\simeq |\\mathcal{L}_{\\psi_{\\mathsf{yu}}} g_{ab}|^2 \\simeq \\frac{2}{3}\n \\lambda^4 u^2 \\Big(\n [ (x - \\mathsf{x})^2+(y - \\mathsf{y})^2] + \\frac{2}{3} (v-\\mathsf{v}) u \\Big) + \\Or(u^4) .\n\\end{eqnarray}\nUnlike the expansion in \\eqref{LieEstimate1}, this approximation\nfails long before $|\\lambda u| \\rightarrow \\pi\/2$. In general, the\ntwo magnitudes on the left-hand side have distinct behaviors that\nstrongly depend on $\\mathsf{u}$. Oscillations generically arise as\n$u$ is varied, for example. See Fig. 1.\n\n\\begin{figure}\n \\begin{center}\n\n \\includegraphics[width=0.8\\textwidth]{LieEx.eps}\n\n \\caption{Plots of $|\\mathcal{L}_{\\psi_{\\mathsf{xu}}} g_{ab}|^2$ for\n a gravitational plane wave described by \\eqref{ppGeneral} and\n \\eqref{GravWave}. The origin is assumed to be at $\\mathsf{u}=0$.\n Both solid curves assume that $v-\\mathsf{v} = 0$. The dashed ones\n use $\\lambda(v-\\mathsf{v})=1\/2$ instead. Both thicker curves set\n $x-\\mathsf{x}=0$ and $\\lambda(y-\\mathsf{y})=1\/4$. The thinner ones use\n $\\lambda(x-\\mathsf{x})=1$ and $y-\\mathsf{y}=0$. Plots for\n $|\\mathcal{L}_{\\psi_{\\mathsf{yu}}} g_{ab}|^2$ look very similar\n unless $\\mathsf{u} \\neq 0$.}\n \\end{center}\n\\end{figure}\n\nThere are gravitational currents associated with both\n$\\psi^{a}_{\\mathsf{xu}}$ and $\\psi^{a}_{\\mathsf{yu}}$. Applying\n\\eqref{GravCurrent} and expanding near $\\mathsf{u}=0$,\n\\begin{equation}\n j^a_{\\psi_{\\mathsf{xu}}} = \\frac{\\lambda^6}{180 \\pi} u^4\n \\left[ u \\frac{\\partial}{\\partial x} + (x - \\mathsf{x}) \\frac{\\partial}{\\partial v} \\right] + \\Or (u^6) .\n\\end{equation}\nSimilarly,\n\\begin{equation}\n j^a_{\\psi_{\\mathsf{yu}}} = - \\frac{\\lambda^6}{180 \\pi} u^4\n \\left[ u \\frac{\\partial}{\\partial y} + (y - \\mathsf{y}) \\frac{\\partial}{\\partial v} \\right] + \\Or(u^6).\n\\end{equation}\nThese expansions are qualitatively accurate throughout the region of\ninterest. It is now clear from \\eqref{PStar2} and\n\\eqref{KomarMoment} that the only non-vanishing gravitational\nmomenta (associated with Killing-type GACs) are\n\\begin{equation}\n S^*_{\\mathsf{x u}} = \\int_\\Sigma j^a_{\\psi_{xu}} \\rmd S_a ; \\qquad S^*_{\\mathsf{y u}}\n = \\int_\\Sigma j^a_{\\psi_{yu}} \\rmd S_a .\n\\end{equation}\nThe rates at which these quantities change depends on the relevant\nfluxes through $\\partial \\Sigma$. Regardless, the magnitude of the\nangular momentum tensor always vanishes. Intuitively, these\nstatements might be taken to mean that the gravitational wave has a\n``mass dipole moment'' equal to its one non-vanishing component of\nordinary angular momentum.\n\nThe results here could be straightforwardly extended to much more\ngeneral \\textit{pp}-wave (and other) spacetimes. The most\ninteresting point is perhaps the calculation of explicit\ngravitational currents in a vacuum spacetime. In these cases, the\ngeneral results obtained in Sect. \\ref{Sect:ExtendedMatter} only\nstate that $j^a$ will decrease no slower than $X^3$ as $X\n\\rightarrow 0$. The example here scales like $X^5$. Although this\nconclusion would probably not be preserved in more complicated\nspacetimes, it shows that momenta not arising from stress-energy\ntensors can sometimes be ignored in remarkably large regions.\n\n\\section{Conclusions}\n\nTwo different notions of approximate affine collineations have been\nintroduced. 
One has the physical interpretation of capturing\nsymmetry principles in a normal neighborhood of a point, while the\nother is adapted to the measurements of a particular observer. Flows\ngenerated by both of these objects leave $\\sigma^\\sfa(x,\\gamma)$\ninvariant. This has the simple interpretation that Jacobi fields\npreserve Riemann normal coordinates. GACs do the same for the\nspatial components of Fermi normal coordinate systems. These objects\nalways exist, and each forms a 20-dimensional group. Individual\nelements may be interpreted using the values of the field and its\nfirst derivatives at the appropriate base point. The only caveat to\nthis is that a GAC which might initially appear to be purely\ntranslational could slowly acquire some rotational and boost-type\ncomponents. This mixing is essential in order to ensure that the\nfields nearly satisfy \\eqref{AffineDef} near the observer's\nworldline.\n\nThe relevance of these definitions ultimately lies in their\napplications. The approximate symmetries introduced here have been\nused to write down analogs of the typical conservation laws applying\nto geodesics in spacetimes admitting affine collineations. Some of\nthe resulting parameters are exact constants of motion, while others\nare only expected to vary slowly near the preferred point or\nobserver. Regardless, they may be used to classify and derive\ngeodesics in certain regions. Similar results have also been\ndiscussed in connection with extended matter distributions. This led\nto natural notions for the linear and angular momenta of a spacetime\nvolume as viewed in a particular frame. There seems to be some\ndisagreement with other quasilocal notions of gravitational momenta,\nso it is not clear how the definition here should be interpreted. It\nis unknown if it has any positivity or related characteristics.\n\nConcepts discussed here might also be applied to simplify\nperturbation theory off of some background geometry possessing an\nexact affine collineation. It could be useful, for example, to\nuniquely construct GACs with respect to a center-of-mass worldline\nthat coincides with exact timelike or axial Killing fields in the\nunperturbed geometry. Center-of-mass trajectories might also be\nestimated using notions of approximate stationarity. More\nconcretely, an analysis of the quantities $\\ItP_\\xi$ defined in\n\\eqref{PDef} can be shown to provide significant insights into the\neffects of self-forces and self-torques on isolated bodies. Details\nare presented elsewhere \\cite{HarteFuture}.\n\n\\ack\n\nI am grateful for many helpful discussions and comments from Robert\nWald and Samuel Gralla. This work was supported by NSF grant\nPHY04-56619 to the University of Chicago.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{#1} #2}\n\t\\vspace{-2pt}\n}\n\n\n\\title{Fair Division of Time: Multi-layered Cake Cutting}\n\n\\author{\nHadi Hosseini$^1$\\and\nAyumi Igarashi$^2$\\and\nAndrew Searns$^1$\n\\affiliations\n$^1$Rochester Institute of Technology, US\\\\\nNational Institute of Informatics, Japan\n\\emails\nhhvcs@rit.edu,\nayumi\\_igarashi@nii.ac.jp,\nabs2157@rit.edu\n}\n\n\\begin{document}\n\n\\maketitle\n\n\\begin{abstract}\nWe initiate the study of multi-layered cake cutting with the goal of fairly allocating multiple divisible resources (layers of a cake) among a set of agents. The key requirement is that each agent can only utilize a single resource at each time interval. 
Several real-life applications exhibit such restrictions on overlapping pieces; for example, assigning time intervals over multiple facilities and resources or assigning shifts to medical professionals. We investigate the existence and computation of envy-free and proportional allocations. We show that envy-free allocations that are both feasible and contiguous are guaranteed to exist for up to three agents with two types of preferences, when the number of layers is two. We also show that envy-free feasible allocations where each agent receives a polynomially bounded number of intervals exist for any number of agents and layers under mild conditions on agents' preferences. We further devise an algorithm for computing proportional allocations for any number of agents and layers. \n\\end{abstract}\n\n\\section{Introduction}\nConsider a group of students who wish to use multiple college facilities such as a conference room and an exercise room over different periods of time.\nEach student has a preference over what facility to use at different times of the day: Alice prefers to set her meetings in the morning and exercise in the afternoon, whereas Bob prefers to start the day with exercising for a couple of hours and meet with his teammates in the conference room for the rest of the day.\n\nThe fair division literature has extensively studied the problem of dividing a heterogeneous divisible resource (aka a \\textit{cake}) among several agents who may have different preferences over the various pieces of the cake~\\citep{Steinhaus48,Robertson98,Brams06}.\nThese studies have resulted in a plethora of axiomatic and existence results~\\citep{barbanel2005geometry,moulin2004fair} as well as computational solutions~\\citep{procaccia2013cake,aziz2016discretefour} under a variety of assumptions, and were successfully implemented in practice (see~\\citep{procaccia_moulin_2016,Brams96} for an overview).\nIn the case of Alice and Bob, each facility represents a layer of the cake in a \\textit{multi-layered cake cutting} problem, and the question is how to allocate the time intervals (usage rights) of the facilities according to their preferences in a fair manner.\n\n\nOne naive approach is to treat each cake independently and solve the problem through well-established cake-cutting techniques by performing a fair division on each layer separately.\nHowever, this approach has major drawbacks: First, the final outcome, although fair on each layer, may not necessarily be fair overall. Second, the allocation may not be feasible, i.e., it may assign two overlapping pieces (time intervals) to a single agent. \nIn our example, Alice cannot utilize the exercise room and the conference room at the same time if she receives overlapping intervals.\nSeveral other application domains exhibit similar structures over resources: assigning nurses to various wards and shifts, doctors to operation rooms, and research equipment to groups, to name a few.\n\nIn multi-layered cake cutting, each layer represents a divisible resource. Each agent has additive preferences over disjoint (non-overlapping) intervals. A division of a multi-layered cake is \\emph{feasible} if no agent's share contains overlapping intervals, and is \\emph{contiguous} if each allocated piece of a layer is contiguous. \nThere has been some recent work on dividing multiple cakes among agents~\\citep{cloutier2010two,lebert2013envy}. 
Yet, none of the previous work considered the division of multiple resources under feasibility and contiguity constraints. \nIn this paper, we thus ask the following question: \n\\begin{quote}\n\\textit{What fairness guarantees can be achieved under feasibility and contiguity constraints for various numbers of agents and layers?}\n\\end{quote}\n\n\\subsection{Our Results}\nWe initiate the study of the multi-layered cake cutting problem for allocating divisible resources, under contiguity and feasibility requirements. Our focus is on two fairness notions, \\textit{envy-freeness} and \\textit{proportionality}. Envy-freeness (EF) requires that each agent believes no other agent's share is better than its share of the cake. Proportionality (Prop) among $n$ agents requires that each agent receives a share that is valued at least $\\frac{1}{n}$ of the value of the entire cake.\nFor efficiency, we consider \\textit{complete} divisions with no leftover pieces.\n\nFocusing on envy-free divisions, we show the existence of envy-free and complete allocations that are both feasible and contiguous for two-layered cakes and up to three agents with at most two types of preferences. These cases are particularly appealing since many applications often deal with dividing a small number of resources among a few agents (e.g. assigning meeting rooms). Turning our attention to the case when the contiguity requirement is dropped, we then show that envy-free feasible allocations exist for any number $n$ of agents and any number $m$ of layers with $m \\leq n$, under mild conditions on agents' preferences.\nWe further show that proportional complete allocations that are both feasible and contiguous exist when the number of layers is a power of two. \nSubsequently, we show that although this result cannot be immediately extended to any number of agents and layers, a proportional complete allocation that is feasible exists when the number of layers is at most the number of agents, and can be computed efficiently.\n\n\n\\begin{table}[t]\n\\small \n\\centering\n\\begin{tabular}{@{}lllll@{}}\n\\toprule\nAgents ($n$) & Layers ($m$) & EF & Prop & \\\\ \\midrule\n2 & 2 & \\cmark (Thm. \\ref{thm:EF:two} ) & \\cmark (Thm. \\ref{thm:exponential}) & \\\\\n3 & 2 & \\cmark (Thm. \\ref{thm:EF:any}$^\\diamondsuit\\dagger$) & \\cmark (Thm. \\ref{thm:exponential}) & \\\\\nany $n\\geq m$ & $2^{a}$, \\scriptsize{$a\\in\\mathbb{Z}_{+}$} & \\cmark (Thm. \\ref{thm:EF:any}$^\\diamondsuit\\dagger$) & \\cmark (Thm. \\ref{thm:exponential}) & \\\\ \nany $n\\geq m$ & any $m$ & \\cmark (Thm. \\ref{thm:EF:any}$^\\diamondsuit\\dagger$) & \\cmark (Thm. \\ref{thm:prop:feasible:any}$^\\diamondsuit$) & \\\\ \\bottomrule\n\\end{tabular}\n\\caption{The overview of our results. $\\dagger$ assumes continuity of value density functions. $\\diamondsuit$ indicates that existence holds without contiguity requirement. Note that when $m> n$, no complete and feasible (non-overlapping) solution exists.}\n\\end{table}\n\n\\subsection{Related Work}\nIn recent years, cake cutting has received significant attention in artificial intelligence and economics as a metaphor for algorithmic approaches in achieving fairness in allocation of resources \\citep{procaccia2013cake,branzei2019communication,kurokawa2013cut,aziz2016discrete}. \nRecent studies have focused on the fair division of resources when agents have requirements over multiple resources that must be simultaneously allocated in order to carry out certain tasks (e.g. 
CPU and RAM) \\citep{Ghodsi:2011:DRF:1972457.1972490,gutman2012fair,parkes2015beyond}. \nThe most relevant work to ours is the envy-free multi-cake fair division that considers dividing multiple cakes among agents with linked preferences over the cakes. Here, agents can simultaneously benefit from all allocated pieces with no constraints. They show that envy-free divisions with only few cuts exist for two agents and many cakes, as well as three agents and two cakes~\\citep{cloutier2010two,lebert2013envy,nyman2020fair}. In contrast, a multi-layered cake cutting requires non-overlapping pieces. Thus, \\cite{cloutier2010two}'s generalized envy-freeness notion on multiple cakes does not immediately imply envy-freeness in our setting and no longer induces a feasible division.\n\n\\section{Our Model}\nOur setting includes a set of {\\em agents} denoted by $N=[n]$, a set of {\\em layers} denoted by $L=[m]$, where for a natural number $s \\in \\mathbb{N}$, $[s]=\\{1,2,\\ldots,s\\}$.\nGiven two real numbers $x,y \\in \\mathbb{R}$, we write $[x,y]=\\{\\, z \\in \\mathbb{R} \\mid x \\le z \\le y \\,\\}$ to denote an interval. We denote by $\\mathbb{R}_{+}$ (respectively $\\mathbb{Z}_{+}$) the set of non-negative reals (respectively, integers) including $0$. \nA {\\em piece} of cake is a finite set of disjoint subintervals of $[0,1]$. We say that a subinterval of $[0,1]$ is a {\\em contiguous piece} of cake. An {\\em $m$-layered cake} is denoted by $\\mathcal{C}=(C_j)_{j \\in L}$ where $C_j \\subseteq [0,1]$ is a contiguous piece for $j \\in L$. We refer to each $j \\in L$ as $j$-th {\\em layer} and $C_j$ as $j$-th {\\em layered cake}. \n\nEach agent $i$ is endowed with a non-negative {\\em integrable density function} $v_{ij}:C_j \\rightarrow \\mathbb{R}_{+}$. For a given piece of cake $X$ of $j$-th layer, $V_{ij}(X)$ denotes the value assigned to it by agent $i$, i.e., $V_{ij}(X)=\\sum_{I \\in X}\\int_{x \\in I} v_{ij}(x) dx$. These functions are assumed to be {\\em normalized} over layers: for each $i \\in N$, $\\sum_{j \\in L}V_{ij}(C_j)=1$. A {\\em layered piece} is a sequence $\\mathcal{X}=(X_j)_{j \\in L}$ of pieces of each layer $j \\in L$; a layered piece is said to be {\\em contiguous} if each $X_j$ is a contiguous piece of each layer. \nWe assume valuation functions are \\emph{additive} on layers and write $V_{i}(\\mathcal{X})=\\sum_{j \\in L}V_{ij}(X_{j})$.\n\nA layered contiguous piece is said to be {\\em non-overlapping} if no two pieces from different layers overlap, i.e, for any pair of distinct layers $j,j' \\in L$ and for any $I \\in X_j$ and $I' \\in X_{j'}$, $I \\cap I'=\\emptyset$. For two layered pieces $\\mathcal{X}$ and $\\mathcal{X}'$, we say that agent $i$ {\\em weakly prefers} $\\mathcal{X}$ to $\\mathcal{X}'$ if $V_i(\\mathcal{X}) \\geq V_i(\\mathcal{X}')$. \n\nA {\\em multi-allocation} $\\mathcal{A}=(\\mathcal{A}_1,\\mathcal{A}_2,\\ldots,\\mathcal{A}_n)$ is a partition of the $m$-layered cake $\\mathcal{C}$ where each $\\mathcal{A}_i=(A_{ij})_{j \\in L}$ is a layered piece of the cake allocated to agent $i$; we refer to each $\\mathcal{A}_i$ as a {\\em bundle} of $i$. For a multi-allocation $\\mathcal{A}$ and $i \\in N$, we write $V_{i}(\\mathcal{A}_i)=\\sum_{j \\in L}V_{ij}(A_{ij})$ to denote the value of agent $i$ for $\\mathcal{A}_i$. 
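\n\nTo make these definitions concrete, the following Python-style sketch\n(purely illustrative; the representation and the function names are\nours and are not part of the model) encodes a piece of each layer as a\nlist of subintervals, represents each density $v_{ij}$ as a\npiecewise-constant step function, evaluates $V_{ij}$ by summing\ninterval overlaps, and checks the non-overlapping condition.\n\\begin{verbatim}\ndef overlap(i1, i2):\n    # length of the intersection of two intervals (l, r)\n    return max(0.0, min(i1[1], i2[1]) - max(i1[0], i2[0]))\n\ndef value(density, piece):\n    # V_ij(X) for a piecewise-constant density [(l, r, height), ...]\n    return sum(h * overlap((l, r), I)\n               for (l, r, h) in density for I in piece)\n\ndef is_feasible(bundle):\n    # a layered piece is non-overlapping if intervals on distinct\n    # layers intersect in at most a point\n    return all(overlap(I, J) == 0.0\n               for j, Xj in enumerate(bundle)\n               for Xk in bundle[j + 1:]\n               for I in Xj for J in Xk)\n\n# two-layered example: the agent only values layer 1 on [0, 0.5]\n# and layer 2 on [0.5, 1], each with density 1\ndensity = [[(0.0, 0.5, 1.0)], [(0.5, 1.0, 1.0)]]\nbundle = [[(0.0, 0.5)], [(0.5, 1.0)]]\nprint(sum(value(d, X) for d, X in zip(density, bundle)),\n      is_feasible(bundle))   # prints: 1.0 True\n\\end{verbatim}\nThe same representation suffices for answering the evaluation queries\nof the computational model described below.\n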
A {\\em multi-allocation} $\\mathcal{A}$ is said to be \n\\begin{itemize}\n\\item {\\em contiguous} if each $\\mathcal{A}_i$ for $i \\in N$ is contiguous; \n\\item {\\em feasible} if each $\\mathcal{A}_i$ for $i \\in N$ is non-overlapping.\n\\end{itemize}\n\nWe focus on \\emph{complete} multi-allocations where the entire cake must be allocated.\nNotice that some layers may be disjoint (see Figure \\ref{fig:server}), and the number of agents must exceed the number of layers, i.e. $n \\geq m$; otherwise the multi-allocation will contain overlapping pieces. We illustrate our model in the following example. \n\n\\begin{example}[Resource sharing]\nSuppose that there are three meeting rooms $r_1$, $r_2$, and $r_3$ with different capacities, and three researchers Alice, Bob, and Charlie. The first room is available all day, the second and the third rooms are only available in the morning and late afternoon, respectively (see Fig.~\\ref{fig:server}). Each researcher has a preference over the access time to the shared rooms. For example, Alice wants to have a group meeting in the larger room in the morning and then have an individual meeting in the smaller one in the afternoon. \n\\end{example}\n\n\\begin{figure}[hbt]\n\\centering\n\\begin{tikzpicture}[scale=0.5, transform shape]\n\\draw[fill=red!10, thick] (0,1.3) rectangle (10,2.3);\n\\draw[fill=blue!10, thick] (0,0) rectangle (4,1); \n\\draw[thick] (5,-1.3) rectangle (10,-0.3);\n\n\\node at (-0.5,1.8) {\\huge $r_1$};\n\\node at (-0.5,0.5) {\\huge $r_2$};\n\\node at (-0.5,-0.8) {\\huge $r_3$};\n\n\\draw[thick,->] (-0.5,-2) -- (10.5,-2);\n\\node at (11.4,-2) {\\Huge time};\n\n\\end{tikzpicture}\n\\caption{Example of a multi-layered cake. There are three meeting rooms $r_1$, $r_2$, and $r_3$ with different capacities, shared among several research groups.}\n\\label{fig:server}\n\\end{figure}\n\n\\paragraph{Fairness.}\nA multi-allocation is said to be {\\em envy-free} if no agent {\\em envies} the others, i.e., $V_{i}(\\mathcal{A}_i) \\ge V_{i}(\\mathcal{A}_{i'})$ for any pair of agents $i,i' \\in N$. A multi-allocation is said to be {\\em proportional} if each agent gets his {\\em proportional fair share}, i.e., $V_{i}(\\mathcal{A}_i) \\ge \\frac{1}{n}$ for any $i \\in N$. The following implication, which is well-known for the standard setting, holds in our setting as well.\n\n\\begin{lemma}\\label{lem:propEF}\nAn envy-free complete multi-allocation satisfies proportionality. \n\\end{lemma}\n\\begin{proof}\nConsider an envy-free complete multi-allocation $\\mathcal{A}_i=(A_{ij})_{j \\in L}$ and an agent $i \\in N$. By envy-freeness, we have that $V_{i}(\\mathcal{A}_i) \\geq V_{i}(\\mathcal{A}_j)$ for any $j \\in N$. Summing over $j \\in N$, we get $V_{i}(\\mathcal{A}_i) \\geq \\frac{1}{n}\\sum_{j \\in N}V_{i}(\\mathcal{A}_j)=\\frac{1}{n}$ by additivity. \n\\end{proof}\n\n\\paragraph{The $m$-layered cuts.}\nIn order to cut the layered cake while satisfying the non-overlapping constraint, we define a particular approach for partitioning the entire cake into diagonal pieces. Consider the $m$-layered cake $\\mathcal{C}$ where $m$ is an even number. For each point $x$ of the interval $[0,1]$, we define\n\\begin{itemize}\n\\item $LR(x,\\mathcal{C})=(\\bigcup^{\\frac{m}{2}}_{j=1}C_j \\cap [0,x]) \\cup (\\bigcup^{m}_{j=\\frac{m}{2}+1}C_j \\cap [x,1])$; \n\\item $RL(x,\\mathcal{C})=(\\bigcup^{\\frac{m}{2}}_{j=1}C_j \\cap [x,1]) \\cup (\\bigcup^{m}_{j=\\frac{m}{2}+1}C_j \\cap [0,x])$. 
\n\\end{itemize}\n$LR(x,\\mathcal{C})$ consists of the top-half subintervals of points left of $x$ and the lower-half subintervals of points right of $x$; similarly, $RL(x,\\mathcal{C})$ consists of the top-half subintervals of points right of $x$ and the lower-half subintervals of points left of $x$ (Fig. \\ref{fig:LR:RL}). We abuse the notation and write $LR(x)=LR(x,\\mathcal{C})$ and $RL(x)=RL(x,\\mathcal{C})$ if $\\mathcal{C}$ is clear from the context. \n\n\\begin{figure}[ht]\n\\centering\n\\begin{tikzpicture}[scale=0.5, transform shape]\n\n\\draw[fill=blue!10, thick] (0,0) rectangle (6,2); \n\\draw[thick] (0,-2) rectangle (6,0);\n\\draw[dotted] (0,1) -- (6,1);\n\\draw[dotted] (0,-1) -- (6,-1);\n\n\\node[thick] at (0,2.5) {\\Large $x=0$};\n\\node at (3,-1) {\\huge $LR(x)$};\n\\node at (3,1) {\\huge $RL(x)$};\n\n\\node at (-0.8,1.5) {\\Large $j=1$};\n\\node at (-0.8,0.5) {\\Large $j=2$};\n\\node at (-0.8,-0.5) {\\Large $j=3$};\n\\node at (-0.8,-1.5) {\\Large $j=4$};\n\n\\begin{scope}[xshift=8cm]\n\n\\draw[thick] (0,0) rectangle (2,2);\n\\draw[fill=blue!10, thick] (0,-2) rectangle (2,0); \n\\draw[fill=blue!10, thick] (2,0) rectangle (6,2); \n\\draw[thick] (2,-2) rectangle (6,0);\n\\draw[dotted] (0,1) -- (6,1);\n\\draw[dotted] (0,-1) -- (6,-1);\n\n\\node[thick] at (2.0,2.5) {\\bf \\Large $x=\\frac{2}{5}$};\n\\node at (1,1) {\\huge $LR(x)$};\n\\node at (4,-1) {\\huge $LR(x)$};\n\\node at (4,1) {\\huge $RL(x)$};\n\\node at (1,-1) {\\huge $RL(x)$};\n\n\\node at (-0.8,1.5) {\\Large $j=1$};\n\\node at (-0.8,0.5) {\\Large $j=2$};\n\\node at (-0.8,-0.5) {\\Large $j=3$};\n\\node at (-0.8,-1.5) {\\Large $j=4$};\n\n\\end{scope}\n \n\\end{tikzpicture}\n\\caption{Examples of the partitions induced by $x=0$ and $x=\\frac{2}{5}$ for a {\\bf four-layered} cake.}\n\\label{fig:LR:RL}\n\\end{figure}\n\n\n\\paragraph{Computational model.}\nFollowing the standard {\\em Robertson-Webb Model} \\citep{Robertson98}, we introduce two types of queries: those for a cake on each layer (called a {\\em short knife}) and those for the entire cake (called a {\\em long knife}). \n\n\\paragraph{Short knife.} Short eval query: given an interval $[x,y]$ of the $j$-th layered cake $C_j$, $eval_j(i,x,y)$ asks agent $i$ for its value $[x,y]$, i.e., $V_{ij}([x,y])$.\n Short cut query: given a point $x$ and $r \\in [0,1]$, $cut_j(i,x,r)$ asks agent $i$ for the minimum point $y$ such that $V_{ij}([x,y])=r$.\n\n\\paragraph{Long knife.} Long eval query: given a point $x$, $eval(i,x)$ asks agent $i$ for its value $LR(x)$, i.e., $V_{i}(LR(x))$. \nLong cut query: given $r \\in [0,1]$, $cut(i,r)$ asks agent $i$ for the minimum point $x$ such that $V_{i}(LR(x))=r$ if such point $x$ exists.\n\n\n\\section{Existence of a switching point}\nWe start by showing the existence of a point $x$ that equally divides the entire cake into two pairs of diagonal pieces, both for the individuals and for the majority; these will serve as a fundamental property in our problem. \nWe say that $x \\in [0,1]$ is a {\\em switching point} for agent $i$ over a layered cake $\\mathcal{C}$ if $V_i(LR(x))=V_i(RL(x))$. \n\n\\begin{lemma}\\label{lem:switching}\nSuppose that the number $m$ of layers is even. Take any $i \\in N$. Let $r\\in \\mathbb{R}$ be such that $($i$)$ $V_i(LR(0)) \\geq r$ and $V_i(RL(0)) \\leq r$, or $($ii$)$ $V_i(LR(0)) \\leq r$ and $V_i(RL(0)) \\geq r$. \nThen, there exists a point $x \\in [0,1]$ such that $i$ values $LR(x)$ exactly at $r$, i.e. $V_i(LR(x)) = r$. In particular, a switching point for $i$ always exists. 
\n\\end{lemma}\n\\begin{proof}\nSuppose without loss of generality that $V_i(LR(0)) \\geq r$ and $V_i(RL(0)) \\leq r$. Consider the function $f(x)=V_i(LR(x))$ for $x \\in [0,1]$. Recall that $f(x)$ is a continuous function written as the sum of continuous functions:\n$\nf(x)=\\sum^{\\frac{m}{2}}_{j=1} V_{ij}(C_j \\cap [0,x])+ \\sum^{m}_{j=\\frac{m}{2}+1} V_{ij}(C_j \\cap [x,1]). \n$\nSince $f(0) \\geq r$ and $f(1) \\leq r$, there is a point $x \\in [0,1]$ with $f(x)=r$ by the intermediate value theorem, which proves the claim. Further, by taking $r=\\frac{1}{2}$, the point $x$ where $V_i(LR(x))=\\frac{1}{2}$ is a switching point for agent $i$. \n\\end{proof}\n\nWe will generalize the notion of a switching point from the individual level to the majority. For layered contiguous pieces $\\mathcal{I}$ and $\\mathcal{I}'$, we say that the majority weakly prefer $\\mathcal{I}$ to $\\mathcal{I}'$ (denoted by $\\mathcal{I} \\mathop{\\stackrel{m}{\\succeq}} \\mathcal{I}'$) if there exists $S \\subseteq N$ such that $|S| \\geq \\ceil{\\frac{n}{2}}$ and each $i \\in S$ weakly prefers $\\mathcal{I}$ to $\\mathcal{I}'$. We say that $x \\in [0,1]$ is a {\\em majority switching point} over $\\mathcal{C}$ if $LR(x) \\mathop{\\stackrel{m}{\\succeq}} RL(x)$ and $RL(x) \\mathop{\\stackrel{m}{\\succeq}} LR(x)$.\nThe following lemma guarantees the existence of a majority switching point, for any even number of layers and any number of agents. \n\n\\begin{lemma}\\label{lem:majority:switching}\nSuppose that the number of layers, $m$, is even. Then, there exists a majority switching point for any number $n \\geq m$ of agents. \n\\end{lemma}\n\\begin{proof}\nSuppose without loss of generality that the majority of agents weakly prefer $LR(0)$ to $RL(0)$. Since $LR(0)=RL(1)$ and $RL(0)=LR(1)$, this means that by the time when the long knife reaches the right-most point, i.e., $x=1$, the majority preference switches. \n\nFormally, consider the following set of points $x \\in [0,1]$ where the majority weakly prefer $LR(x)$ to $RL(x)$: \n\\[\nM:= \\{\\, x \\in [0,1] \\mid LR(x) \\mathop{\\stackrel{m}{\\succeq}} RL(x)\\,\\}. \n\\]\nWe will first show that $M$ is a compact set. Clearly, $M$ is bounded. To show that $M$ is closed, consider an infinite sequence as follows $X=\\{x_k\\}_{k=1,2, \\ldots} \\subseteq M$ that converges to $x^*$. For each $k=1,2,\\ldots$, we denote by $S_k$ the set of agents who weakly prefer $LR(x_k)$ to $RL(x_k)$; by definition, $|S_k| \\geq \\ceil{\\frac{n}{2}}$. Since there are finitely many subsets of agents, there is one subset $S_k \\subseteqq N$ that appears infinitely often; let $S^*$ be such subset and $\\{x^*_k\\}_{k=1,2, \\ldots}$ be an infinite sub-sequence of $X$ such that for each $k$, each agent in $S^*$ weakly prefers $LR(x^*_k)$ to $RL(x^*_k)$. Since the valuations $V_i$ for $i \\in S^*$ are continuous, each agent $i \\in S^*$ weakly prefers $LR(x^*)$ to $RL(x^*)$ at the limit $x^*$, which implies that $x^* \\in M$ and hence $M$ is closed. Now since $M$ is a compact set, the supremum $t^*=\\sup M$ belongs to $M$. By the maximality of $t^*$, at least $\\ceil{\\frac{n}{2}}$ agents weakly prefer $RL(t^*)$ to $LR(t^*)$. Since $t^* \\in M$, at least $\\ceil{\\frac{n}{2}}$ agents weakly prefer $LR(t^*)$ to $RL(t^*)$ as well. Thus, $t^*$ corresponds to a majority switching point. \n\\end{proof}\n\n\\section{Envy-free multi-layered cake cutting}\nFirst, we will look into the problem of obtaining complete envy-free multi-allocations, while satisfying non-overlapping constraints. 
When there is only one layer, it is known that an envy-free contiguous allocation exists for any number of agents under mild assumptions on agents' preferences \\citep{Stromquist1980,Su1999}.\nGiven the contiguity and feasibility constraints, the question is whether it is possible to guarantee an envy-free division in the multi-layered cake-cutting model. \n\n\\subsection{Two agents and two layers}\nWe answer the above question positively for a simple, yet important, case of two agents and two layers. The standard protocol that achieves envy-freeness for two agents is known as the {\\em cut-and-choose} protocol: Alice divides the entire cake into two pieces of equal value. Bob selects his preferred piece over the two pieces, leaving the remainder for Alice. \n\nWe extend this protocol to the multi-layered cake cutting using the notion of a switching point. Alice first divides the layered cake into two {\\em diagonal pieces}: one that includes the top left and lower right parts and another that includes the top right and lower left parts of the cake. Our version of the cut-and-choose protocol is specified as follows:\n\n\\vspace{5pt}\n\n\\noindent\\fbox{%\n\t\\parbox{0.985\\linewidth}\n\t {%\n\t\t\\textbf{Cut-and-choose protocol for $n=2$ agents} over a two-layered cake $\\mathcal{C}$: \\\\\n\t\t\\textit{Step 1.} Alice selects her switching point $x$ over $\\mathcal{C}$.\\\\\n\t\n\t\t\\textit{Step 2.} Bob chooses a weakly preferred layered contiguous piece among $LR(x)$ and $RL(x)$. \\\\\n\t\t\\textit{Step 3.} Alice receives the remaining piece.\n}%\n}\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[scale=0.7, transform shape]\n\\draw[thick] (0,0) rectangle (3,1);\n\\draw[fill=blue!10, thick] (3,0) rectangle (10,1); \n\\draw[fill=blue!10, thick] (0,-1) rectangle (3,0); \n\\draw[thick] (3,-1) rectangle (10,0); \n\\node at (3.0,1.3) {\\large $x$};\n\\node at (1.5,0.5) {\\large $LR(x)$};\n\\node at (6.5,-0.5) {\\large $LR(x)$};\n\\node at (1.5,-0.5) {\\large $RL(x)$};\n\\node at (6.5,0.5) {\\large $RL(x)$};\n\\end{tikzpicture}\n\\caption{Cut-and-Choose for two-layered cake}\n\\label{fig:EF:two}\n\\end{figure}\n\n\n\\begin{theorem}\\label{thm:EF:two}\nThe cut-and-choose protocol yields a complete envy-free multi-allocation that is feasible and contiguous for two agents and a two-layered cake using $O(1)$ number of long eval and cut queries.\n\\end{theorem}\n\\begin{proof}\nIt is immediate to see that the protocol returns a complete multi-allocation where each agent is assigned to a non-overlapping layered contiguous piece. The resulting allocation satisfies envy-freeness: Bob does not envy Alice since he chooses a preferred piece among $LR(x)$ and $RL(x)$. Alice does not envy Bob by the definition of a switching point. \n\\end{proof}\n\nAs we noted in Section $2$, the existence result for two agents does not extend beyond two layers: if there are at least three layers, there is no feasible multi-allocation that completely allocates the cake to two agents. \n\n\\subsection{Three agents and two layers}\nWe move on to the case of three agents and two layers. We will design a variant of Stromquist's protocol that achieves envy-freeness for one-layered cake \\citep{Stromquist1980}: The referee moves two knives: a short knife and a long knife. The short knife points to the point $y$ and moves from left to right over the top layer, gradually increasing the left-most top piece (denoted by $Y$). 
The long knife keeps pointing to the point $x$, which can partition the remaining cake, denoted by $\\mathcal{C}^{-y}$, into two diagonal pieces $LR(x)$ and $RL(x)$ in an envy-free manner. Each agent shouts when the left-most top piece $Y$ becomes at least as highly valuable as the preferred piece among $LR(x)$ and $RL(x)$. Some agent, say $s$, shouts eventually (before the left-most top piece becomes the top layer), assuming that there is at least one agent who weakly prefers the top layer to the bottom layer. We note that $x$ may be positioned to the left of $y$; see Figure \\ref{fig:EF:three} for some possibilities of the long knife's locations. \n\nWe will show that the above protocol works for a special case in which there are at most two types of preferences: in such cases, the majority switching points coincide with the switching points of an agent with the majority preference.\n\n\\begin{lemma}\\label{lem:switching:identical}\nSuppose that $m=2$, $n=3$, and there are two different agents $i,j \\in N$ with the same valuation $V$. Then, $x$ is a majority switching point over $\\mathcal{C}$ if and only if $x$ is a switching point for $i$. \n\\end{lemma}\n\\begin{proof}\nSuppose that agents $i,j \\in N$ have the same valuation $V$. Suppose that $x$ is a majority switching point over $\\mathcal{C}$. Then, at least two agents weakly prefer $LR(x)$ to $RL(x)$, meaning that at least one of the two agents $i$ and $j$ weakly prefers $LR(x)$ to $RL(x)$, which means that both agents weakly prefer $LR(x)$ to $RL(x)$ since $i$ and $j$'s valuations are identical. Similarly, both $i$ and $j$ weakly prefer $RL(x)$ to $LR(x)$. Thus, $x$ is a switching point for $i$. \nThe converse direction is immediate. \n\\end{proof}\n\nAn implication of the above lemma is that when performing Stromquist's protocol, one can point to a switching point of an individual, instead of a majority one. This allows the value of each piece to change continuously. For a given two-layered cake $\\mathcal{C}$, we write $\\mathcal{C}^{-y}=(C^{-y}_1,C_2)$ as a two-layered cake obtained from $\\mathcal{C}$ where the first segment $[0,y]$ of the top layer is removed, i.e., $C^{-y}_1 = C_1 \\setminus [0,y]$. For each majority switching point $x$ over $\\mathcal{C}^{-y}$, we select three different agents $\\ell(x)$, $m(x)$, and $r(x)$ as follows: \n\\begin{itemize}\n\\item $\\ell(x)$ is an agent who weakly prefers $LR(x,\\mathcal{C}^{-y})$ to $RL(x,\\mathcal{C}^{-y})$;\n\\item $m(x)$ is an agent who is indifferent between $LR(x,\\mathcal{C}^{-y})$ and $RL(x,\\mathcal{C}^{-y})$; and \n\\item $r(x)$ is an agent who weakly prefers $RL(x,\\mathcal{C}^{-y})$ to $LR(x,\\mathcal{C}^{-y})$. \n\\end{itemize}\n\n\n\\begin{theorem}\\label{thm:twolayers3agents}\nSuppose that $m=2$ and $n=3$. If there are two different agents with the same valuation, an envy-free complete multi-allocation that is feasible and contiguous exists. \n\\end{theorem}\n\\begin{proof}\nAssume w.l.o.g. that at least one agent prefers the top layer over the bottom layer. This means that such an agent weakly prefers the top layer to any of the pieces $LR(z,\\mathcal{C}^{-y})$ and $RL(z,\\mathcal{C}^{-y})$ when $y=1$. Suppose that $i \\in N$ is one of the two different agents with the same valuations. We design the following protocol for three agents over a two-layered cake: \n\n\\vspace{3pt}\n\\noindent\\fbox{%\n\t\\parbox{0.985\\linewidth}{%\n\t\t\\textbf{Moving-knife protocol for $n=3$ agents} over a two-layered cake $\\mathcal{C}$: w.l.o.g. 
assume that at least one agent weakly prefers the top layer $(j=1)$ over the bottom layer $(j=2)$\\\\\n\t\t\\textit{Step 1.} The referee continuously moves a short knife from the left-most point $(y=0)$ to the right-most point $(y=1)$ over the top layer, while continuously moving a long knife pointing to a switching point over $\\mathcal{C}^{-y}$ for $i$. \n Let $y$ be the position of the short knife and $Y$ be the top layer piece to its left. Let $x$ be the position of the long knife.\\\\\t\t\n\t\t\\textit{Step 2.} The referee stops moving the short knife when some agent $s$ {\\em shouts}, i.e., $Y$ becomes at least as highly valuable as the preferred piece among $LR(x,\\mathcal{C}^{-y})$ and $RL(x,\\mathcal{C}^{-y})$. \\\\\n\t\t\\textit{Step 3.} We allocate the shouter $s$ to the left-most top piece $Y$ and partitions the rest into $LR(x,\\mathcal{C}^{-y})$ and $RL(x,\\mathcal{C}^{-y})$. \n\\begin{itemize}\n\\item If $s=\\ell(x)$, then we allocate $LR(x,\\mathcal{C}^{-y})$ to $m(x)$ and $RL(x,\\mathcal{C}^{-y})$ to $r(x)$. \n\\item If $s=m(x)$, then we allocate $LR(x,\\mathcal{C}^{-y})$ to $\\ell(x)$ and $RL(x,\\mathcal{C}^{-y})$ to $r(x)$. \n\\item If $s=r(x)$, then we allocate $LR(x,\\mathcal{C}^{-y})$ to $\\ell(x)$ and $RL(x,\\mathcal{C}^{-y})$ to $m(x)$. \n\\end{itemize}\n\t}%\n}\n\\vspace{3pt}\n\nBy our assumption, some agent eventually shouts and thus the protocol returns an allocation $\\mathcal{A}$. Clearly, $\\mathcal{A}$ is feasible, contiguous, and complete. Also, it is easy to see that the shouter $s$ who receives a bundle $Y$ does not envy the other two agents. The agents $i \\neq s$ do not envy $s$ because the referee continuously moves both a short and a long knife. Finally, the agents $i \\neq s$ do not envy each other by the definition of a majority switching point and by Lemma \\ref{lem:switching:identical}. \n\\end{proof}\n\n\\begin{figure}[t]\n\\centering\n\\begin{tikzpicture}[scale=0.55, transform shape]\n\\draw[fill=red!10, thick] (0,0) rectangle (4,1);\n\\draw[fill=blue!10, thick] (0,-1) rectangle (3,0); \n\\draw[fill=blue!10, thick] (4,0) rectangle (10,1); \n\\draw[thick] (3,-1) rectangle (10,0);\n\n\\node at (2.0,0.5) {$Y$};\n\\node at (1.5,-0.5) {$RL(x)$};\n\\node at (6.5,0.5) {$RL(x)$};\n\\node at (6.0,-0.5) {$LR(x)$};\n\n\\node at (3.0,1.3) {$x$};\n\\node at (4.0,1.3) {$y$};\n\n\\begin{scope}[yshift=-3cm]\n\\draw[fill=red!10, thick] (0,0) rectangle (4,1);\n\\draw[thick] (4,0) rectangle (10,1);\n\\draw[fill=blue!10, thick] (6,0) rectangle (10,1); \n\\draw[fill=blue!10, thick] (0,-1) rectangle (6,0); \n\\draw[thick] (6,-1) rectangle (10,0);\n\n\\node at (2.0,0.5) {$Y$};\n\\node at (6.0,1.3) {$x$};\n\\node at (4.0,1.3) {$y$};\n\n\\node at (3.0,-0.5) {$RL(x)$};\n\\node at (8.0,0.5) {$RL(x)$};\n\\node at (5.0,0.5) {$LR(x)$};\n\\node at (8.0,-0.5) {$LR(x)$};\n\\end{scope}\n \n\\end{tikzpicture}\n\\caption{Moving knife protocol for three agents over a two-layered cake. Note that the position of $x$ may appear before $y$. 
\n}\n\\label{fig:EF:three}\n\\end{figure}\n\nIn the general case, the existence question of contiguous and feasible envy-free multi-allocations appears to be challenging due to the non-monotonicity of valuations over diagonal pieces.\\footnote{See Section \\ref{sec:discussion} for an extensive discussion.} \nIn the next subsection, we thus turn our attention to the case when the contiguity requirement is relaxed.\n\n\\subsection{Non-connected pieces}\nHaving seen that an envy-free multi-allocation that is both feasible and contiguous exists for a special case, we will consider the case when the contiguity requirement is dropped, namely, agents may receive a collection of sub-intervals of each layer. \nWe will show the existence of an envy-free multi-allocation that is feasible and uses at most poly$(n)$ number of cuts within each layer, assuming that each density function $v_{ij}$ is continuous. In what follows, we will reduce the problem to finding a `perfect' allocation of a one-layered cake. An allocation of a single-layered cake is called {\\em perfect} if each agent values every allocated piece exactly at his proportional fair share $\\frac{1}{n}$. It is known that such an allocation consisting of at most poly$(n)$ number of contiguous pieces exists whenever agents' value density functions are continuous \\citep{Alon1987}. It is not surprising that the existence of a perfect allocation implies the existence of an envy-free allocation over a single-layered cake. We show that this result also implies the existence of envy-free allocations over a multi-layered cake. \n\n\\begin{theorem}\\label{thm:EF:any}\nSuppose that $m \\leq n$ and each $v_{ij}$ for $i \\in N$ and $j \\in L$ is continuous. Then, an envy-free complete feasible multi-allocation $\\mathcal{A}=(\\mathcal{A}_i)_{i \\in N}$ where each piece $A_{ij}$ for agent $i \\in N$ and layer $j \\in L$ contains at most poly$(n)$ number of contiguous pieces exists. \n\\end{theorem}\n\\begin{proof}\nWe assume without loss of generality that each layer $C_j$ is the whole interval $[0,1]$ by just putting zero valuations on the part outside $C_j$. \nNow, we construct an instance of a single-layered cake $I=[0,1]$ as follows. First, create a dummy agent $i_j$ for each agent $i \\in N$ and each layer $j \\in L$. Each dummy agent $i_j$ has a valuation $V'_{i_j}$ determined by agent $i$'s valuation for the $j$-th layered cake, i.e., for each sub-interval $X \\subseteq I$, $V'_{i_j}(X)=V_{ij}(X)$. \nWe will show that a perfect allocation of the artificial cake among the $mn$ dummy agents induces an envy-free multi-allocation of the original layered cake. \n\nSpecifically, take a perfect allocation $(X_1,X_2,\\ldots,X_{mn})$ of this instance where each $X_t$ for $t \\in [mn]$ contains at most poly$(n)$ number of contiguous pieces, which is guaranteed to exist \\citep{Alon1987}. We then group each consecutive sequence of $m$ pieces together: namely, let $Y_{h}=\\bigcup^{hm}_{t=(h-1)m+1}X_t$ for each $h \\in [n]$. By the definition of a perfect allocation, we have\n\\begin{align}\\label{eq}\n&V_{ij}(Y_h)= \\frac{V_{ij}(C_j)}{n},\n\\end{align}\nfor any $i \\in N$, $j \\in L$, and $h \\in [n]$. Now, we partition each layer into $n$ pieces using the partition $(Y_1,Y_2,\\ldots,Y_n)$ of the artificial cake and allocate to the agents so that each agent receives exactly one piece $Y_h$ for each layer. Formally, consider a permutation $\\sigma_j:[n] \\rightarrow [n]$ where\n\\[\n\\sigma_j(i)=i+j-1 \\pmod{n}. 
\n\\]\nConstruct a multi-allocation $\\mathcal{A}=(\\mathcal{A}_i)_{i \\in N}$ where each agent $i \\in N$ is assigned to $A_{ij}=Y_{\\sigma_j(i)}$ for each layer $j \\in L$. By our construction, each $A_{ij}$ contains at most poly$(n)$ number of contiguous pieces. Also, each layered piece $\\mathcal{A}_i$ is non-overlapping as $(Y_1,Y_2,\\ldots,Y_n)$ is a partition of the interval $[0,1]$. By \\eqref{eq}, it is immediate to see that $\\mathcal{A}$ is envy-free. \n\\end{proof}\n\n\n\\section{Proportional multi-layered cake cutting}\nFocusing on a less demanding fairness notion, it turns out that a complete proportional multi-allocation that is both feasible and contiguous exists, for a wider class of instances, i.e., when the number $m$ of layers is some power of two, and the number $n$ of agents is at least $m$. Notably, we show that the problem can be decomposed into smaller instances when the number of agents is at least the number of layers and the number of layers is a power of two. Building on the {\\em base case} of two layers, our algorithm recursively calls the same algorithm to decide on how to allocate the cake in the sub-problems. We further show that if we relax the contiguity requirement, a proportional feasible multi-allocation can be computed efficiently whenever $m \\leq n$. \n\n\\subsection{Connected pieces}\nIn this subsection, we will show that a proportional complete multi-allocation exists for any $n \\geq m$ when $m$ is some power of $2$. We start by presenting two auxiliary lemmata. We define a {\\em merge} of two disjoint contiguous pieces $I_j$ and $I_{j'}$ of layers $j$ and $j'$ as replacing the $j$-th layered cake with the union $I_j \\cup I_{j'}$ and removing the $j'$-th layered cake. The {\\em merge} of a finite sequence of mutually disjoint contiguous pieces $(I_1,\\ldots,I_k)$ can be defined inductively: merge $(I_1,\\ldots,I_{k-1})$ and then apply the merge operation to the resulting outcome and $I_k$. \nNow we observe that if there are two disjoint layers, one can safely merge these layers and reduce the problem size. \n\n\\begin{lemma}\\label{lem:merge}\nSuppose that $C_j$ and $C_{j'}$ are two disjoint layers of a layered cake $\\mathcal{C}$, and $\\mathcal{C}'$ is obtained from $\\mathcal{C}$ by merging $C_j$ and $C_{j'}$. Then, each non-overlapping contiguous layered piece of $\\mathcal{C}'$ is a non-overlapping contiguous layered piece of the original cake $\\mathcal{C}$. \n\\end{lemma}\n\\begin{proof}\nSuppose that $C_j$ and $C_{j'}$ are two disjoint layers of a layered cake $\\mathcal{C}=(C_{t})_{t \\in L}$, and the layered cake $\\mathcal{C}'=(C'_{t})_{t \\in L \\setminus \\{j'\\}}$ is obtained from $\\mathcal{C}$ by merging $C_j$ and $C_{j'}$. Let $\\mathcal{X}'=(X'_{t})_{t \\in L \\setminus \\{j'\\}}$ be a non-overlapping contiguous piece of $\\mathcal{C}'$. Consider the corresponding layered piece $\\mathcal{X}=(X_{t})_{t \\in L}$ of the original cake $\\mathcal{C}$ where $X_t=X'_{t}$ for $t \\in L \\setminus \\{j,j'\\}$ and $X_t = X'_j \\cap C_{t}$ for $t \\in \\{j,j'\\}$. It is immediate to see that $\\mathcal{X}$ is non-overlapping and contiguous, since $C_j$ and $C_{j'}$ are disjoint. \n\\end{proof}\n\nThe above lemma can be generalized further: Let $\\mathcal{C}$ be a $2m$-layered cake and $x \\in [0,1]$. We define a {\\em merge} of $LR(x)=(S_j)_{j \\in L}$ by merging the pair $(S_j,S_{j+m})$ for each $j \\in [m]$. A {\\em merge} of $RL(x)$ can be defined analogously. Such an operation still preserves both feasibility and contiguity. 
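\nAs an illustration (and not part of the formal development), the following Python sketch constructs the diagonal pieces $LR(x)$ and $RL(x)$ of a $2m$-layered cake and performs the merge just described; for simplicity it assumes that every layer is the full interval $[0,1]$.\n\\begin{verbatim}
def diagonal_pieces(two_m, x):
    """LR(x) and RL(x) of a 2m-layered cake, one subinterval per layer."""
    m = two_m // 2
    LR = {j: (0.0, x) if j <= m else (x, 1.0) for j in range(1, two_m + 1)}
    RL = {j: (x, 1.0) if j <= m else (0.0, x) for j in range(1, two_m + 1)}
    return LR, RL

def merge_LR(two_m, x):
    """Merge the pair (S_j, S_{j+m}) of LR(x) for each j in [m]: the parts
    [0, x] and [x, 1] are disjoint, so every merged layer is the single
    contiguous interval [0, 1] of the resulting m-layered cake."""
    m = two_m // 2
    return {j: (0.0, 1.0) for j in range(1, m + 1)}

LR, RL = diagonal_pieces(4, 0.4)   # a four-layered cake cut at x = 2/5
print(LR)     # {1: (0.0, 0.4), 2: (0.0, 0.4), 3: (0.4, 1.0), 4: (0.4, 1.0)}
print(merge_LR(4, 0.4))            # {1: (0.0, 1.0), 2: (0.0, 1.0)}
\\end{verbatim}\n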
\n\n\\begin{corollary}\\label{cor:merge}\nLet $\\mathcal{C}$ be a $2m$-layered cake and $x \\in [0,1]$. Suppose that $\\mathcal{C}'$ is a $m$-layered cake obtained by merging $LR(x,\\mathcal{C})$ or $RL(x,\\mathcal{C})$. Then, each non-overlapping contiguous layered piece of $\\mathcal{C}'$ is a non-overlapping contiguous layered piece of the original cake $\\mathcal{C}$. \n\\end{corollary}\n\\begin{proof}\nSuppose that $\\mathcal{C}'$ is a $m$-layered cake obtained by merging $LR(x,\\mathcal{C})=(S_j)_{j \\in L}$ of a $2m$-layered cake $\\mathcal{C}$. By Lemma \\ref{lem:merge}, a non-overlapping contiguous layered piece of the cake obtained from each merge of the pair $(S_j,S_{j+m})$ for $j \\in [m]$ still corresponds to a non-overlapping and contiguous piece of the original cake. Thus, the claim holds. An analogous argument applies to the case when we merge $RL(x,\\mathcal{C})$. \n\\end{proof}\n\nWe are now ready to prove that a proportional complete multi-allocation exists for any $n=m$ when $m$ is some power of $2$. In essence, the existence of a majority switching point, as proved in Lemma \\ref{lem:majority:switching}, allows us to divide the problem into two instances. We will repeat this procedure until the number of layers of the subproblem becomes $2$, for which we know the existence of a proportional, feasible, contiguous multi-allocation by Theorem \\ref{thm:EF:two}. \n\n\\begin{theorem}\\label{thm:prop:base}\nA proportional complete multi-allocation that is feasible and contiguous exists, for any number $m$ of layers and any number $n= m$ of agents where $m=2^a$ for some $a \\in \\mathbb{Z}_{+}$. \n\\end{theorem}\n\\begin{proof}\nWe design the following recursive algorithm $\\mathcal{D}$ that takes a subset $N'$ of agents with $|N'| \\geq 2$, a $|L'|$-layered cake $\\mathcal{C}'$, and a valuation profile $(V_{i})_{i \\in N'}$, and returns a proportional complete multi-allocation of the cake to the agents which is feasible. Suppose that $m=n$. If $m=n=1$, then we allocate the entire cake to the single agent. If $m=n=2$, we run the cut-and-choose algorithm as described in the proof of Theorem \\ref{thm:EF:two}. \nNow consider the case when $m=n=2^a$ for some integers $a \\geq 1$. Then the algorithm finds a majority switching point $x$ over $\\mathcal{C}'$. We let $\\mathcal{I}_1=LR(x)$ and $\\mathcal{I}_2=RL(x)$. By definition of a majority switching point and the fact that $n$ is even, we can partition the set of agents $N'$ into $N_1$ and $N_2$ where $N_1$ is the set of agents who weakly prefer $\\mathcal{I}_1$ to $\\mathcal{I}_2$, $N_2$ be the set of agents who weakly prefer $\\mathcal{I}_2$ to $\\mathcal{I}_1$, and $|N_k| = \\frac{|N'|}{2}$ for each $k=1,2$. \nWe apply $\\mathcal{D}$ to the merge of $\\mathcal{I}_k$ with the agent set $N_k$ for each $k=1,2$, respectively. \n\nWe will show that by induction on the exponential $a$, that the complete multi-allocation $\\mathcal{A}$ returned by $\\mathcal{D}$ satisfies proportionality as well as feasibility and contiguity. This is clearly true when $m=n=2$ due to Lemma \\ref{lem:propEF} and Theorem \\ref{thm:EF:two}. Suppose that the claim holds for $m=n=2^a$ with $1 \\leq a \\leq k-1$; we will prove it for $a=k$. Suppose that the algorithm divides the input cake $\\mathcal{C}'$ via a majority switching point $x$ into $\\mathcal{I}_1=LR(x)$ and $\\mathcal{I}_2=RL(x)$. 
Suppose that $(N_1,N_2)$ is a partition of the agents where $N_1$ is the set of agents who weakly prefer $\\mathcal{I}_1$ to $\\mathcal{I}_2$, $N_2$ is the set of agents who weakly prefer $\\mathcal{I}_2$ to $\\mathcal{I}_1$, and $|N_k| = \\frac{|N'|}{2}$ for each $k=1,2$. Observe that each agent $i \\in N_1$ weakly prefers $\\mathcal{I}_1$ to $\\mathcal{I}_2$ and thus $V_i(\\mathcal{I}_1) \\geq \\frac{1}{2}V_i(\\mathcal{C}')$. Similarly, $V_i(\\mathcal{I}_2) \\geq \\frac{1}{2}V_i(\\mathcal{C}')$ for each $i \\in N_2$. Thus, by the induction hypothesis, each agent $i$ has value at least $\\frac{1}{|N'|}V_i(\\mathcal{C}')$ for its allocated piece $\\mathcal{A}_{i}$. Further, by Corollary \\ref{cor:merge}, each non-overlapping contiguous layered piece of the merge of $\\mathcal{I}_1$ (respectively, $\\mathcal{I}_2$) is a contiguous non-overlapping layered piece of the original cake $\\mathcal{C}$. By the induction hypothesis, the algorithm outputs a multi-allocation of each merge that is contiguous. Thus, the algorithm returns a proportional complete multi-allocation that is feasible and contiguous.\n\\end{proof}\n\nWe will generalize the above theorems to the case when the number of agents is strictly greater than the number of layers. Intuitively, when $n>m$, then there is at least one layer whose sub-piece can be `safely' allocated to some agent without violating the non-overlapping constraint. \n\n\\begin{theorem}\\label{thm:exponential}\nA proportional complete multi-allocation that is feasible and contiguous exists, for any number $m$ of layers and any number $n \\geq m$ of agents where $m=2^a$ for some $a \\in \\mathbb{Z}_{+}$. \n\\end{theorem}\n\\begin{proof}\nWe design the following recursive algorithm $\\mathcal{D}$ that takes a subset $N'$ of agents with $|N'| \\geq 2$, a $|L'|$-layered cake $\\mathcal{C}'$, and a valuation profile $(V_{i})_{i \\in N'}$, and returns a proportional complete multi-allocation of the layered cake to the agents which is feasible. For $n=m$, we apply the algorithm described in the proof of Theorem \\ref{thm:prop:base}. Suppose that $n>m$. The algorithm first identifies a layer $C_j$ whose entire valuation is at least $\\frac{1}{n}$ for some agent; assume w.l.o.g. that $j=1$. We move a knife from left to right over the top cake $C_1$ until some agent $i$ {\\em shouts}, i.e., agent $i$ finds the left contiguous piece $Y$ at least as highly valued as his proportional fair share $\\frac{1}{n}$. The algorithm $\\mathcal{D}$ then gives the piece to the shouter. To decide on the allocation of the remaining items, we apply $\\mathcal{D}$ to the reduced instance $(N' \\setminus \\{i\\},(C'_j)_{j \\in L},(V_{i'})_{i' \\in N' \\setminus \\{i\\}})$ where $C'_j=C_j \\setminus Y$ for $j=1$ and $C'_j=C_j$ for $j \\neq 1$. \n\nWe will prove by induction on $|N'|$ that the complete multi-allocation $\\mathcal{A}=(\\mathcal{A}_1,\\mathcal{A}_2,\\ldots,\\mathcal{A}_n)$ returned by $\\mathcal{D}$ satisfies proportionality as well as feasibility and contiguity. This is clearly true when $m=|N'|$, due to Theorem \\ref{thm:prop:base}. Suppose that the claim holds for $|N'|$ with $m \\leq |N'| \\leq k-1$; we will prove it for $|N'|=k$. Suppose agent $i$ is the shouter who gets the left contiguous piece $Y$. Clearly, agent $i$ receives her proportional share under $\\mathcal{A}$. Observe that all remaining agents have the value at least $\\frac{|N'|-1}{|N'|}V_i(\\mathcal{C}')$ for the remaining cake. 
Thus, by the induction hypothesis, each agent $i' \\neq i$ has value at least $\\frac{1}{|N'|}V_{i'}(\\mathcal{C}')$ for its allocated piece $\\mathcal{A}_{i'}$. The feasibility and contiguity of $\\mathcal{A}$ are immediate by the induction hypothesis. This completes the proof. \n\\end{proof}\n\n\n\\subsection{Non-connected pieces}\nSince envy-freeness implies proportionality, Theorem \\ref{thm:EF:any} in the previous section implies the existence of a proportional feasible multi-allocation when agents' value density functions are continuous. We strengthen this result by showing that such a desirable allocation exists for a more general case and by providing an efficient algorithm for finding one. \n\n\\begin{theorem}\\label{thm:prop:feasible}\nA proportional complete multi-allocation that is feasible exists when $m = n$ and can be computed using $O(nm^2)$ number of short eval queries and $O(nm)$ number of long eval and cut queries. Further, each bundle of the resulting multi-allocation includes at most two contiguous pieces within each layer. \n\\end{theorem}\n\nBelow, we show that each agent can divide the entire cake into $n$ equally valued layered pieces. A multi-allocation $\\mathcal{A}$ is {\\em equitable} if for each agent $i \\in N$, $V_{i}(\\mathcal{A}_i)=\\frac{1}{n}$. We design a recursive algorithm that iteratively finds two layers, one of which has value at most $\\frac{1}{m}$ and the other value at least $\\frac{1}{m}$, and removes a pair of diagonal pieces of value exactly $\\frac{1}{m}$ from the two layers.\n\n\\begin{lemma}\\label{lem:equitable}\nFor any number $m$ of layers and any number $n = m$ of agents with identical valuations, an equitable complete multi-allocation that is feasible and contiguous exists and can be found using $O(nm^2)$ number of short eval queries and $O(nm)$ number of long cut queries. \n\\end{lemma}\n\\begin{proof}\nWe denote by $V=V_i$ the valuation function for each agent $i \\in N$. \nConsider the following recursive algorithm $\\mathcal{D}$ that takes a subset $N'$ of agents with $|N'| \\geq 1$, a $|L'|$-layered cake $\\mathcal{C}'$, and a valuation profile $(V_{i})_{i \\in N'}$, and returns an equitable complete multi-allocation of the layered cake to the agents. When $|L'|=|N'|=1$, the algorithm allocates the entire cake to the single agent. \nSuppose that $|L'|=|N'| \\geq 2$. \nThe algorithm first finds a layer $j$ whose entire value is at most $\\frac{1}{m}$ and another layer $j'$ whose entire value is at least $\\frac{1}{m}$. The algorithm $\\mathcal{D}$ then finds a point $x \\in [0,1]$ where $V(S_{j} \\cup S_{j'})=\\frac{1}{m}$ for $S_j=C_j \\cap [0,x]$ and $S_{j'}=C_{j'} \\cap [x,1]$; such a point exists due to Lemma \\ref{lem:switching}. We allocate $S_{j} \\cup S_{j'}$ to one agent and apply $\\mathcal{D}$ to the remaining cake $\\mathcal{C}''$ with $|N'|-1$ agents, where $\\mathcal{C}''$ is obtained by merging the remaining $j$-th layered cake $C_j \\setminus S_j$ and the $j'$-th layered cake $C_{j'} \\setminus S_{j'}$. The correctness of the algorithm as well as the bound on the query complexity are immediate. 
\n\\end{proof}\n\n\\begin{figure*}[hbt]\n\\centering\n\\begin{tikzpicture}[scale=0.6, transform shape]\n\n\\draw[thick] (0,0) rectangle (3,1);\n\\draw[fill=red!10, thick] (3,0) rectangle (6,1);\n\\draw[fill=blue!10, thick] (0,-1) rectangle (2,0); \n\\draw[fill=red!10, thick] (2,-1) rectangle (3,0);\n\\draw[thick] (3,-1) rectangle (6,0);\n\\draw[fill=red!10, thick] (0,-2) rectangle (2,-1);\n\\draw[fill=blue!10, thick] (2,-2) rectangle (6,-1); \n\n\n\\node at (4.5,0.5) {$I_{11}$};\n\\node at (2.5,-0.5) {$I_{12}$};\n\\node at (1,-1.5) {$I_{13}$};\n\n\\node at (1.5,0.5) {$I_{21}$};\n\\node at (4.5,-0.5) {$I_{22}$};\n\n\\node at (1,-0.5) {$I_{32}$};\n\\node at (4,-1.5) {$I_{33}$};\n\n\\draw[ultra thick,->] (6.5,-0.5)--(7.5,-0.5);\n\n\\begin{scope}[xshift=8cm,yshift=-0.5cm]\n\\node[fill=red!10,draw, circle](I1) at (1,1) {$\\mathcal{I}_1$};\n\\node[draw, circle](I2) at (3,1) {$\\mathcal{I}_2$};\n\\node[fill=blue!10,draw, circle](I3) at (5,1) {$\\mathcal{I}_3$};\n\n\\node[fill=gray!10,draw, circle](a1) at (1,-1) {$1$};\n\\node[fill=gray!10,draw, circle](a2) at (3,-1) {$2$};\n\\node[fill=gray!10,draw, circle](a3) at (5,-1) {$3$};\n\n\\draw[-,red, >=latex,thick] (I1)--(a1);\n\\draw[-, >=latex,thick] (I2)--(a1);\n\\draw[-,>=latex,thick] (I3)--(a1);\n\\draw[-, >=latex,thick] (I2)--(a2);\n\\draw[-, >=latex,thick] (I2)--(a3);\n\n\\draw[ultra thick,->] (6.5,0)--(7.5,0);\n\\end{scope}\n\n\\begin{scope}[xshift=16cm,yshift=0.5cm]\n\\draw[thick] (0,-1) rectangle (3,0);\n\n\\draw[fill=blue!10, thick] (0,-2) rectangle (2,-1); \n\\draw[thick] (3,-1) rectangle (6,0);\n\\draw[fill=blue!10, thick] (2,-2) rectangle (6,-1); \n\n\\node at (1.5,-0.5) {$I_{21}$};\n\\node at (4.5,-0.5) {$I_{22}$};\n\n\\node at (1,-1.5) {$I_{32}$};\n\\node at (4,-1.5) {$I_{33}$};\n\n\\draw[thick,dotted] (4,0.5)--(4,-2.5);\n\\end{scope}\n \n\\end{tikzpicture}\n\\caption{Protocol for proportionality for three agents and three layers. Agent $1$ divides the entire cake into three equally valued layered pieces $\\mathcal{I}_1$, $\\mathcal{I}_2$, and $\\mathcal{I}_3$ (the left-most picture). Here, $\\mathcal{I}_i=(I_{ij})_{j=1,2,3}$ for each $i =1,2,3$ where $I_{23}=I_{31}=\\emptyset$. In the middle picture, agent $1$ is adjacent to every piece, meaning that he has value at least proportional fair share for every piece; on the other hand, the other agents are adjacent to the second piece only. The maximum envy-free matching is an edge between $\\mathcal{I}_1$ and agent $1$ (red edge), so the algorithm allocates $\\mathcal{I}_1$ to agent $1$ and merges $\\mathcal{I}_2$, and $\\mathcal{I}_3$. Then it applies the cut-and-choose among the remaining agents (the right-most picture).}\n\\label{fig:PROP:three}\n\\end{figure*}\n\nEquipped with Lemma \\ref{lem:equitable}, we will prove Theorem \\ref{thm:prop:feasible} by recursively computing an {\\em envy-free matching} between $n$ agents and $n$ layered pieces where one agent has proportional fair share for every piece. Specifically, given a bipartite graph $G$ with one side being the set of agents and the other side being the set of items, an envy-free matching $M$ of $G$ is a matching where no unmatched agent {\\em envies} some matched agent, i.e., no unmatched agent is adjacent to any matched item in $G$. The problem of finding an envy-free matching of maximum size can be solved in polynomial time \\citep{EladErel,GAN2019}. 
Further, using a Hall-type condition, it can easily be shown that if there is one agent who is adjacent to every item and the number of agents is at most the number of items, then there is a non-empty envy-free matching (Corollary $1.4$ $($c$)$ of \\citet{EladErel}). \n\n\\begin{proof}[Proof of Thm. \\ref{thm:prop:feasible}]\nIn order to obtain a proportional feasible multi-allocation, we will recursively compute a non-empty envy-free matching:\nIn each iteration, let one agent partition the cake into equally valued layered pieces, find a maximum envy-free matching between agents and pieces, and assign the matched agents to the matched pieces. By Corollary $1.4$ $($c$)$ of \\citet{EladErel}, the envy-free matching computed at each step is non-empty; thus, at least one agent is matched and we will apply the same procedure to the unmatched agents and pieces. The formal description is given as follows. See Figure \\ref{fig:PROP:three} for an illustration for $m=n=3$. \n\n\\vspace{5pt}\n\\noindent\\fbox{%\n\t\\parbox{0.985\\linewidth}{%\n\t\t\\textbf{A protocol for proportional feasible multi-allocations for $n = m$ agents} over an $m$-layered cake $\\mathcal{C}$: \\\\\n\t\t\\textit{Step 1.} One agent partitions the cake into $n$ non-overlapping layered contiguous pieces $\\mathcal{I}_1,\\mathcal{I}_2, \\ldots, \\mathcal{I}_n$ which she considers of equal value, using the algorithm in the proof of Lemma \\ref{lem:equitable}.\\\\\n\t\t\\textit{Step 2.} Construct a bipartite graph $G$ with the agents on one side and the pieces on the other side, where there is an edge between agent $i$ and $\\mathcal{I}_h$ if agent $i$ has value at least his proportional fair share for $\\mathcal{I}_h$. \\\\\n\t\t\\textit{Step 3.} Compute a maximum-size envy-free matching $M$ of $G$. Assign each matched agent to the corresponding piece. \\\\\n\t\t\\textit{Step 4.} For each unmatched piece $\\mathcal{I}_h$, merge all the disjoint contiguous pieces in $\\mathcal{I}_h$, and create an $(m-\\ell)$-layered cake $\\mathcal{C}'$ consisting of each merge of $\\mathcal{I}_h$, where $\\ell$ is the number of matched pieces. Apply the same protocol to $\\mathcal{C}'$ among the remaining unmatched agents. \n\t}%\n}\n\\vspace{5pt}\n\nWe will show by induction on the number of agents $n=m$ that the resulting multi-allocation $\\mathcal{A}$ is proportional and feasible. The claim clearly holds for $n=m=1$. Suppose that the claim holds for $n$ with $m=n \\leq k-1$; we will prove it for $m=n=k$. We will first show that the resulting multi-allocation $\\mathcal{A}$ is proportional. Clearly, each agent who gets matched receives at least his proportional fair share. Further, each agent $i$ among the remaining $n-\\ell$ unmatched agents has value less than $\\frac{1}{n}$ for each matched piece and thus value at least $1-\\frac{\\ell}{n}$ for the remaining unmatched pieces; thus, by the induction hypothesis, $V_i(\\mathcal{A}_i) \\geq \\frac{1}{n-\\ell}\\left(1-\\frac{\\ell}{n}\\right)=\\frac{1}{n}$. It can be easily verified that each bundle $\\mathcal{A}_i$ is non-overlapping. Each iteration requires $O(m^2)$ number of short eval queries and $O(m)$ number of long cut queries for the cutter, and $O(m^2)$ number of short eval queries for each agent. Further, the number of iterations is at most $n$, which proves the bound on the query complexity. \nThis completes the proof.\n\\end{proof}\n\nSimilarly to the proof of Theorem \\ref{thm:exponential}, we can generalize the above theorem to the case when the number of agents is strictly greater than the number of layers. 
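\nThe following Python sketch illustrates one round of Steps 2--3 of the protocol above on an instance of the form depicted in Figure \\ref{fig:PROP:three}. It is meant for small $n$ only: it brute-forces a maximum envy-free matching instead of using the polynomial-time algorithm of \\citet{EladErel}, and the numerical values of the pieces are hypothetical.\n\\begin{verbatim}
from itertools import combinations, permutations

def envy_free_matching(adj):
    """Brute-force a maximum envy-free matching for small n.
    adj[i][h] is True iff agent i values piece h at least 1/n.
    Requirement: no unmatched agent is adjacent to a matched piece."""
    n = len(adj)
    for k in range(n, 0, -1):                      # largest size first
        for agents in combinations(range(n), k):
            for pieces in permutations(range(n), k):
                match = dict(zip(agents, pieces))
                edges_ok = all(adj[i][match[i]] for i in agents)
                unmatched = [i for i in range(n) if i not in match]
                no_envy = not any(adj[i][h] for i in unmatched
                                  for h in match.values())
                if edges_ok and no_envy:
                    return match
    return {}

# Hypothetical values V_i(I_h): agent 0 is the cutter (each piece worth
# exactly 1/3 to her); agents 1 and 2 value only the second piece at >= 1/3.
values = [[1/3, 1/3, 1/3],
          [0.10, 0.70, 0.20],
          [0.20, 0.60, 0.20]]
adj = [[v >= 1/3 - 1e-9 for v in row] for row in values]
print(envy_free_matching(adj))     # {0: 0}: only the cutter is matched
\\end{verbatim}\nIn this instance only the cutter is matched in the first round; the two unmatched pieces are then merged and the procedure recurses on the remaining two agents, exactly as in the right-most picture of Figure \\ref{fig:PROP:three}.\n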
\n\n\\begin{theorem}\\label{thm:prop:feasible:any}\nA proportional complete multi-allocation that is feasible exists and can be computed using $O(nm^2)$ number of short eval queries and $O(nm)$ number of long cut queries, for any number $m$ of layers and any number $n \\geq m$ of agents. \n\\end{theorem}\n\nIt remains open whether a proportional contiguous multi-allocation exists when the number of layers is three. A part of the reason is that our algorithm for finding an equitable multi-allocation (Lemma \\ref{lem:equitable}) may not return a `balanced' partition: The number of pieces contained in each layered piece may not be the same when the number of layers is odd. For example, one layered piece may contain pieces from three different layers while the other two parts may contain pieces from two different layers, as depicted in Figure \\ref{fig:PROP:three}. \n\n\n\\section{Discussion} \\label{sec:discussion}\nWe initiated the study of multi-layered cake cutting, demonstrating the rich and intriguing mathematical feature of the problem. There are several exciting questions left open for future work. Below, we list some of them. \n\n\\begin{itemize}\n\\item {\\bf Existence of fair allocations}:\nWe have seen that an envy-free contiguous and feasible multi-allocation of a two-layered cake exists for two or three agents with at most two types of preferences. An interesting open problem is whether such allocation also exists for any number of agents over a two-layered cake. One might expect that the Simmon-Su's technique \\citep{Su1999} using Sperner's Lemma can be adopted to our setting by considering all possible diagonal pieces. However, this approach may not work because multi-layered cake-cutting necessarily exhibits non-monotonicity in that the value of a pair of diagonal pieces may decrease when the knife moves from left to right. For proportionality, one intriguing future direction is extending our existence result for $m=2^a$ to any $m$. This requires careful consideration of contiguity and feasibility, which are often at odds with completeness.\n\n\\item {\\bf Query complexity of fair allocations}:\nThe query complexity of finding an envy-free feasible multi-allocation is open in the multi-layered cake-cutting problem. In particular, it would be challenging to extend the celebrated result of \\citet{aziz2016discrete} -- who showed the existence of a bounded protocol for computing an envy-free allocation of a single-layered cake with any number $n$ of agents -- to our setting. We expect that a direct translation may not work, due to the intricate nature of the feasibility constraint. With respect to proportionality, our existence proof implies that if there is a way to compute a majority switching point efficiently, one can compute a proportional contiguous feasible multi-allocation for special cases when the number of layers is a power of two. It is open whether such cutting point can be computed using a bounded number of queries.\n\n\\item {\\bf Approximate fairness}: \nIn the presence of contiguity requirement, it is known that no finite protocol computes an envy-free allocation even for three agents and a single-layered cake \\citep{Stromquist2008}. However, several positive results are known when the aim is to approximately bound the envy between agents \\citep{Deng2012,Goldberg2020,Arunachaleswaran19}.\nPursuing a similar direction in the context of multi-layered cake cutting would be an interesting research topic. 
\n\n\\item {\\bf Efficiency requirement}: \nBesides fairness criteria, another basic desideratum is economic efficiency. In the context of a single-layered cake cutting, several works studied the relation between fairness and efficiency \\citep{CohlerLPP11,Bei12, AumannDomb2010, AumannDH13}. \nThe question of what welfare guarantee can be achieved together with fairness is open in our model. In particular, it would be interesting to investigate the compatibility of the fairness notions with an efficiency requirement, under feasibility constraints. \n\n\n\\end{itemize}\n\n\\section*{Acknowledgement}\nHadi Hosseini was supported by the National Science Foundation (grant IIS-1850076).\nAyumi Igarashi was supported by the KAKENHI Grant-in-Aid for JSPS Fellows no. 18J00997 and JST, ACT-X. The authors would like to thank Yuki Tamura for introducing the problem to us and for the fruitful discussions. The authors also acknowledge the helpful comments by the GAIW and IJCAI reviewers. \nWe are grateful to an anonymous GAIW reviewer for the proof of the existence of an envy-free multi-allocation in the general case. \n\n\n\\bibliographystyle{named}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt is well known that volumetric photonic crystals have the efficacy of controlling propagating waves, e.g.~\\cite{joannopoulos2008molding}, both in three and two dimensions, e.g. \\cite{Kraus}. \nRecently, in parallel with thorough and general investigations of time-varying electromagnetic systems (see, e.g., Refs.~\\cite{XUCHENPRAPPLIED, SAJJADarXiv, groupEnergy}), three-dimensional (volumetric) temporal photonic crystals whose material properties are uniform in space but changing in time have attracted significant attention~\\cite{zurita2009reflection}. This is due to the intriguing effects that they have on propagating plane waves. \n\nIt appears that it is fundamentally important to explore properties of temporal metasurfaces which are 2D material sheets or boundaries with time-varying properties. In this case, the waves are surface waves, bound to the sheet or boundary. The eigenmode problem for waves along spatially uniform but time-modulated boundaries is one of the important canonical problems in electromagnetics of time-varying structures.\n\nIn this work, we solve this problem and discuss dispersion properties of surface waves over time-modulated reactive boundaries. As a simple canonical case, we consider a planar infinite boundary that is modeled by a surface capacitance which is spatially uniform over the surface and temporally modulated by an arbitrary periodical function. As a particular realization, one can consider, for example, a high-impedance surface of the type introduced by D. Sievenpiper \\cite{sievenpiper1999high}, where the capacitance between patches is modulated using varactors.\n\n\nIn addition to derivation of the dispersion relation for surface waves over such time-varying boundary, we give numerical examples of dispersion plots. Also, we explain the conditions for appearing stop bands for propagation constants and reveal phenomena of exponential field growth, reflection amplification, and radiation of space waves from those time-modulated boundaries.\n\n\n\n\\section{Theory}\n\n\\begin{figure}\n\\includegraphics[width=.95\\linewidth]{geom.pdf}\n\\caption{Geometry of the problem: A TE-polarized surface wave over a time-modulated capacitive boundary. 
}\n\\label{fig:geom}\n\\end{figure}\n\nLet us consider a spatially uniform and time-varying reactive boundary. Here, as an example, we assume a time-varying capacitive one which is represented by $C(t)$. As is well known, such a boundary supports surface waves which have the transverse-electric (TE) polarization with respect to the propagation direction. In other words, in free space, the electric field is perpendicular to the propagation plane, as shown in Fig.~\\ref{fig:geom}. For a stationary boundary, the dispersion equation defines a relation between the frequency $\\omega$ and the propagation constant along the surface $\\beta$. Since the capacitive boundary periodically changes in time, the eigenmode $\\beta$ will contain components at frequencies $\\omega_n=\\omega+n\\omega_{\\rm{M}}$, $n=0,\\pm 1,\\pm 2,\\dots$. Here, $\\omega_{\\rm{M}}$ is the fundamental modulation frequency. The electric field is expressed as \n\\begin{equation}\n\\mathbf{E}=\\sum_{n=-\\infty}^{+\\infty}E_n\\exp(j\\omega_nt)\\mathbf{a}_y,\n\\end{equation}\nin which \n\\begin{equation}\nE_n=A_n\\exp(j\\beta z)\\exp(-\\alpha_nx).\n\\end{equation}\nHere, $A_n$ is the amplitude of the wave corresponding to each frequency harmonic, and $\\alpha_n$ denotes the attenuation constant along the normal direction, for each harmonic. The plane-wave dispersion equation for free space above the boundary sets the following relation for each harmonic: \n\\begin{equation}\n\\beta^2=\\alpha_n^2+\\omega_n^2\\epsilon_0\\mu_0 .\n\\label{eq:betaalpha}\n\\end{equation}\nIt is worth mentioning that since we investigate surface waves, $\\alpha_n$ must be a real value.\n\nSimilarly to the electric field, the tangential component of the magnetic field which is directed along the $z$-axis is given by \n\\begin{equation}\n\\mathbf{H}_{\\rm{t}}=\\sum_{n=-\\infty}^{+\\infty}H_n\\exp(j\\omega_nt)\\mathbf{a}_z,\n\\end{equation}\nwhere\n\\begin{equation}\nH_n=B_n\\exp(j\\beta z)\\exp(-\\alpha_nx).\n\\end{equation}\nThe Maxwell equations relate the amplitudes of the electric field and the tangential component of the magnetic field to each other. By applying Eq.~\\eqref{eq:betaalpha} and doing some algebraic manipulations, we obtain a matrix relation $\\mathbf{M}\\cdot \\mathbf{A}=\\mathbf{B}$. Here, $\\mathbf{M}$ is a matrix that has $2N+1$ rows and columns, and it is a function of $\\beta$ and $\\omega_n$. The matrices $\\mathbf{A}$ and $\\mathbf{B}$ have only one column and $2N+1$ rows, and they are representing the amplitudes. \n\nIn analogy with the circuit theory, where we explicitly express the relation between the electric current flowing through the time-varying capacitance $i(t)$ and the voltage over it $v(t)$ as \n\\begin{equation}\n\\int i(t)dt=C(t)v(t),\n\\end{equation}\nwe simply write the relation between the tangential components of the electric and magnetic fields. In fact, this is the boundary condition in the dynamic scenario. By imposing the boundary condition, we obtain another matrix equation as $\\mathbf{Y}\\cdot \\mathbf{A}=\\mathbf{B}$. The matrix $\\mathbf{Y}$ is a function of $\\omega_n$ and the Fourier coefficients of the periodic function $C(t)$ which is expressed by the Fourier series in the exponential form. Similar modeling method has been used in our recent paper \\cite{wang2020nonreciprocity}.\n\nNow, we have two matrix equations $\\mathbf{M}\\cdot \\mathbf{A}=\\mathbf{B}$ and\\break $\\mathbf{Y}\\cdot \\mathbf{A}=\\mathbf{B}$. Therefore, we conclude that $\\big[\\mathbf{Y}-\\mathbf{M}\\big]\\cdot\\mathbf{A}=0$. 
The determinant of the whole matrix in the square brackets must be zero in order to allow nonzero solutions for the electric field. Consequently, relation \n\\begin{equation}\n\\det\\Big[\\mathbf{Y}-\\mathbf{M}\\Big]=0 \\label{eq: dispersion}\n\\end{equation}\ndetermines the dispersion of the surface waves above time-varying capacitive boundaries. In the following, we give some particular examples and numerically investigate the dispersion curves.\n\n\n\\section{Numerical Examples and Discussion}\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{dispersion_one_tune.pdf}\n\t\\caption{Dispersion plot for modulation with one harmonic tune. The figure is plotted for $C_0=1$~pF and $\\omega_{\\rm M}=3$~GHz. \n\t} \n\t\\label{fig: dispersion_one_tunes}\n\\end{figure}\n\\begin{figure}[!h]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{dispersion_two_tune.pdf}\n\t\\caption{Dispersion plot for modulation with two harmonic tunes: $C(t)=C_0[1+0.3\\cos(\\omega_{\\rm M}t)+0.2\\cos(2\\omega_{\\rm M}t)]$. \n\t} \n\t\\label{fig: dispersion_two_tunes}\n\\end{figure}\n\n\n\nFirst, we consider a capacitive boundary that is modulated in time harmonically, assuming, as an example, $C(t)=C_0[1+0.3\\cos(\\omega_{\\rm M}t)]$. \nTo obtain the dispersion diagram, we specify the value of $\\beta$, and solve the dispersion equation Eq.~(\\ref{eq: dispersion}) for the corresponding eigenfrequencies. The corresponding dispersion curves are shown in Fig.~\\ref{fig: dispersion_one_tunes}. One can see that for a fixed value of the propagation constant $\\beta$, there are many solutions for the eigenfrequencies $\\omega$, meaning that an eigenmode contains components at many frequencies. This property means that we can excite the mode by external sources at many possible frequencies. \nThe excitation frequency can even be above the light line (purple dot) which excites higher-order frequency harmonics below the light line (green dot). This indicates that, in time-varying structures, it is possible to launch surface waves with an incident plane wave, as is also reported in a recent work \\cite{galiffi2020wood}.\nMost importantly, temporal modulation opens up a band gap in $k$-space, which is a dual phenomenon of spatial periodic structures where the band gap is at the frequency axis. Similar properties of bulk media have been reported in \\cite{zurita2009reflection,lustig2018topological}.\nIncreasing the modulation depth, one can widen the band gap. Interestingly, the number of band gaps corresponds to the number of modulation tones. Figure~\\ref{fig: dispersion_two_tunes} shows that by adding a second modulation tone, a second band gap opens up. Positions and widths of the gaps can be tuned by varying the modulation spectrum. These band gaps provide great possibilities to control the propagation of surface waves on a metasurface plane. \n\n\\begin{figure}[!h]\n\\includegraphics[width=.95\\linewidth]{amplification.pdf}\n\\caption{(a) Surface wave on a time-invariant boundary. (b) Amplified surface wave on a time-varying boundary ($\\Delta t=5T_0$). }\n\\label{fig:amplification}\n\\end{figure}\n\nNext, we examine the wave behavior when the excited surface modes are inside a band gap. For a specified wavenumber $\\beta$ in the band gap, the solved eigenfrequencies are complex numbers $\\omega=\\omega^\\prime\\pm j\\omega^{\\prime\\prime}$, meaning that the wave can be exponentially attenuated or amplified in time. 
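\nTo illustrate how Eq.~(\\ref{eq: dispersion}) can be evaluated numerically, the following Python sketch assembles truncated $\\mathbf{M}$ and $\\mathbf{Y}$ matrices for $C(t)=C_0[1+0.3\\cos(\\omega_{\\rm M}t)]$ and scans real frequencies for minima of $|\\det[\\mathbf{Y}-\\mathbf{M}]|$ at a fixed $\\beta$; such a real-frequency scan only locates the branches with real eigenfrequencies, while the complex solutions inside a band gap would require a root search in the complex $\\omega$ plane. The sign conventions, truncation order, and all numerical values (including the chosen $\\beta$ and the reading of $\\omega_{\\rm M}=3$~GHz as $2\\pi\\times 3\\cdot 10^{9}$~rad/s) are illustrative assumptions rather than the exact settings used for the figures.\n\\begin{verbatim}
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
c_light = 1.0 / np.sqrt(eps0 * mu0)

def det_YM(w, beta, C0=1e-12, md=0.3, wM=2*np.pi*3e9, N=5):
    """det[Y - M] for 2N+1 harmonics w_n = w + n*wM at fixed beta.
    Assumed conventions: H_n = alpha_n/(j*w_n*mu0) * E_n in free space and
    -H = d/dt[C(t) E] on the boundary, i.e. Y[n, k] = -j*w_n*c_{n-k}."""
    harmonics = np.arange(-N, N + 1)
    wn = w + harmonics * wM
    alpha = np.sqrt(beta**2 - (wn / c_light)**2 + 0j)  # branch choice matters
    M = np.diag(alpha / (1j * wn * mu0))
    Y = np.zeros((2*N + 1, 2*N + 1), dtype=complex)
    for i, wi in enumerate(wn):
        Y[i, i] = -1j * wi * C0                      # c_0 = C0
        if i > 0:
            Y[i, i - 1] = -1j * wi * md * C0 / 2     # c_{+1} = 0.15*C0
        if i < 2*N:
            Y[i, i + 1] = -1j * wi * md * C0 / 2     # c_{-1} = 0.15*C0
    return np.linalg.det(Y - M)

beta = 60.0                                 # example propagation constant, rad/m
w_scan = 2 * np.pi * np.linspace(0.1e9, 20e9, 4000)
vals = np.abs([det_YM(w, beta) for w in w_scan])
roots = [w_scan[k] for k in range(1, len(vals) - 1)
         if vals[k] < vals[k - 1] and vals[k] < vals[k + 1]]
print([f"{w/(2*np.pi*1e9):.2f} GHz" for w in roots[:6]])
\\end{verbatim}\nMinima of the scanned determinant then trace out dispersion branches of the kind plotted in Fig.~\\ref{fig: dispersion_one_tunes}.\n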
Next, we use COMSOL to numerically simulate the time-varying structure in this regime. \nFigure~\\ref{fig:amplification}(a) illustrates a constant-amplitude surface wave propagating along an unmodulated capacitive boundary. Then, the temporal modulation of the surface reactance is suddenly switched on. Evidently, as it can be seen in Fig.~\\ref{fig:amplification}(b, ) the surface mode is significantly amplified after modulating with a duration of $\\Delta t=5T_0$, where $T_0$ is the time period at the excitation frequency. In addition, there are higher-order harmonics generated in free space, forming a standing wave pattern along the surface, due to symmetry of the structure. \n\n\n\n\n\\section{Conclusions}\nHere, we have presented the dispersion equation and example dispersion plots for a temporally modulated electromagnetic boundary. The results show that time modulation induces band gaps in the two-dimensional $k$-space, which provides opportunities to control surface wave propagation. In the presentation, we will discuss in detail interesting properties of surface waves launched inside a band gap. \nLet us note that one can repeat the above path to achieve the dispersion equation associated with time-varying inductive boundaries. The difference is that the wave has a transverse-magnetic polarization (i.e., magnetic field is perpendicular to the propagation plane). \n\n\n\\section{Acknowledgment}\nThe authors thank Dr. V.~Asadchy for useful discussions and valuable comments.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Highlights}\n\\begin{itemize}\n\\item Privacy-preserving multi-agent reinforcement learning is used to coordinate residential energy \n\\item Learning from optimisations improves coordination scalability in stochastic environments\n\\item Marginal reward signals further enhance cooperation relative to previous approaches\n\\item The curse of dimensionality is mitigated by the use of fixed-size Q-tables\n\\item Case studies with large real-life datasets yield 33.7\\% local and global cost reductions \n\\end{itemize}\n\n\\begin{multicols}{2}\n\n\\section{Introduction}\nThis paper addresses the scalability issue of distributed domestic energy flexibility coordination in a cost-efficient and privacy-preserving manner. A novel class of coordination strategies using optimisation-based multi-agent reinforcement learning (MARL\\footnote{A full nomenclature is available in \\Cref{app:nomenclature}}) with fixed Q-table size is proposed for household-level decision-making, tackling the challenge of scalability for simultaneously learning independent agents under partial observability in a stochastic environment \\citep{Matignon2012}. Multiple versions of the novel strategy are assessed to maximise the statistical expectation of system-wide benefits, including local battery costs, grid costs and greenhouse gas emissions. \n\nWidespread electrification of primary energy provision and decarbonisation of the power sector are two vital prerequisites for limiting anthropogenic global warming to 1.5$^o$C above pre-industrial levels. To reduce risks of climate-related impacts on health, livelihood, security and economic growth, intermittent renewable power supplies could be required to supply 70\\% to 85\\% of electricity by 2050 \\citep{IPCC2015}. However, this poses the challenges of the intermittency and limited controllability of resources \\citep{Bose2019}. 
Therefore, a robust, decarbonised power system will rely on two structural features: decentralisation and demand response (DR) \\citep{Leautier2019}. The coordination of distributed flexible energy resources can help reduce costs for transmission, storage, peaking plants and capacity reserves, improve grid stability, align demand with decarbonised energy provision, promote energy independence and security, and lower household energy bills \\citep{Vazquez-Canteli2019, Pumphrey2020}.\n\nResidential sites constitute a significant share of potential DR, representing for example 38.5\\% of the 2019 UK electricity demand, and 56.4\\% of energy consumption if including transport and heat, which are both undergoing electrification \\citep{BEIS2021}. Increasing ownership of EVs and PV panels has been facilitated by regulatory changes, with many countries committing to internal combustion car phase-outs in the near future, and by plummeting costs, with an 82\\% and 87\\% levelised cost drop between 2010 and 2019 for EVs and PV panels \\citep{Agency2018,BloomberNEF2019}. This potential is so far underexploited, as DR primarily focuses on larger well-known industrial and commercial actors that require less coordination and data management \\citep{CharlesRiverAssociates2017}, with most customers still limited to trade with utility companies \\citep{Chen2019}. The primary hurdles to unlocking residential flexibility are the high capital cost of communication and control infrastructure as the domestic potential is highly fragmented \\citep{Leautier2019}, concerns about privacy and hindrance of activities \\citep{Bugden2019,Pumphrey2020}, and computational challenges for real-time control at scale \\citep{Moret2019}. \n\nTraditionally, convex optimisation would be used to maximise global coordination objectives in convex problems with variables known ahead of time. Techniques such as least-squares and linear programming have been well-studied for over a century \\citep{Boyd2009}. However, residential energy coordination presents challenges to its application. Firstly, optimisations that are centralised are hindered by privacy, acceptance, and communication constraints, and present exponential time complexity at the scale of millions of homes \\citep{Dasgupta2016}. Secondly, standard optimisation methods cannot be used without full knowledge of the system's inputs and dynamics \\citep{Recht2018}. In residential energy, agents only have partial observability of the system due to both the stochasticity and uncertainty of environment variables such as individual residential consumption and generation profiles, and to the privacy and infrastructure cost constraints that hinder communication between agents during implementation \\citep{FrancoisLavet2017}. Not relying on shared information may also improve the robustness of the solutions to failure of other agents, communication delays, and unreliable information, and improve adaptability to changing environments \\citep{Sen1994}. Finally, the real-life complex electricity grid environment may not be amenable to a convex model representation. Due to the heterogeneity of users and behaviours needing different parameters and models, the large-scale use of model-based controllers is cumbersome \\citep{Ruelens2017}. A model-free approach instead avoids modelling non-trivial interactions of parameters, including private information \\citep{Dasgupta2016}. 
\n\nGiven these challenges to residential energy flexibility coordination, and the specific constraints of the problem at play which renders traditional approaches unsuitable, we seek to develop a novel coordination mechanism which satisfies the following criteria, as tested in real-life scenarios:\n\\begin{itemize}\n\\item Computational scalability: minimal and constant computation burden during implementation as the system size increases; \n\\item Performance scalability: no drop in coordination performance as the system size increases, measured in savings obtained per hour and per agent;\n\\item Acceptability: local control of appliances, no communication of personal data, thermal discomfort, or hindrance\/delay of activities.\n\\end{itemize}\n\nThe rest of this paper is organised as follows. In \\Cref{sec:gapanalysis} we motivate the novel MARL approach with a literature review and a gap analysis. In \\Cref{system}, a system model is presented that includes household-level modelling of EVs, space heating, flexible loads and PV generation. \\Cref{RLSection} lays out the MARL methodology, with various methodological options for independent agents to learn to cooperate. In \\Cref{data}, the input data used to populate the model is presented. In \\Cref{results}, the performance of different MARL strategies is compared to lower and upper bounds in case studies. Finally, we conclude in \\Cref{conclusion}.\n\n\n\\section{MARL-based energy coordination: literature review and gap analysis}\\label{sec:gapanalysis}\nReinforcement learning (RL) can overcome the constraints faced by centralised convex optimisation for residential energy coordination, by allowing for decentralised and model-free decision-making based on partial knowledge. RL is an artificial intelligence (AI) framework for goal-oriented agents\\footnote{Here agents are independent computer systems acting on behalf of prosumers \\citep{Wooldridge2002}. Prosumers are proactive consumers with distributed energy resources actively managing their consumption, production and storage of energy \\citep{Morstyn2018_Federated}. } to learn sequential decision-making by interacting with an uncertain environment \\citep{Sutton1998}. As an increasing wealth of data is collected in local electricity systems, RL is of growing interest for the real-time coordination of distributed energy resources (DERs) \\citep{Antonopoulos2020,Vazquez-Canteli2019}. Instead of optimising based on inherently uncertain data, RL more realistically searches for statistically optimal sequential decisions given partial observation and uncertainty, with no \\emph{a priori} knowledge \\citep{Recht2018}. Approximate learning methods may be more computationally scalable, more efficient in exploring high-dimensional state spaces and therefore more scalable than exact global optimisation with exponential time complexity \\citep{Schellenberg2020, Dasgupta2016}. \n\nAs classified in \\citep{CharbonnierReview}, numerous RL-based coordination methods have been proposed in the literature for residential energy coordination, though with remaining limitations in terms of scalability and privacy protection. On the one hand, in RL-based direct control strategies, a central controller directly controls individual units, and households directly forfeit their data and control to a central RL-based scheduler \\citep{ONeill2010}. 
While most existing AI-based DR research thus assumes fully observable tasks \\citep{Antonopoulos2020}, direct controllability of resources from different owners with different objectives and resources and subject to privacy, comfort and security concerns is challenging \\citep{Darby2020}. Moreover, centralised policies do not scale due to the curse of dimensionality as the state and action spaces grow exponentially with the system size \\citep{Powell2011}. On the other hand, RL-based indirect control strategies consider decision-making at the prosumer level, entering the realm of MARL. This can be achieved using different communication structures, with either centralised, bilateral, or no sharing of personal information, as presented below.\n\nFirstly, agents may share information with a central entity, which in turn broadcasts signals based on a complete picture of the coordination problem. For example, the central entity may send unidirectional price signals to customers based on information such as prosumers' costs, constraints and day-ahead forecasts. RL can inform both the dynamic price signal \\citep{Lu2019, Kim2016}, and the prosumer response to price signals \\citep{Kim2016,Babar2018}. The central entity may also collect competitive bids and set trades and match prosumers centrally, where RL algorithms are used to refine individual bidding strategies \\citep{Vaya2014, Ye2020, Dauer2013,Sun2015,Kim2020} or to dictate the auction market clearing \\citep{Chen2019,Claessens2013}. Units may also use RL to cooperate towards common objectives with the mediation of a central entity that redistributes centralised personal information \\citep{Zhang2017,Dusparic2015,Dusparic2013,Hurtado2018}. However, information centralisation also raises costs, security, privacy and scalability of computation issues. Biased information may lead to inefficient or even infeasible decisions \\citep{Morstyn2020_P2P}. \n\nSecondly, RL-based coordination has been proposed where prosumers only communicate information bilaterally without a central authority. For example, in \\citep{Taylor2014} agents use transfer learning with distributed W-learning to achieve local and system objectives. Bilateral peer-to-peer communication offers autonomy and expression of individual preferences, though with remaining risks around privacy and bounded rationality \\citep{Herbert1982}. There is greater robustness to communication failures compared situations with a single point of failure. However, as the system size increases, the number of communication iterations until algorithmic convergence increases, requiring adequate computational resources and limited communication network latency for feasibility \\citep{Guerrero2020}. The safe way of implementing distributed transactions to ensure data protection is an ongoing subject of research \\citep{CharbonnierReview}. \n\nFinally, in RL-based implicit coordination strategies, prosumers rely solely on local information to make decisions. For example, in \\citep{Cao2019, Yang2019}, competitive agents in isolation maximise their profits in RL-based energy arbitrage, though they do not consider the impacts of individual actions on the rest of the system, with potential negative impacts for the grid. For example, a concern is that all loads receive the same incentive, the natural diversity on which the grid relies may be diminished \\citep{Crozier2018_Mitigating}, and the peak potentially merely displaced, with overloads on upstream transformers. 
Implicit cooperation, which keeps personal information at the local level while encouraging cooperation towards global objectives, has been thus far under-researched beyond frequency control. In \\citep{Rozada2020}, agents learn the optimal way of acting and interacting with the environment to restore frequency using local information only. This is a promising approach for decentralised control. However, the applicability in more complex scenarios with residential electric vehicles and smart heating load scheduling problems has not been considered. Moreover, the convergence slows down for an increasing number of agents, and scalability beyond 8 agents has not been investigated. Indeed, fundamental challenges to the coordination of simultaneously learning independent agents at scale under partial observability in a stochastic environment have been identified when using traditional RL algorithms [1]: independent learners may reach individual policy equilibria that are incompatible with a global Pareto optimum, the non-stationarity of the environment due to other concurrently learning agents affects convergence, and the stochasticity of the environment prevents agents from discriminating between their own contribution to global rewards and noise from other agents or the environment. Novel methods are therefore needed to develop this approach.\n\nWe seek to bridge this gap, using implicit coordination to unlock the so-far largely untapped value from residential energy flexibility to provide both individual and system benefits. We propose a new class of MARL-based implicit cooperation strategies for residential DR, to make the best use of the flexibility offered by increasingly accessible assets such as photovoltaic (PV) panels, electric vehicle (EV) batteries, smart heating and flexible loads. Agents learn RL policies using a data-based, model-free statistical approach by exploring a shared environment and interacting with decentralised partially observable Markov decision processes (Dec-POMDPs), either through random exploration or learning from convex optimisation results. In the first rehearsal phase \\citep{Kraemer2016} with full understanding of the system, they learn to cooperate to reach system-wide benefits by assessing the global impact of their individual actions, searching for trade-offs between local, grid and social objectives. The pre-learned policies are then used to make decisions under uncertainty given limited local information only.\n\nThis approach satisfies the computational scalability, performance scalability and acceptability criteria set out in this paper.\n\nFirstly, the real-time control method is computationally scalable thanks to fixed-size Q-tables which avoid the curse of dimensionality, and only minimal, constant local computation is required to apply the pre-learned policies during implementation. No further communication is required at this stage. This increases robustness to communication issues and data inaccuracy relative to approaches relying on centralised or bilateral communication, and cuts the costs of household computation and two-way communication infrastructure. \n\nSecondly, we address the outstanding MARL coordination performance scalability issue for agents with partial observability in a stochastic environment seeking to maximise rewards which also depend on other concurrently learning agents \\citep{Busoniu2008,Matignon2012}. 
The case studies in this paper show that allowing agents to learn from omniscient, stable, and consistent optimisation solutions can successfully act as an equilibrium-selection mechanism, while the use of marginal rewards improves learnability\\footnote{\\say{the sensitivity of an agent's utility to its own actions as opposed to actions of others, which is often low in fully cooperative Markov games} \\citep{Matignon2012}} by isolating individual contributions to global rewards. This novel methodological combination offers significant improvements on MARL scalability and convergence issues, with high coordination performance maintained as the number of agents increases, where that of standard MARL drops at scale. \n\nFinally, this method tackles acceptability issues, with no interference in personal comfort nor communication of personal data. \n\nThe specific novel contributions of this paper are (a) a novel class of decentralised flexibility coordination strategies, MARL-based implicit cooperation, with no communication and fixed-size Q-tables to mitigate the curse of dimensionality; (b) a novel MARL exploration strategy for agents under partial observability to learn from omniscient, convex optimisations prior to implementation for convergence to robust cooperation at scale; and (c) the design and testing with large banks of real-world data of combinations of reward definitions, exploration strategies and multi-agent learning frameworks for assessing individual impacts on global energy, grid and storage costs. Methodologies are identified which outperform a baseline with increasing numbers of agents despite uncertainty.\n\n\\section{Local system description}\\label{system}\n\\begin{figure*}[!t]\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{energybalance.pdf}\n\n\\end{center}\n\\caption{Local system model. Red dotted lines denote energy balances.}\n\\label{fig:EnergyBalance}\n\\end{figure*}\n\nIn this section, the variables, objective function and constraints of the problem are described. This sets the frame for the application of the RL algorithms presented in \\Cref{RLSection}.\n\n\\subsection{Variables}\\label{variables}\nWe consider a set of time steps $t \\in \\mathcal{T} = \\{t_0,...,t_\\textrm{end}\\}$ and a set of prosumers $i \\in \\mathcal{P} = \\{1,...,n\\}$. Decision variables are \\emph{italicised} and input data are written in roman. Energy units are used unless specified otherwise. Participants have an EV, a PV panel, electric space heating and generic flexible loads.\n\nThe EV at-home availability $\\upmu_i^t$ (1 if available, 0 otherwise), EV demand for required trips $\\textrm{d}_{\\textrm{EV},i}^t$, household electric demand $\\textrm{d}_i^{t}$, PV production $\\textrm{p}_{\\textrm{PV},i}^t$, external temperature $\\textrm{T}_{\\textrm{e}}^t$ and solar heat flow rate $\\upphi^t$ are specified as inputs for $t \\in \\mathcal{T}$ and $i \\in \\mathcal{P}$.\n\nThe local decisions by prosumers are the energy flows in and out of the battery $b_{\\textrm{in},i}^t$ and $b_{\\textrm{out},i}^t$, the electric heating consumption $h_i^t$ and the prosumer consumption $c_i^t$. These have both local and system impacts (\\Cref{fig:EnergyBalance}). Local impacts include battery energy levels $E_i^t$, losses $\\epsilon_{\\textrm{ch},i}^t$ and $\\epsilon_{\\textrm{dis},i}^t$, prosumer import $p_i^t$, building mass temperature $T_{\\textrm{m},i}^t$ and indoor air temperature $T_{\\textrm{air},i}^t$. 
System impacts arise through the costs of total grid import $g^t$ and distribution network trading. Distribution network losses and reactive power flows are not included. \n\n\\subsection{Objective function}\\label{objfunc}\n\nProsumers cooperate to minimise system costs consisting of grid ($c_\\textrm{g}^t$), distribution ($c_\\textrm{d}^t$) and storage ($c_\\textrm{s}^t$) costs. This objective function will be maximised both in convex optimisations off-line -- to provide an upper bound for the achievable objective function, and in some cases to provide information to the learners during the simulated learning phase -- and in the learning of MARL policies for decentralised online implementation. \n\n\\begin{equation}\n\\max F = \\sum_{\\forall t \\in \\mathcal{T}}{\\hat{F}_t} = \\sum_{\\forall t \\in \\mathcal{T}}{- (c_\\textrm{g}^t + c_\\textrm{d}^t + c_\\textrm{s}^t )}\n\\end{equation}\n\n\\begin{equation}\nc_\\textrm{g}^t = \\textrm{C}_\\textrm{g}^t \\left( g^t + \\epsilon_g \\right)\n\\end{equation}\nWhere losses incurred by imports and exports from and to the main grid are approximated as\n\\begin{equation}\n\\epsilon_g = \\frac{\\textrm{R}}{\\textrm{V}^2}\\left(g^t\\right)^2\n\\end{equation}\n\nThe grid cost coefficient $\\textrm{C}_\\textrm{g}^t$ is the sum of the grid electricity price and the product of the carbon intensity of the generation mix at time $t$ and the Social Cost of Carbon which reflects the long-term societal cost of emitting greenhouse gases \\citep{ParryM}. The impacts of local decisions on upstream energy prices are neglected. Grid losses are approximated using the nominal root mean square grid voltage $\\textrm{V}$ and the average resistance between the main grid and the distribution network $\\textrm{R}$ \\citep{Multiclass}, based on the assumption of small network voltage drops and relatively low reactive power flows \\citep{Coffrin2012}. The second-order dependency disincentivises large power imports and exports, which helps ensure interactions of transmission and distribution networks do not reduce system stability.\n\n\\begin{equation}\nc_\\textrm{d}^t = \\textrm{C}_\\textrm{d}\\sum_{i \\in \\mathcal{P}}{\\max\\left(- p_i^t,0\\right)}\n\\end{equation}\n\\noindent Distribution costs $c_\\textrm{d}^t$ are proportional to the distribution charge $\\textrm{C}_\\textrm{d}$ on exports. The resulting price spread between individual imports and exports decreases risks of network constraints violation by incentivising the use of local flexibility first \\citep{Morstyn2020_IntegratingP2P}. Distribution network losses due to power flows between prosumers are neglected so there is no second-order dependency.\n\\begin{equation}\nc_\\textrm{s}^t = \\textrm{C}_\\textrm{s}\\sum_{i \\in \\mathcal{P}}{\\left(b_{\\textrm{in},i}^t + b_{\\textrm{out},i}^t\\right)}\n\\end{equation}\n\n\\noindent Storage battery depreciation costs $c_\\textrm{s}^t$ are assumed to be proportional to throughput using the depreciation coefficient $\\textrm{C}_\\textrm{s}$, assuming a uniform energy throughput degradation rate \\citep{DufoLopez2014}.\n\n\\subsection{Constraints}\n\n\nLet $\\textrm{E}_0$, $\\underline{\\textrm{E}}$ and $\\overline{\\textrm{E}}$ be the initial, minimum and maximum battery energy levels, $\\upeta_\\textrm{ch}$ and $\\upeta_\\textrm{dis}$ the charge and discharge efficiencies, and $\\overline{\\textrm{b}_\\textrm{in}}$ the maximum charge per time step. 
Demand $\\textrm{d}_{i,k}^{t_\\textrm{D}}$ is met by the sum of loads consumed $\\hat{c}_{i,k,t_\\textrm{C},t_\\textrm{D}}$ at time $t_\\textrm{C}$ by prosumer $i$ for load of type $k$ (fixed or flexible) demanded at $t_\\textrm{D}$. The flexibility boolean $\\textrm{f}_{i,k,t_\\textrm{C},t_\\textrm{D}}$ indicates if time $t_\\textrm{C}$ lies within the acceptable range to meet $\\textrm{d}_{i,k}^{t_\\textrm{D}}$. A Crank-Nicholson scheme \\citep{ISO2007} is employed to model heating, with $\\upkappa$ a 2x5 matrix of temperature coefficients, and $\\underline{\\textrm{T}}_i^t$ and $\\overline{\\textrm{T}}_i^t$ lower and upper temperature bounds. System constraints for steps $\\forall \\ t \\in \\mathcal{T}$ and prosumers $\\forall \\ i \\in \\mathcal{P}$ are:\n\n\\begin{itemize}\n\\item Prosumer and substation energy balance (see \\Cref{fig:EnergyBalance})\n\\begin{equation}\n\tp_i^t = c_i^t + h_i^t + \\frac{b_{\\textrm{in},i}^t}{\\upeta_\\textrm{ch}} - {\\upeta_\\textrm{dis}} b_{\\textrm{out},i}^t - \\textrm{p}_{\\textrm{PV},i}^t \n\\end{equation}\n\\begin{equation}\n\\sum_{i \\in \\mathcal{P}}{p_i^t} = g^t\n\\end{equation}\n\\item Battery energy balance \n\\begin{equation}\n\tE_i^{t+1} = E_i^t + b_{\\textrm{in},i}^t - b_{\\textrm{out},i}^t - \\textrm{d}_{\\textrm{EV},i}^t \n\\end{equation}\n\\item Battery charge and discharge constraints\n\\begin{equation}\n\t \\textrm{E}_0 = E_i^{t_0} = E_i^{t_\\textrm{end}} + b_{\\textrm{in},i}^{t_{\\textrm{end}}} - b_{\\textrm{out},i}^{t_{\\textrm{end}}} - \\textrm{d}_{\\textrm{EV},i}^{t_{\\textrm{end}}} \n\\end{equation}\n\\begin{equation}\n\t\\upmu_i^t\\underline{\\textrm{E}}_i \\leq E_i^t \\leq \\overline{\\textrm{E}}_i\n\\end{equation}\n\\begin{equation}\n\tb_{\\textrm{in},i}^t \\leq \\upmu_i^t \\overline{\\textrm{b}_\\textrm{in}}\n\\end{equation}\n\\begin{equation}\n\tb_{\\textrm{out},i}^t \\leq \\upmu_i^t \\overline{\\textrm{E}}_i\n\\end{equation}\n\n\\item Consumption flexibility --- the demand of type $k$ at time $t_\\textrm{D}$ by prosumer $i$ must be met by the sum of partial consumptions $\\hat{c}_{i,k,t_\\textrm{C},t_\\textrm{D}}$ at times $t_\\textrm{C}...t_\\textrm{C}+\\textrm{n}_\\textrm{flex}$ within the time frame $\\textrm{n}_\\textrm{flex}$ specified by the flexibility of each type of demand in matrix $\\textrm{f}_{i,k,t_\\textrm{C},t_\\textrm{D}}$\n\\begin{equation}\\label{eq:demandmet}\n\t\\sum_{t_\\textrm{C}\\in\\mathcal{T}}{\\hat{c}_{i,k,t_\\textrm{C},t_\\textrm{D}} \\textrm{f}_{i,k,t_\\textrm{C},t_\\textrm{D}}} = \\textrm{d}_{i,k}^{t_\\textrm{D}} \n\\end{equation}\n\\item Consumption --- the total consumption at time $t_\\textrm{C}$ is the sum of all partial consumptions $\\hat{c}_{i,k,t_\\textrm{C},t_\\textrm{D}}$ meeting parts of demands from current and previous time steps $t_\\textrm{D}$:\n\\begin{equation}\\label{eq:totalcons}\n\t\\sum_{t_\\textrm{D}\\in\\mathcal{N}}{\\hat{c}_{i,k,t_\\textrm{C},t_\\textrm{D}}}= c_{i,k}^{t_\\textrm{C}} \n\\end{equation}\n\n\\item Heating --- the workings to obtain this equation are included in \\Cref{app:heating}:\n\\begin{equation}\\label{eq:main_heating}\n\\begin{bmatrix}\nT_{\\textrm{m},i}^{t+1}\\\\\nT_{\\textrm{air},i}^{t+1} \n\\end{bmatrix}\n = \\upkappa \n \\begin{bmatrix}\n1,\nT_{\\textrm{m},i}^{t},\n\\textrm{T}_{\\textrm{e}}^t,\n\\upphi^t,\nh_i^t\n\\end{bmatrix}^\\intercal\n\\end{equation}\n\\begin{equation}\n\\underline{\\textrm{T}}_i^t \\leq T_{\\textrm{air},i}^t \\leq \\overline{\\textrm{T}}_i^t\n\\end{equation}\n\\item Non-negativity constraints \n\\begin{equation}\n\tc_i^t, 
h_i^t,E_i^t, b_{\\textrm{in},i}^t, b_{\\textrm{out},i}^t, \\hat{c}_{i,l,t_\\textrm{C},t_\\textrm{D}} \\geq 0\n\\end{equation}\n\\end{itemize}\n\nWhile the proposed framework could accommodate the use of idiosyncratic satisfaction functions to perform trade-offs between flexibility use and users' comfort, no such trade-offs are considered in this paper, with comfort requirements for temperature and EV usage always being met. Field evaluations have shown that programmes that do not maintain thermal comfort are consistently overridden, increasing overall energy use and costs \\citep{Sachs2012}, while interference in consumption patterns and temperature set-points cause dissatisfaction \\citep{Vazquez-Canteli2019}. Meeting fixed domestic loads, ensuring sufficient charge for EV trips, and maintaining comfortable temperatures are therefore set constraints. \n\n\\section{Reinforcement learning methodology}\\label{RLSection}\nThe MARL approach is now presented in which independent prosumers learn to make individual decisions which together maximise the statistical expectation of the objective function in \\Cref{system}. \n\nAt time step $t \\in \\mathcal{T}$, each agent is in a state $s_i^t \\in \\mathcal{S}$ corresponding to accessible observations (here the time-varying grid cost), and selects an action $a_i^t \\in \\mathcal{A}$ as defined in \\Cref{sec:agentdecision}. This action dictates the decision variables in \\Cref{variables} $b_{\\textrm{in},i}^t$, $b_{\\textrm{out},i}^t$, $h_i^t$ and $c_i^t$. The environment then produces a reward $r^t \\in \\mathcal{R}$ which corresponds to the share $\\hat{F}_t$ of the system objective function presented in \\Cref{objfunc} and agents transition to a state $s_i^{t+1}$. Agents learn individual policies $\\pi_i$ by interacting with the environment using individual, decentralised fixed-size Q-tables.\n\nWe first introduce the Q-learning methodology. Then, the mapping between the RL agent action and the decision variables in \\Cref{variables} is presented. Finally, we propose variations on the learning method, with different experience sources, multi-agent structures and reward definitions.\n\n\\subsection{Q-Learning}\nWhile any reinforcement learning methodology could be used with the framework proposed in this paper, here we focus on Q-learning, a model-free, off-policy RL methodology. Its simplicity and proof of convergence make it suited to developing novel learning methodologies in newly defined environments \\citep{Vazquez-Canteli2019}. State-actions values $Q(s,a)$ represent the expected value of all future rewards $r_t$ $\\forall \\ t \\in \\mathcal{T}$ when taking action $a$ in state $s$ according to policy $\\pi$:\n\\begin{equation}\nQ(s,a) \\triangleq E^{\\pi}{[r_{t} + \\gamma r_{t+1} + \\gamma^2r_{t+2}...|s_t = s, a_t = a ]}\n\\end{equation}\n\n\\noindent where $\\gamma$ is the discount factor setting the relative importance of future rewards. Estimates are refined incrementally as\n\\begin{equation}\n\\hat{Q}(s,a)\\leftarrow \\hat{Q}(s,a) + \\alpha\\delta\n\\end{equation}\nwhere $\\delta$ is the temporal-difference error,\n\\begin{equation}\n\\delta = \\left(r_t +\\gamma \\hat{V}(s^\\textrm{next})-\\hat{Q}(s,a)\\right)\n\\end{equation}\n$\\hat{V}$ is the state-value function estimate,\n \\begin{equation}\n\\hat{V}(s) = \\max_{a^* \\in \\mathcal{A}(s)}{\\hat{Q}(s,a^*)}\n\\end{equation}\nand $\\alpha$ is the learning rate. In this work we use hysteretic learners, i.e. 
chiefly optimistic learners that use an increase rate superior to the decrease rate in order to reduce oscillations in the learned policy due to actions chosen by other agents \\citep{Matignon2012, Matignon2007}. For $\\beta < 1$:\n\\begin{equation}\n \\alpha =\n \\begin{cases}\n \\alpha_0 & \\text{if $\\delta > 0$}\\\\\n \\alpha_0\\beta & \\text{otherwise}\\\\\n \\end{cases} \n\\end{equation}\n\nAgents follow an $\\epsilon$-greedy policy to balance exploration of different state-action pairs and knowledge exploitation. The greedy action with highest estimated rewards is selected with probability $1-\\epsilon$ and random actions otherwise. \n\\begin{equation}\n\\label{eqn:greedy}\n a^* =\n \\begin{cases}\n \\argmax_{\\ a^* \\in \\mathcal{A}}\\hat{Q}(s,a^*) & \\text{if $x \\sim U(0,1) > \\epsilon$}\\\\\n a \\sim p(a) = \\frac{1}{|\\mathcal{A}|} \\ \\forall \\ a \\ \\in \\mathcal{A}& \\text{otherwise}\\\\\n \\end{cases} \n\\end{equation}\n\nHenceforth, we refer to the estimates $\\hat{Q}$ and $\\hat{V}$ as $Q$ and $V$ to reduce the amount of notation.\n\n\\subsection{Agent state}\nThe agent state is defined by the time-dependent grid cost coefficient $\\textrm{C}_\\textrm{g}^t$, i.e. the sum of the grid electricity price and the product of the carbon intensity of the generation mix at time $t$ and the social cost of carbon.\n\nTo convert the RL policy action into local decisions, the agent also requires information on their current PV generation, battery level, flexible loads and indoor air temperature, as described below in \\Cref{sec:agentdecision}.\n\n\\subsection{Agent action}\\label{sec:agentdecision}\n\\begin{figure*}[!t]\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{inkscape_mu_4.pdf}\n\n\\end{center}\n\\caption{Decision variable $\\psi$. Sections 1-5 denote the trade-off regimes described in \\Cref{sec:agentdecision}. At each step, the fixed requirements for loads, heat and upcoming EV trips are first met. The $\\psi$ decision then applies to the remaining flexibility, from maximal energy exports (full use of flexibility) at $\\psi = 0$, to maximal energy imports (no use of flexibility) at $\\psi = 1$. $\\textrm{d}_\\textrm{tot}$ and $\\textrm{d}_\\textrm{fixed}$ are the sum of household and heating loads with and without their flexible component. If fixed loads cannot be fully met by PV energy, the residual is met by storage and imports (2). If there is additional PV energy after meeting all loads, it can be stored or exported (4).}\n\\label{fig:mu}\n\\end{figure*}\n\nLarge action spaces compound the curse of dimensionality in Q-learning and waste exploration resources \\citep{Powell2011}. At each time step, the decision variables in \\Cref{system} controlling the flows in and out of the battery$b_{\\textrm{in},i}^t$ and $b_{\\textrm{out},i}^t$, the electric heating consumption $h_i^t$ and the prosumer consumption $c_i^t$ for household $i$ are therefore synthesised into a single variable $\\psi \\in [0,1]$ controlling the use of available local flexibility. \\Cref{fig:mu} shows how consumption (for domestic loads and heat), imports and storage change with $\\psi$.\n\nAt each step, the fixed requirements for loads, heat and upcoming EV trips are first met. The $\\psi$ decision then applies to the remaining flexibility. In conditions deemed optimal for energy exports $\\psi = 0$, all initial storage and residual PV generation is exported and flexible loads are delayed. 
On the other end, a \\emph{passive} agent does not utilise its flexibility and uses the \\emph{default} action $\\psi = 1$, maximising imports with EVs charged when plugged in and no flexible loads delayed. Intermediate imports trade-offs are mapped on \\Cref{fig:mu}:\n \n\\begin{enumerate}\n\\item From exporting all to none of the initial storage $E_i^t$\n\\item From meeting fixed loads $\\textrm{d}_{i,\\textrm{fixed}}^t$ with the energy stored to importing the required amount \n\\item From no to maximum flexible consumption $\\textrm{d}_{i,\\textrm{tot}}^t$ \n\\item From exporting to storing PV energy $\\textrm{p}_{\\textrm{PV},i}^t$ remaining after meeting loads\n\\item From importing no additional energy to filling up the battery to capacity $\\overline{\\textrm{E}}_i$\n\\end{enumerate}\n\nCostlier actions incurring battery depreciation, losses and export costs are towards either $\\psi$ extreme, only used in highly beneficial situations (convex local costs function in the lower plot of \\Cref{fig:mu}). Ranking actions consistently ensures agents do not waste resources trialling sub-optimal combinations of decisions. For example, it is more cost-efficient to first absorb energy imports by consuming flexible loads, and only use the battery (incurring costs) if imports are large. \n\nNote that although this action space is continuous, it can be discretised into intervals for implementation in Q-learning. \n\n\\subsection{Variations of the learning method}\\label{methodologies}\nDifferent experience sources, reward definitions and MARL structures are proposed within the MARL approach. The performance of these combinations of algorithmic possibilities will be assessed in \\Cref{results} to inform effective model design.\\\\\n\n\\subsubsection{Experience sources}\nIn data-driven strategies, the learning is determined by the collected experience.\n\\begin{itemize}\n\\item \\textbf{Environment exploration}. Traditionally, agents collect experience by interacting with an environment \\citep{Sutton1998}.\n\n\\item \\textbf{Optimisations}. A novel approach collects experience from optimisations. Learning from entities with more knowledge or using knowledge more effectively than randomly exploring agents has previously been proposed, as with agents \\say{mimicking} humans playing video games \\citep{Grandmaster}. Similarly, agents learn from convex \\say{omniscient} optimisations on historical data with perfect knowledge of current and future variables. This experience is then used under partial observability and control for stable coordination between prosumers at scale. Note in this case that, although the MARL learning and implementation are model-free, a model of the system is used to run the convex optimisation and produce experience to learn from. A standard convex optimiser uses the same data that would be used to populate the environment explorations but solves over the whole day-horizon with perfect knowledge of all variables using the problem description in \\Cref{system}. Then, at each time step, the system variables are translated into equivalent RL $\\{s_t,a_t,r_t, s_{t+1}\\}$ tuples for each agent, which are used to update the policies in the same way as for standard Q-learning as presented below.\\\\\n\\end{itemize}\n \n\\subsubsection{MARL structures} Both the centralised and decentralised structures proposed use fixed-size $|\\mathcal{S}| \\times |\\mathcal{A}|$ Q-tables corresponding to individual state-action pairs. 
The size of a global Q-table referencing all possible combinations of states and actions would grow exponentially with the number of agents. This would limit scalability due to memory limitations and exploration time requirements. Moreover, as strategies proposed in this paper are privacy-preserving, only local state-action pairs are used for individual action selection, wasting the level of detail of a global Q-table.\n\n\\begin{itemize}\n\\item \\textbf{Distributed learning}. Each agent $i$ learns its $Q_i$ table with its own experience. No information is shared between agents. \n\n\\item \\textbf{Centralised learning}. A single table $Q_\\textrm{c}$ uses experience from all agents during pre-learning. All agents use the centrally learned policy for decentralised implementation.\n\\end{itemize}\n\n\\subsubsection{Reward definitions}\nThe reward definition is central to learning as its maximisation forms the basis for incrementally altering the policy \\citep{Sutton1998}. Assessing the impact of individual actions on global rewards accurately is key to the effective coordination of a large number of prosumers. In the following, the Q-tables $Q^0$, $Q^\\textrm{diff}$,$Q^\\textrm{A}$ and $Q^\\textrm{count}$ may be either agent-specific $Q_i$ or centralised $Q_\\textrm{c}$ based on the MARL structure. We proposed four variations of the Q-table update rule for each experience step tuple collected $(s_i^t, a_i^t, r^t,s_i^{t+1})$.\n \\begin{equation}\nQ(s_i^t, a_i^t)\\leftarrow Q(s_i^t,a_i^t) + \\alpha \\delta\n\\end{equation}\n\\begin{itemize}\n\\item \\textbf{Total reward}. The instantaneous total system reward $r^t = \\hat{F}_t$ is used to update the Q-table $Q^0$.\n\\begin{equation}\n\\delta = r^t + \\gamma V^0(s_i^{t+1}) - Q^0(s_i^t, a_i^t)\n\\end{equation}\n\\item \\textbf{Marginal reward}. The difference in total instant rewards $r^t$ between that if agent $i$ selects the greedy action and that if it selects the default action is used to update $Q^\\textrm{diff}$ \\citep{Wolpert2002}. The default action $a_\\textrm{default}$ corresponds to $\\psi = 1$, where no flexibility is used. The default reward $r^t_{a_{i}=a_\\textrm{default}}$, where all agents perform their greedy action apart from agent $i$ which performs the default action, is obtained by an additional simulation.\n\\begin{equation}\n\\delta = \\left(r^t - r^t_{a_{i}=a_\\textrm{default}}\\right) + \\gamma V^\\textrm{diff}(s_i^{t+1}) - Q^\\textrm{diff}(s_i^t, a_i^t)\n\\end{equation}\n\\item \\textbf{Advantage reward}. The post difference between $Q^0$ values when $i$ performs the greedy and the default action is used. This corresponds to the estimated increase in rewards not just instantaneously but over all future states, analogously to in \\citep{Foerster2018}. No additional simulations are required as the Q-table values are refined over the normal course of explorations.\n\\begin{equation}\n\\delta = \\left(Q^0(s_i^t, a_i^t) - Q^0(s_i^t, a_{a_i=a_\\textrm{default}})\\right) - Q^\\textrm{A}(s_i^t, a_i^t)\n\\end{equation}\n\\item \\textbf{Count}. The Q-table stores the number of times each state-action pair is selected by the optimiser. 
\n \\begin{equation}\n\\alpha\\delta = 1\n\\end{equation}\n\\end{itemize}\n\n\\section{Input Data}\\label{data}\n\n\\begin{table*}[!t]\n \\newcommand{\\rule[-10pt]{0pt}{20pt}}{\\rule[-10pt]{0pt}{20pt}}\n \\newcommand{\\rule[-5pt]{0pt}{30pt}}{\\rule[-5pt]{0pt}{30pt}}\n\\begin{tabularx}{\\textwidth}{ c|m{5cm}|m{5cm}}\n\\toprule\n& Normalised profile & Scaling factor \\\\ \n\\hline\n PV \\rule[-10pt]{0pt}{20pt} & Randomly selected from current month bank $b_{t+1}=(m)$ & \n \\multirow{2}{=}{\\setlength\\parskip{\\baselineskip}%\n Computed as $\\lambda_{t+1} = \\lambda_{t} + x$, where $x \\sim \\Gamma\\left(\\alpha(b_{t},b_{t+1}),\\beta(b_{t},b_{t+1})\\right)$} \\rule[-10pt]{0pt}{20pt} \\\\\n \\cline{1-2}\n Load \\rule[-5pt]{0pt}{30pt} &\\multirow{2}{=}{\\setlength\\parskip{\\baselineskip}%\nCluster selected based on transition probability $p(k_{t+1} | k_t, w_t, w_{t+1})$ \\newline Normalised profile randomly selected from bank $b_{t+1} = (k_{t+1}, w_{t+1})$} \\rule[-5pt]{0pt}{30pt} &\n \\\\ \n\\cline{1-1}\n\\cline{3-3} \n EV \\rule[-5pt]{0pt}{30pt} & & Random variable from discrete distribution $p(\\lambda_{t+1}|\\lambda_t, b_t, b_{t+1})$ \\rule[-10pt]{0pt}{20pt} \\\\\n \\bottomrule\n\\end{tabularx}\n \\caption{Markov chain mechanism for selecting behaviour clusters, profiles and scaling factors for input data in subsequent days}\n\\label{tab:loadnextday}\n\\end{table*}\n\nThis section presents the data that is fed into the model presented in \\Cref{system}. Interaction with this data will shape the policies learned through RL \\citep{Sutton1998} and should reflect resource intermittency and uncertainty to maximise the expectation of rewards in a robust way without over-fitting. EV demand $\\textrm{d}_{\\textrm{EV},i}^t$ and availability $\\upmu_i^t$, PV production $\\textrm{p}_{\\textrm{PV},i}^t$ and electricity consumption $\\textrm{d}_i^{t}$ are drawn from large representative datasets.\n\n\\subsection{Data selection and pre-processing}\nLoad and PV generation profiles are obtained from the Customer Led Network Revolution (CLNR), a UK-based smart grid demonstration project \\citep{TC1a,TC5}, and mobility data from the English National Travel Survey (NTS) \\citep{DepartmentforTransport2019}. The NTS does not focus on EVs only and offers a less biased view into the general population's travel pattern than small-scale EV trials data, both due to the smaller volume of data available compared to for generic cars and because the self-selected EV early trial participants may not be representative of patterns once EVs become widely adopted. It is implicitly assumed that electrification will not affect transport patterns \\citep{Crozier2018}.\n\nNTS data from 82,455 households from 2002 to 2017 results in 1,272,834 full days of travel profiles. Load and PV data from 11,907 customers between 2011 and 2014 yields 620,702 and 22,670 full days of data, respectively. Profiles are converted to hourly resolution and single missing points replaced with the figure from the same time the day or week before or after which has the lowest sum of squares of differences between the previous and subsequent point. Tested with available data, this yields absolute errors with mean 0.13 and 0.08 kWh and 99th percentile 1.09 and 0.81 kWh for PV and load data. PV sources have nominal capacities between 1.35 and 2.02 kWp.\n\nThe at home-availability of the vehicles is inferred from the recorded journeys' origin and destination. 
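As an aside, one possible reading of the single-missing-point replacement rule described above is sketched below in Python; the candidate offsets (the same hour one day or one week before or after) and the neighbour-based scoring are our interpretation for illustration, not the exact pre-processing code used for the CLNR data.
\\begin{verbatim}
import numpy as np

def fill_single_gaps(x, offsets=(24, -24, 168, -168)):
    """Fill isolated missing points (NaNs) in an hourly profile.

    Candidate values are taken at the same time one day or one week before
    or after the gap; the candidate whose neighbouring points best match
    those around the gap (lowest sum of squared differences) is used.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for t in np.where(np.isnan(x))[0]:
        if t in (0, n - 1) or np.isnan(x[t - 1]) or np.isnan(x[t + 1]):
            continue  # only single, interior gaps are handled here
        best_val, best_score = None, np.inf
        for o in offsets:
            s = t + o
            if 0 < s < n - 1 and not np.isnan(x[s - 1:s + 2]).any():
                score = (x[t - 1] - x[s - 1])**2 + (x[t + 1] - x[s + 1])**2
                if score < best_score:
                    best_val, best_score = x[s], score
        if best_val is not None:
            x[t] = best_val
    return x
\\end{verbatim}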
EV energy consumption profiles are obtained using representative consumption factors from a tank-to-wheel model proposed in \\citep{Crozier2018}, dependent on travel speed and type (rural, urban, motorway). \n\n\\subsection{Markov chain}\n\\begin{figure*}[!b]\n\\begin{center}\n\\includegraphics[width=0.7\\textwidth]{f_load_EV_omni.pdf}\n\\end{center}\n\\caption{Scaling factors for normalised profiles (i.e. total daily loads in kWh) in subsequent days. Linear correlation can be observed for the load profiles, while more complex patterns are exhibited for EV consumption. $\\rho$ is the Pearson correlation coefficient.}\n\\label{fig:Corr}\n\\end{figure*}\n\nDuring learning, agents continuously receive experience to learn from. However, numerous subsequent days of data are not available for single agents. We design a Markov chain mechanism to feed consistent profiles for successive days, using both consistent scaling factors and behaviour clusters.\n\nDaily profiles for load and travel are normalised such that $\\sum_{t=0..24}{x^t}=1$, and clustered using K-means, minimising the within-cluster sum-of-squares \\citep{Lloyd1982} in four clusters for both weekday and weekend data (with one for no travel). The features used for load profiles clustering are normalised peak magnitude and time and normalised values over critical time windows, and those for travel are normalised values between 6 am and 10 pm. PV profiles were grouped per month.\n\nProbabilistic Markov chain transition rules are shown in \\Cref{tab:loadnextday}. Transition probabilities for clusters $k$ and scaling factors $\\lambda$ are obtained from available transitions between subsequent days in the datasets for each week day type $w$ (week day or weekend day). \\Cref{fig:Corr} shows that subsequent PV and load scaling factors follow strong linear correlation, with the residuals of the perfect correlation following gamma distributions with zero mean, whereas EV load scaling factors follow more complex patterns, so transitions probabilities are computed between 50 discrete intervals.\n\n\\section{Case study results and discussion}\\label{results}\nThis section compares the performance of the residential flexibility coordination strategies presented in \\Cref{RLSection} to baseline and upper bound scenarios for increasing numbers of prosumers. The performance of traditionally used MARL strategies drops at scale, while that of the novel optimisation-based methodology using marginal rewards is maintained.\n\n\\subsection{Set-up}\nThe MARL algorithm is trained in off-line simulations using historical data prior to online implementation. This means agents do not trial unsuccessful actions with real-life impacts during learning. Moreover, the computation burden is taken prior to implementation, while prosumers only apply pre-learned policies, avoiding the computational challenges of large-scale real-time control. \n\nThe learning occurs over 50 epochs consisting of an exploration, an update and an evaluation phase. First, the environment is explored over two training episodes of duration $|\\mathcal{T}| = 24$ hours. Learning in batches of multiple episodes helps stabilise learning in the stochastic environment. Then, Q-tables are updated based on the rules presented in \\Cref{methodologies}. Finally, an evaluation is performed using a deterministic greedy policy on new evaluation data. 
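To make this epoch structure explicit, the following Python sketch outlines the explore--update--evaluate loop for the environment-exploration variant with total rewards, hysteretic Q-learning and an $\\epsilon$-greedy policy. The environment class, dimensions and hyper-parameter values are illustrative placeholders rather than the implementation used in the case studies.
\\begin{verbatim}
import numpy as np

# Illustrative dimensions and hyper-parameters (placeholders only)
N_STATES, N_ACTIONS, N_AGENTS, T = 10, 10, 5, 24
GAMMA, ALPHA0, BETA, EPS = 0.99, 0.1, 0.1, 0.2

Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))  # one fixed-size Q-table per agent


class DummyEnv:
    """Stand-in environment with random states and a placeholder shared reward.

    In the paper the state is the time-varying grid cost coefficient and the
    reward is -(grid + distribution + storage costs); both are mocked here so
    that the epoch structure can be run end to end.
    """

    def reset(self, evaluation=False):
        return np.random.randint(N_STATES, size=N_AGENTS)

    def step(self, actions):
        next_states = np.random.randint(N_STATES, size=N_AGENTS)
        reward = -float(sum(actions))  # placeholder system reward
        return next_states, reward


def select_action(q_row, eps):
    """Epsilon-greedy selection on one agent's row of Q-values."""
    if np.random.rand() > eps:
        return int(np.argmax(q_row))
    return np.random.randint(N_ACTIONS)


def hysteretic_update(i, s, a, r, s_next):
    """Hysteretic Q-learning: a smaller learning rate for negative errors."""
    delta = r + GAMMA * Q[i, s_next].max() - Q[i, s, a]
    alpha = ALPHA0 if delta > 0 else ALPHA0 * BETA
    Q[i, s, a] += alpha * delta


def run_epoch(env, n_explore=2):
    # 1) Exploration phase: collect experience with an epsilon-greedy policy.
    for _ in range(n_explore):
        states = env.reset()
        for _ in range(T):
            actions = [select_action(Q[i, states[i]], EPS) for i in range(N_AGENTS)]
            next_states, reward = env.step(actions)
            # 2) Update phase, total-reward variant: every agent sees the same reward.
            for i in range(N_AGENTS):
                hysteretic_update(i, states[i], actions[i], reward, next_states[i])
            states = next_states
    # 3) Evaluation phase: deterministic greedy policy on fresh data.
    states, total = env.reset(evaluation=True), 0.0
    for _ in range(T):
        actions = [int(np.argmax(Q[i, states[i]])) for i in range(N_AGENTS)]
        states, reward = env.step(actions)
        total += reward
    return total


print(run_epoch(DummyEnv()))
\\end{verbatim}
In the marginal-reward variant, each update would additionally simulate the same step with agent $i$ forced to the default action $\\psi=1$ and use the difference between the two system rewards, while the optimisation-based variants would instead build the $(s^t, a^t, r^t, s^{t+1})$ tuples from the day-ahead convex optimisation solutions.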
Ten repetitions are performed such that the learning may be assessed over different trajectories.\n\nThe Social Cost of Carbon is set at 70 \u00a3\/tCO$_2$, consistent with the UK 2030 target \\citep{Hirst2018}. Weather \\citep{WeatherWunderground2020}, electricity time-of-use prices \\citep{OctopusEnergy2019} and grid carbon intensity \\citep{NationalGridESO2020} are from January 2020, where relevant specified for London, UK. The low solar heat gains in January are neglected \\citep{Brown2020}. Other relevant parameters for the case studies are listed in \\Cref{app:inputs}.\n\nAs performed on a Intel(R) Core(TM) i7-9800X CPU @ 3.80GHz, computation time for a learning trajectory is $2^\\prime45^{\\prime\\prime}$ for one agent and $97^\\prime5^{\\prime\\prime}$ for 30 agents, including evaluation points. The policy can then be directly applied at the household level during operation.\n\nCase study results using different experience sources, reward definitions and MARL structures are presented in \\Cref{fig:results}. Acronyms for each strategy are tabulated in the legend. Positive values denote savings relative to a baseline scenario where all agents are passive, i.e. not using their flexibility with EVs charged immediately and no flexible loads delayed. As the Q-learning policies are first initialised with zero values, in the first epoch of learning completely random action values are chosen, which provides rewards far below the baseline. As agents collect experience and update their policies at each epoch, improved policies are learned, some of which are able to outperform the baseline. An upper bound is provided by results from \\say{omniscient} convex optimisations, which are however not achievable in practice for three main reasons. Firstly, they use perfect knowledge of all the environment variables in the present and future, despite uncertainty in renewable generation, mix of the grid, and customer behaviour. Optimisation with inaccurate data would lead to suboptimal results. Secondly, prosumers may not be willing to yield their data and direct control to an external entity. Finally, central optimisations become computationally expensive for real-time control of large numbers of prosumers.\n\n\\subsection{Results}\nResults presented in \\Cref{fig:results} show that only the algorithms learning from optimisations maintained stable coordination performance at scale, while the performance of traditionally used MARL algorithms would drop in this context of stochasticity and partial observation. The optimisation-based algorithm which uses marginal rewards (MO) performed best. We further elaborate on the results in the subsections below.\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{results_vs_nag_20211210.pdf}\n\n\n\\end{center}\n\\caption{The left-hand side plot shows the five-epoch moving average of evaluation rewards relative to baseline rewards for a single prosumer. The right-hand side plot shows the mean of the final 10 evaluations against the number of prosumers. Lines show median values and shaded areas the 25th and 75th percentiles over the 10 repetitions. The best-performing MARL structure is displayed for each exploration source and reward definition pair. 
The performance of the baseline MARL algorithm (TE, orange) drops as the number of concurrently learning agents in the stochastic environment increases; the best-performing alternative algorithm proposed (MO, purple) maintains high performance at scale.}\n\\label{fig:results}\n\\end{figure*}\n\n\\subsubsection{Environment exploration-based learning}\nThe centralised MARL structure is favoured for environment exploration-based learning (continuous lines in \\Cref{fig:results}). A single policy uses experience collected by all agents, rather than each agent learning from their own experience only. \n\n\\Cref{fig:results} shows that environment exploration-based MARL using total rewards (TE, orange), the baseline MARL framework, exhibits a high performance for a single agent. However, savings drop as the number of cooperating agents increases, down to around zero from ten agents. Coordination challenges arise for independent learners to isolate the contribution of their actions to total rewards from the stochasticity of the environment, compounded by other simultaneously learning agents' random explorations, and the non-stationarity of their on-policy behaviour \\citep{Matignon2012}. \n\nUsing advantage rewards (AE, grey), based on estimates of the long-term value of actions relative to that of the baseline action, yields superior results beyond two agents. However, as AE uses the total reward $Q^0$-table as an intermediary step, results similarly drops for increasing numbers of agents.\n\nUsing marginal rewards (ME, dark green), the value of each agent's action relative to the baseline action is singled out immediately by an additional simulation and used as a reward at each time step. This improves the performance relative to TE and AE for five agents and more, though still with declining performance as the number of agents increases.\n\n\\subsubsection{Optimisation-based learning}\nOptimisation-based learning generally favours the distributed MARL structure, with agents able to converge to distinct compatible policies (dashed lines in \\Cref{fig:results}).\n\nComparing trajectories in \\Cref{fig:results}, learning from the total rewards obtained by an optimiser (TO, light blue) yields lower savings than when using environment explorations (TE). The learned policies yield negative savings, i.e. would provide worse outcomes than inflexible agents. The omniscient optimiser takes precise, extreme decisions thanks to its perfect knowledge of all current and future system variables, importing at very high $\\psi$ values when it is optimal to do so. RL algorithms on the other hand are used under partial observability, aiming for actions that statistically perform well under uncertainty. Agents independently picking TO-based decisive actions in a stochastic environment do not yield optimal outcomes. Assessing the long-term advantage of actions from optimisations (AO, dark blue) follows a similar trend, whilst providing marginally superior savings relative to TO.\n\nOptimisation-based learning using marginal rewards (MO, purple) offers the highest savings as the additional baseline simulations are best able to isolate the contribution of individual actions from variations caused by both the environment and other agents. When increasing the number of agents, the strategy is able to learn from optimal, stable, consistently behaving agents. 
Savings of 6.18p per agent per hour, or \u00a345.11 per agent per month are obtained on average for 30 agents, corresponding to a 33.7\\% reduction from baseline costs. 65.9\\% of savings stem from reduced battery depreciation, 20.32\\% from distribution grid congestion, 11.1\\% from grid energy, and 2.7\\% from greenhouse gas emissions.\n\nThe count-based strategy learning from optimisations (CO, light green) seeks to reproduce the state-action patterns of the omniscient optimiser with perfect knowledge of system variables and perfect control of agents for local decision-making under partial observability. It provides results lower than the high performances of MO, though with a stable performance at scale. Savings of \u00a321.09 per agent per month on average for 30 agents are obtained. The battery and distribution grid costs increase by an equivalent of 6.0\\% and 7.7\\% of total savings respectively, while grid energy and greenhouse gas emissions costs reductions represent 59.7\\% and 54.0\\% of total savings.\n\nBoth the MO and CO strategies exhibit stable performance at scale, though converging to different types of policy. The MO policy saves more by smoothing out the charging and distribution grid utilisation profiles despite smaller savings in imports and emissions costs, while CO derives a larger advantage from the grid price differentials in grid imports, though with higher battery and distribution grid costs. The weight applied to each of those competing objectives in the objective function directly impacts the policies that are learned. Examples of how the individual home energy management system decision variables (heating, energy consumption, battery charging) vary based on the controller are illustrated in \\Cref{app:example_case_study}.\n\nOverall, the new class of optimisation-based learning performs significantly better across different numbers of prosumers, with higher savings and lower inter-quartile range than environment-based learning at scale. This superior performance requires computations to run optimisations on historical data, and to perform baseline simulations to compute marginal rewards, though computational time for pre-learning is not strictly a limiting factor as it is performed off-line ahead of implementation. \n\nA fundamental challenge in MARL has been the trade-off between fully centralised value functions, which are impractical for more than a handful of agents, or, in a more straightforward approach, independent learning of individual action-value functions by each agent in independent Q-learning (IQL) \\citep{Tan1993}. However, an ongoing issue with this approach has been that of convergence at scale, as agents do not have explicit representations of interactions between agents, and each agent's learning is confounded by the learning and exploration of others \\citep{Rashid2020}. As shown in \\Cref{fig:results}, the Pareto selection, non-stationarity and stochasticity issues presented in \\Cref{sec:gapanalysis} have prevented environment exploration-based learners from achieving successful MARL cooperation at scale for agents under partial observability in a stochastic environment. 
This case study of coordinated residential energy management shows that the novel combination of marginal rewards, which help agents isolate their marginal contribution to total rewards, and learning from the results of convex optimisations, where agents learn successful policy equilibria from omniscient, stable, and consistent solutions, offers significant improvements on these scalability and convergence issues. \n\n\\section{Conclusion}\\label{conclusion}\nIn this paper, a novel class of strategies has addressed the scalability issue of residential energy flexibility coordination in a cost-efficient and privacy-preserving manner. The combination of off-line optimisations with multi-agent reinforcement learning provides high, stable coordination performance at scale.\n\nWe identified in the literature that the concept of RL-based implicit energy coordination, where energy prosumers cooperate towards global objectives based on local information only, had been under-researched beyond frequency droop control with a limited number of agents. The scalability of such methods was identified as a key gap that we have sought to bridge. The novel coordination mechanism proposed in this paper thus satisfies the criteria for successful residential energy coordination set out in the introduction, as tested with large banks of real data in the case studies:\n\n\\begin{itemize}\n\n\\item Computational scalability: The scalability of traditional learning algorithms is significantly improved thanks to fixed-size Q-tables that avoid the curse of dimensionality, so that policies can be learned for larger numbers of agents. The proposed method does not require expensive communication and control appliances at the prosumer level, as pre-learned policies are directly applied with no further communication and no exponential-time real-time optimisations needed. This is a crucial benefit for applications with physical limitations in hardware availability and processing time. \n\n\\item Performance scalability: The coordination performance remains high for increasing numbers of prosumers despite the challenges of partial observability, environment stochasticity and concurrent learning of agents, thanks to learning from the results of global omniscient optimisations on historical data, and to reward signals that isolate individual contributions to global rewards. A significant value of \u00a345.11 per agent per month was obtained in the presented case study for 30 agents, thanks to savings in energy, prosumer storage and societal greenhouse gas emissions-related costs. Those savings do not drop with an increasing number of agents, in contrast to standard MARL approaches.\n\n\\item Acceptability: The approach does not rely on the sharing of personal data and does not cause thermal discomfort or hindrance\/delay of activities, and the appliances are controlled locally. This cost-efficient and privacy-preserving implicit coordination approach could help integrate distributed energy resources such as residential energy, otherwise excluded from energy systems' flexibility management. \n\n\\end{itemize}\n\nImportant future work is a more detailed assessment of the impacts of the coordination strategies on power flows, as well as an evaluation of the generalisation and adaptability potential of policies when used by other households or if household characteristics change over time. 
Moreover, while all agents readily reduce individual costs through participation in the framework, further game-theoretic tools could be used to design a post-operation reward scheme.\n\n\\section*{Acknowledgement}\nThis work was supported by the Saven European Scholarship and by the UK Research and Innovation and the Engineering and Physical Sciences Research Council (award references EP\/S000887\/1, EP\/S031901\/1, and EP\/T028564\/1).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOver the last decade topological phases of matter have attracted many attentions for providing tremendous insights both in fundamental and experimental aspects of condensed matter physics \\cite{Chiu2016}. In addition to the gapped phases which initially ignited the field, recently discovery of gapless topological phases stimulated many works towards the understanding of nontrivial topology of gapless systems \\cite{review1}. Among those, Weyl and Dirac semimetals are of particular interest due to their experimental discovery and many unique physical properties \\cite{review1}. The Dirac\/Weyl smimetals (DSMs\/WSMs) are characterized by isolated point touchings of two degenerate\/nondegenerate bands in momentum space. Weyl nodes can be generated by splitting of degenerate Dirac nodes usually via breaking of either time-reversal ($\\mathcal{T}$) or inversion ($\\mathcal{I}$) symmetries or both. In the standard form, the low energy excitations near the Weyl nodes, disperse linearly along all three momentum directions with each node carries monopole charge of $\\pm 1$. As a result, on the surface, there exists a Fermi arc that connects a pair of Weyl nodes with opposite chiralities. Recently, generalization of WSMs to multi-Weyl nodes have been proposed where each nodes have higher-order dispersion in one or more directions and consequently caries monopole charge of larger than one \\cite{MultiWeyl1,MultiWeyl2}. \\\\\n\\indent On the other hand, DSMs\/WSMs can also be classified into type-I and type-II, based on the tilting of their nodes. In the standard type-I WSMs, Fermi surfaces are point-like while in the type-II WSMs, Weyl nodes are tilted resulting in formation of electron and hole pockets producing finite density of states at the Fermi level \\cite{typeII1,typeII2}. 
In conventional type-I and type-II WSMs, the two Weyl nodes with opposite chiralities are of the same type; recently, however, a theoretical proposal \\cite{hybridWeyloriginal} introduced a new WSM where a pair of Weyl nodes with different chiralities can have different types, forming the so-called \\emph{hybrid Weyl semimetals}.\\\\\n\\indent Besides the DSMs\/WSMs, another class of three-dimensional nodal semimetals are Luttinger semimetals (LSMs), which possess a quadratic band touching (QBT) point between doubly degenerate valence and conduction bands of $J=3\/2$ (effective) fermions at an isolated point in the Brillouin zone.\nThe LSM provides the low-energy description for a plethora of both strongly and weakly correlated compounds, such as the 227 pyrochlore iridates (Ln2Ir2O7, with Ln being a lanthanide element) \\cite{LSMapp1,LSMapp2, LSMapp3,LSMapp4}, half-Heusler compounds (ternary alloys such as LnPtBi, LnPdBi) \\cite{LSMapp5,LSMapp6}, HgTe \\cite{semiconductor1,semiconductor2,QBToriginal,Ruan2016,mottQBT}, and gray-tin \\cite{LSMapp7,LSMapp8}.\nMoreover, LSMs have proved to show many interesting behaviors \\cite{LSMprop1,LSMprop2,LSMprop3,LSMprop4,LSMprop5}, especially in the presence of interactions; for example, magnetic and superconducting orders have been actively explored \\cite{LSMapp9, Roy-Ghorashi2019, GhorashiPRB2017, GhorashiPRL2018, Ghorashi2019,SatoSC32,LSMBoettcher2,LSMSCLiu1,LSMBoettcher1,Szabo-Bitan32-2018}.\\\\\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{FlashpickickedLSMv4.png}\n \\caption{Schematic summary of the results obtained in this work. Starting from a Luttinger semimetal, via two different nonuniform ($k$-dependent) periodic kickings which break inversion ($\\mathcal{I}$) and time-reversal ($\\mathcal{T}$) symmetries while preserving their combination ($\\mathcal{IT}$), we obtain a \\emph{hybrid dispersion Dirac semimetal} (e.g., $k_z\\Gamma_5$, where $\\Gamma_i$ are Dirac matrices [see text for details]) and a tilted LSM (e.g., $k_z\\Gamma_0$). Then, by applying an external magnetic field, $J_z$, parallel to the direction of the nodes (or kick direction), \\emph{hybrid Weyl phases} can be generated.}\n \\label{fig:adpic}\n\\end{figure*}\n\\indent The coexistence of various Weyl nodes with different charges and\/or types is an interesting phenomenon that could help towards the understanding as well as the manipulation of the properties of various Weyl nodes on an equal footing. Besides a few works reporting the coexistence of type-I and II Dirac\/Weyl nodes \\cite{hybridWSMexp,hybridDSM1,hybridWSM1,WSMexp1&2,weylcoexPRL}, there have also been proposals \\cite{LSMFloq1,Ghorashi2018} for the dynamical generation of various Weyl nodes of different types and\/or charges in one system. Application of light has been shown to be a powerful method for changing material properties \\cite{FloquetRev}. In particular, the conversion of a topologically trivial phase into a nontrivial one using periodic driving has attracted enormous attention in the past decade \\cite{FloquetRev,Floquet1, Floquet2, Floquet3}. Specifically, many proposals for Floquet WSMs in various systems exist, such as Dirac semimetals \\cite{FloqWeyl1,FloqWeyl2}, band insulators \\cite{FloqWeyl3}, stacked graphene \\cite{FloqWeyl4}, line-nodal semimetals \\cite{FloqWeyl2,FloqWeyl5}, and crossing-line semimetals \\cite{FloqWeyl6,FloqWeyl7}. 
Also, proposals have been made to create tunable WSMs in pyrochlore iridates with Zeeman fields \\cite{Weylpyro1,Weylpyro2}. Very recently, using circularly\/elliptically polarized light on the Luttinger Hamiltonian in the high-frequency limit, we have shown a very rich phase diagram of various Weyl semimetals, including the coexistence of type-I and II as well as single and double Weyl nodes \\cite{Ghorashi2018}. \\\\\n \\indent Despite several dynamical proposals for the generation of different Weyl phases, a promising setup for the realization of hybrid Dirac and Weyl semimetals is still lacking. In this work, we tackle this issue via an alternative form of periodic driving, namely periodic kicking. Using periodic $\\delta$-function kicks typically simplifies theoretical studies by allowing calculations to be performed analytically to a large extent (in contrast to sinusoidal driving or elliptical\/circular light) \\cite{Floquet3,kicking2}.\nHowever, for the sake of comparison we also briefly discuss the smooth driving case to show that some of the features of our discussion hold up in a smooth driving setup as well, as long as the perturbation breaks inversion (uniaxially) and time-reversal symmetries but preserves their combination. In this paper, we show two examples of such perturbations which, along with an external magnetic field, induce various hybrid Dirac and Weyl phases, including a new \\emph{hybrid dispersion Dirac semimetal}. Figure.~\\ref{fig:adpic} summarizes the results of this work.\n\n\\section{Model and Formalism}\n\\subsection{Model}\nWe start by reviewing the main ingredients of the Luttinger Hamiltonian in the non-equilibrium limit, which can be represented as,\n\n\\begin{align}\n H_{L}=\\int \\frac{d^3\\vec{k}}{(2\\pi)^3}\\Psi^{\\dagger}_{\\vec{k}} h_L(\\vec{k}) \\Psi_{\\vec{k}},\n\\end{align}\nwhere\n\\begin{align}\n h_{L}(\\vec{k})=&(\\frac{k^2}{2m_0}-\\mu)\\Gamma_0-\\frac{1}{2m_1}\\sum^3_{a=1} d_a(\\vec{k})\\Gamma_a\\cr\n -&\\frac{1}{2m_2}\\sum^5_{a=4} d_a(\\vec{k})\\Gamma_a\n\\end{align}\nwhere $k^2=k^2_x+k^2_y+k^2_z$ and,\n\\begin{align}\n \\Psi^T_{k}= (c_{\\vec{k},3\/2}, c_{\\vec{k},1\/2}, c_{\\vec{k},-1\/2}, c_{\\vec{k},-3\/2}).\n\\end{align}\n$\\mu$ is the chemical potential measured from the band touching point. $\\Gamma_a$ are the well-known gamma matrices, which are given by,\n\\begin{align}\n \\Gamma_1=\\tau_3\\sigma_2,\\,\\, \\Gamma_2=\\tau_3\\sigma_1,\\,\\, \\Gamma_3=\\tau_2,\\,\\,\n \\Gamma_4=\\tau_1,\\,\\, \\Gamma_5=\\tau_3\\sigma_3,\n\\end{align}\n and satisfy $\\{\\Gamma_a,\\Gamma_b\\}=2\\delta_{a,b}\\Gamma_0$, where $\\Gamma_0$ is the four-dimensional identity matrix. $\\tau$ and $\\sigma$ denote the spaces of the sign and magnitude of the spin projection $m_s\\in\\{\\pm 3\/2, \\pm 1\/2\\}$, respectively. 
$d_a(\\vec{k})$ are given as,\n \\begin{align}\n d_1=\\sqrt{3}k_y k_z,\\,\\, d_2=\\sqrt{3}k_x k_z,\\,\\, d_3=\\sqrt{3}k_x k_y,\\,\\,\\cr\n d_4=\\frac{\\sqrt{3}}{2}(k_x^2-k_y^2),\\,\\, d_5=\\frac{1}{2}(2k_z^2-k_x^2-k_y^2).\n \\end{align}\n The five mutually anticommuting $\\Gamma$ matrices can be written\nin terms of spin-$3\/2$ matrices according to,\n\\begin{align}\n \\Gamma_1=&\\frac{1}{\\sqrt{3}}\\{J_y,J_z\\},\\,\\,\\Gamma_2=\\frac{1}{\\sqrt{3}}\\{J_x,J_z\\},\\,\\,\\Gamma_3=\\frac{1}{\\sqrt{3}}\\{J_x,J_y\\},\\cr\n \\Gamma_4=&\\frac{1}{\\sqrt{3}}(J^2_x-J^2_y),\\,\\,\\Gamma_5=J^2_z-\\frac{5}{4},\n\\end{align}\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.3\\textwidth]{LSM_curveup.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_curveopposite.pdf}\n \\caption{The band structure for LSM along the $k_z$ axis with $\\lambda_1=0.6$ and (a) $\\lambda_2=0.1$, (b) $\\lambda_2=0.6$.}\n \\label{fig:LSMbare}\n\\end{figure}\n Here we take the isotropic limit of $m_1=m_2 \\equiv m$. Therefore, the Luttinger Hamiltonian can also be written in an alternative way,\n\\begin{align}\n h_L(\\vec{k})= [(\\lambda_1+5\\lambda_2\/2)k^2-\\mu]\\Gamma_0-2\\lambda_2(\\vec{J}.\\vec{k})^2\n\\end{align}\n with $\\vec{J}=(J_x,J_y,J_z)$ and $\\vec{k}=(k_x,k_y,k_z)$, where we used $\\lambda_1=1\/2m_0$ and $\\lambda_2=1\/4m$. $J_{x,y,z}$ are effective spin-$3\/2$ operators,\n \\begin{eqnarray}\n&\\,J_z=\\begin{bmatrix}\n \\frac{3}{2} & 0 & 0 & 0 \\\\\n 0 & \\frac{1}{2} & 0 & 0\\\\\n 0 & 0 & -\\frac{1}{2} & 0 \\\\\n 0 & 0 & 0 & -\\frac{3}{2} \\\\\n \\end{bmatrix},J_x=\\begin{bmatrix}\n 0 & \\frac{\\sqrt{3}}{2} & 0 & 0 \\\\\n \\frac{\\sqrt{3}}{2} & 0 & 1 & 0\\\\\n 0 & 1 & 0 & \\frac{\\sqrt{3}}{2} \\\\\n 0 & 0 & \\frac{\\sqrt{3}}{2} & 0 \\\\\n \\end{bmatrix}\\cr\n &\\,J_y=\\begin{bmatrix}\n 0 & \\frac{-i\\sqrt{3}}{2} & 0 & 0 \\\\\n \\frac{i\\sqrt{3}}{2} & 0 & -i & 0\\\\\n 0 & i & 0 & \\frac{-i\\sqrt{3}}{2} \\\\\n 0 & 0 & \\frac{i\\sqrt{3}}{2} & 0 \\\\\n \\end{bmatrix}.\n\\end{eqnarray}\nThe energy dispersions are $E(k)=(\\lambda_{1}\\mp2\\lambda_{2})k^{2}-\\mu$ for the $j=3\/2$ and the $j=1\/2$ bands, respectively. The four bands come in doubly degenerate pairs as a result of time-reversal (with antiunitary operator $\\mathcal{T}=\\Gamma_1\\Gamma_3\\mathcal{K}$ and $\\vec{k}\\rightarrow -\\vec{k}$, where $\\mathcal{K}$ is complex conjugation) and inversion ($\\mathcal{I}=I_{4\\times4}$ and $\\vec{k}\\rightarrow -\\vec{k}$) symmetries. For $2\\lambda_{2}<\\lambda_{1}$\n($2\\lambda_{2}>\\lambda_{1}$), the degenerate bands curve the same (opposite) way, as shown in Figure.~\\ref{fig:LSMbare}. In the case of both bands bending the same way, Eq. (2) is widely used to model heavy- and light-hole bands in zinc-blende semiconductors \\cite{semiconductor1}, and many properties of such a dispersion have been studied in the literature, including a recent study on the realization of fully gapped topological superconductivity with \\emph{p}-wave pairing, which has states with exotic cubic and linear dispersions coexisting on the surface \\cite{congjunWuPRL16,GhorashiPRB2017}. On the other hand, when the bands bend oppositely, the model in Eq. (2) is known as a Luttinger semimetal with a QBT and is used to describe the behavior of certain pyrochlore iridates as well as some doped half-Heusler alloys such as LaPtBi \\cite{Chadov2010,Lin2010,halfheusler3}.\n\n\n\\subsection{Periodic driving}\nA general time-dependent problem with $H(t)=H_0+V(t)$ can be tackled using Floquet theory when $V(t+T)=V(t)$ is periodic. 
To proceed, we can expand the periodic potential in a Fourier series as\n\\begin{align}\n V(t)=V_0+\\sum^{\\infty}_{n=1} \\big(V_n e^{i\\omega nt}+V_{-n}e^{-i\\omega nt}\\big).\n\\end{align}\n In the fast driving regime, in which the driving frequency is larger than any natural energy scale in the problem, one can obtain the effective Hamiltonian and Floquet operators perturbatively up to $\\mathcal{O}(1\/\\omega^2)$ \\cite{Floquet3}. The Floquet operator, $\\mathcal{F}(t)$, i.e., the unitary time-evolution operator over one period of the drive, can be factorized as,\n\\begin{align}\n \\mathcal{F}(t)=\\exp[-i \\alpha\\Lambda(\\vec{k})]\\exp[-i H_L T]=\\exp[-i H_{eff} T].\n\\end{align}\nThe dynamics thus consist of three stages: an initial kick at $t_i$, the evolution of the system with $H(t)$ in the interval $t_f-t_i$, and a final kick at $t_f$, which describes the \"micromotion\" \\cite{Floquet3}. Then, the time-evolution operator can be expressed as,\n\\begin{align}\n \\hat{U}(t_i\\rightarrow t_f)= \\hat{U}(t_f)^{\\dagger}e^{-iH_{eff}(t_f-t_i)}\\hat{U}(t_i),\n\\end{align}\nwhere $\\hat{U}(t)=e^{-i\\mathcal{F}(t)}$. $\\mathcal{F}(t)$ is a time-periodic operator with zero average over one period. Let us set $t_i=0$; then $H_{eff}$ and $\\mathcal{F}(t)$ can be expanded as,\n\\begin{align}\n H_{eff}=\\sum_{n=0}^{\\infty}\\frac{1}{\\omega^n}H_{eff}^n,\\,\\,\\mathcal{F}(t)=\\sum_{n=1}^{\\infty}\\frac{1}{\\omega^n}\\mathcal{F}^n.\n\\end{align}\nA generic quantum $\\delta$-kick can be implemented with the following perturbation,\n\\begin{align}\n H_{kick}(t)=\\alpha \\Lambda(\\vec{k}) \\sum_{n=-\\infty}^{\\infty} \\delta(t-nT)\n\\end{align}\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.3\\textwidth]{LSM_G5p_05_kz.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_G5m_05_kz.pdf}\n \\caption{The band structure of LSM along the $k_z$ axis in the presence of uniform strain ($\\alpha \\Gamma_5$) with (a) $\\alpha=0.5$ (tensile) and (b) $\\alpha=-0.5$ (compression). $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used.}\n \\label{fig:G5}\n\\end{figure}\nwhere $T=2\\pi\/\\omega$, $\\alpha$ is the kicking strength and $\\Lambda(\\vec{k})$ is the matrix representation of a perturbation, which in general can be a function of momentum, $\\vec{k}$, and could be used to mimic a nonuniform kicking. Following a perturbative expansion, the effective Hamiltonian for the $\\delta$-kick, with $H_{kick,n}=\\alpha \\Lambda(\\vec{k})\/T\\,\\text{for all}\\,\\,n$, can be obtained as \\cite{Floquet3,kicking2},\n\\begin{align}\\label{effkick}\n H^{kick}_{eff}=&\\,H_L+\\frac{\\alpha\\Lambda(\\vec{k})}{T}+\\frac{1}{24}[[\\alpha\\Lambda(\\vec{k}),H_L],\\alpha\\Lambda(\\vec{k})]+\\mathcal{O}(1\/\\omega^3).\n\\end{align}\nWe note that the third term in Eq.~(\\ref{effkick}) can be dropped in the limit of weak kicks, i.e., to first order in the kick strength. Therefore, dynamical kicking is effectively equivalent to directly adding the kick term as a perturbation to the bare Hamiltonian, something which is not always relevant in equilibrium. In the rest of this work we focus only on the physics of the effective Hamiltonian and leave the \"micromotion\" physics for future studies. 
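\n\nAs a simple illustration of Eq.~(\\ref{effkick}) (not part of the original derivation), the following minimal Python sketch constructs the leading-order effective Hamiltonian $H_{eff}\\simeq h_L(\\vec{k})+\\alpha\\Lambda(\\vec{k})\/T$ for the nonuniform kick $\\Lambda(\\vec{k})=k_z\\Gamma_5$ analyzed in the following sections and locates the band-touching points numerically along $k_z$; the parameter values and the node-search tolerance are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\n# Spin-3\/2 matrices in the (3\/2, 1\/2, -1\/2, -3\/2) basis and Gamma_5 = J_z^2 - 5\/4.\ns3 = np.sqrt(3.0)\nJz = np.diag([1.5, 0.5, -0.5, -1.5]).astype(complex)\nJx = 0.5 * np.array([[0, s3, 0, 0], [s3, 0, 2, 0],\n                     [0, 2, 0, s3], [0, 0, s3, 0]], dtype=complex)\nJy = 0.5j * np.array([[0, -s3, 0, 0], [s3, 0, -2, 0],\n                      [0, 2, 0, -s3], [0, 0, s3, 0]], dtype=complex)\nI4 = np.eye(4, dtype=complex)\nG5 = Jz @ Jz - 1.25 * I4\n\nlam1 = lam2 = 0.6                   # illustrative values, as in the figures\nalpha, omega, h = 0.8, 20.0, 0.0    # kick strength, frequency, Zeeman field\nT = 2.0 * np.pi \/ omega\n\ndef h_eff(kx, ky, kz):\n    # Leading-order effective Hamiltonian: h_L(k) + (alpha\/T) k_z Gamma_5 + h J_z.\n    Jk = kx * Jx + ky * Jy + kz * Jz\n    hL = (lam1 + 2.5 * lam2) * (kx**2 + ky**2 + kz**2) * I4 - 2.0 * lam2 * (Jk @ Jk)\n    return hL + (alpha \/ T) * kz * G5 + h * Jz\n\n# Scan k_z (k_x = k_y = 0) and report local minima of the middle-band gap.\nkzs = np.linspace(-0.5, 2.5, 6001)\ngaps = np.array([np.diff(np.linalg.eigvalsh(h_eff(0.0, 0.0, kz)))[1] for kz in kzs])\nfor i in range(1, len(kzs) - 1):\n    if gaps[i] < gaps[i - 1] and gaps[i] <= gaps[i + 1] and gaps[i] < 1e-2:\n        print(f'candidate node near kz = {kzs[i]:+.3f} (gap {gaps[i]:.1e})')\n\\end{verbatim}\nFor these parameters the scan reproduces the two Dirac nodes discussed below, at $k_z=0$ and $k_z=\\alpha\/(2\\lambda_2 T)$.\n\n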
\\\\\nSimilarly, the effective Hamiltonian for the case of smooth driving of the form $H(t)=H_0+\\alpha \\Lambda(\\vec{k})\\cos(\\omega t)$ can be obtained as \\cite{Floquet3},\n\\begin{align}\\label{effsmooth}\n H^{cos}_{eff}=H_0+\\frac{1}{4\\omega^2}[[\\alpha \\Lambda(\\vec{k}),H_0],\\alpha \\Lambda(\\vec{k})]+\\mathcal{O}(1\/\\omega^3).\n\\end{align}\nUnlike Eq.~(\\ref{effkick}) for periodic kicking, the first correction to $H_0$ in the case of smooth driving is $\\mathcal{O}(\\alpha^2)$.\\\\\n\\indent In the following we first investigate the effect of uniform kicking ($k$-independent) and show that some of the known results can be retrieved. Then we turn to nonuniform periodic kicking and show that by introducing $k$-dependent $\\mathcal{IT}$-symmetric kicks, interestingly, different hybrid Dirac\/Weyl semimetals can be obtained. Finally, we briefly compare the case of smooth driving of Eq.~(\\ref{effsmooth}) with the results obtained by periodic kicking.\n\n\n \\section{Uniform kicking}\nThere are many possibilities for uniform kicking, including the five $\\Gamma_j$ and the ten commutators $\\Gamma_{ij}=[\\Gamma_i,\\Gamma_j]\/(2i)$ with $i > j$, which can be expressed in terms of products of an odd number of spin-$3\/2$ matrices. Here we focus on uniform kicks which are proportional to $\\Lambda=\\Gamma_j$. The effective Hamiltonian for such kicks, in a sufficiently weak kick limit, is given by,\n\\begin{align}\\label{effG5}\n H^j_{eff}=h_L(\\vec{k})+\\frac{\\alpha}{T}\\Gamma_j,\n\\end{align}\nwith spectrum,\n\\begin{align}\n E^j_{\\pm}(\\vec{k})=\\lambda_1k^2\\pm \\frac{\\sqrt{4\\lambda_2^2\\sum_i d^2_i T^2 - 4\\lambda_2\\alpha T d_j+\\alpha^2}}{T}.\n\\end{align}\n\nOf particular interest is the $\\Gamma_5$ kicking, which breaks rotational invariance and can be thought of as the effect of an external strain in the $z$ direction \\cite{Ruan2016,QBToriginal}. It is known that a uniaxial strain can realize two different phases in LSMs depending on the sign of $\\alpha$ \\cite{Ruan2016,QBToriginal,Weylpyro1,Weylpyro2,LSMmagnetic}. Figure.~\\ref{fig:G5}a(b) shows the band structure corresponding to the effective Hamiltonian of Eq.~(\\ref{effG5}) with $\\Gamma_j=\\Gamma_5$ and $\\alpha > 0$ ($\\alpha<0$), which represents a Dirac semimetal (trivial\/topological insulator).\n\nIn the case of a tensile strain, there are two Dirac nodes at $k_z=\\pm\\frac{\\alpha}{2\\lambda_2 T}$ ($\\alpha > 0 $). Now if we add an external magnetic field, we expect each Dirac node to split into two Weyl points and form Weyl semimetal phases due to the breaking of $\\mathcal{T}$ symmetry. Interestingly, even in the topological insulator limit of $\\alpha<0$, one can realize Weyl semimetals depending on the strength of the magnetic field \\cite{LSMmagnetic}. We also note that a magnetic field alone can drive a Luttinger semimetal into a Weyl semimetal phase \\cite{LSMmagnetic,Weylpyro2}. Moreover, in the experimentally relevant system of pyrochlore iridates, it has been shown that a magnetic field can give rise to a rich phase diagram of topological semimetals \\cite{Weylpyro2}. It is important to mention that an external magnetic field can have orbital effects for sufficiently large strength. 
However, here and in the rest of this paper, we neglect the orbital effects of the magnetic field for the sake of simplicity, following many works in the literature \\cite{Weylpyro1, Weylpyro2,LSMFloq1}, and we refer readers to some of the works which explored the effects of the magnetic field on Landau levels and quantum oscillations \\cite{LSMmagnetic,LSMquantumoscillation}. A detailed analysis of the effect of the magnetic field (and of possible pseudomagnetic fields due to other perturbations) is left for future studies.\n\n\\section{Nonuniform kicking}\nNow let us turn to the main part of this work, where we investigate the effect of nonuniform kicks. Similar to the uniform kicking discussed in the previous section, we can consider numerous possibilities for nonuniform driving. However, here we restrict ourselves to \"linear-in-momentum\" kicks of the form $k_i\\Gamma_i$, where the $\\Gamma_i$ are time-reversal even. The very first consequence of such a choice of kicks is the breaking of both $\\mathcal{T}$ and $\\mathcal{I}$ symmetries, while preserving their combination $\\mathcal{IT}$, which is a necessary symmetry for keeping the double degeneracy of the bands intact. Specifically, we focus on two types of nonuniform kicking: (a) $k_z\\Gamma_5$ and (b) $k_z\\Gamma_0$.\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.42\\textwidth]{Fig4a.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz05_kz-R.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz07_kz-R.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz11_kz-R.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz15_kz-R.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz20_kz-R.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz30_kz-R.pdf}\\includegraphics[width=0.42\\textwidth]{kzG5_PH_kxkz.pdf}\n \\caption{The band structure of LSM in the presence of $k_z\\Gamma_5$ with kick strength of $\\alpha=0.8$ and (a) $h=0$, $k_y-k_z$ plane showing quadratic dispersion along the $k_y$ for the node at $\\Gamma$ point, (b) $h=0.5$, (c) $h=0.7$, (d) $h=1.1$, (e) $h=1.5$, (f) $h=2$, (g) $h=3$ and (h) same as (a) but with $\\lambda_1=0$. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used in all plots (except (h) where $\\lambda_1=0$).}\n \\label{fig:kzG5}\n\\end{figure*}\n\\subsection{$\\Lambda=k_z\\Gamma_5$}\n\nAs mentioned above, $\\Gamma_5$ can be interpreted as the effect of strain in the $z$ direction \\cite{Ruan2016}; similarly, $k_z\\Gamma_5$ also breaks rotational invariance, so in principle we may still think of the $k_z\\Gamma_5$ term as a type of nonuniform strain or strain gradient, even though a full microscopic derivation of such a strain could shed more light. The presence of $k_z$ suggests that microscopically, on a lattice, $\\Gamma_5$ now acts non-locally and directly modifies hopping terms. On the other hand, one can convert $k_z\\Gamma_5$ to the lattice model and explicitly write it down as $\\sum_{\\sigma\\in\\{1\/2,3\/2\\}}(C^{\\dagger}_{k_{\\perp},i,\\sigma,\\uparrow}C_{k_{\\perp},j,\\sigma,\\uparrow}-C^{\\dagger}_{k_{\\perp},i,\\sigma,\\downarrow}C_{k_{\\perp},j,\\sigma,\\downarrow})\/2i+h.c.$ This is similar to the spin current operator along the $z$-axis. Therefore, we can also think of the $k_z\\Gamma_5$ kick as an applied spin current (or related to it) along the $z$ direction. However, it is essential to note that, besides rotational invariance, the $k_z\\Gamma_5$ kick also breaks inversion and time-reversal symmetries while preserving their combination. 
The bands of the driven system thus remain doubly degenerate. Therefore, here, without restricting ourselves to any particular microscopic interpretation, we emphasize that to achieve the results obtained in this section one needs perturbations which break both $\\mathcal{I}$ and $\\mathcal{T}$ but preserve $\\mathcal{IT}$.\\\\\nThe effective Hamiltonian for the $k_z\\Gamma_5$ kick can be written as $H_{eff}=h_{L}(\\vec{k})+\\frac{\\alpha}{T}k_z\\Gamma_5$, with the following spectrum,\n\\begin{align}\n E_{\\pm}(\\vec{k})=\\lambda_1k^2\\pm \\frac{\\sqrt{4\\lambda_2^2T^2\\sum_i d^2_i - 4\\lambda_2\\alpha T k_z d_5+\\alpha^2 k^2_z}}{T}.\n\\end{align}\nFigure.~\\ref{fig:kzG5} shows the spectrum in the $k_x-k_z$ plane. There are two Dirac nodes, at $k_z=0$ and $k_z=\\frac{\\alpha}{2\\lambda_2 T}$. While one of the nodes is always pinned to the $\\Gamma$ point, the other node can be tuned by the kick parameters. Therefore, for a fixed set of parameters, the distance between the two nodes for $k_z\\Gamma_5$ is half of that for the uniform strain $\\Gamma_5$. Moreover, we observe three main differences compared to the case with uniform strain shown in Figure.~\\ref{fig:G5}. (i) There is no major difference between the tensile ($\\alpha >0 $) and compressive ($\\alpha <0 $) strain; in both cases the system is driven into a Dirac semimetal phase. (ii) Unlike the uniform tensile strain, due to broken inversion symmetry the two Dirac nodes reside at different energies, and (iii) most importantly, one of the nodes is quadratic while the other is linear, realizing a unique \\emph{hybrid Dirac semimetal}. Interestingly, the \"quadratic\" Dirac node (or a QBT which is linearly dispersed in the $k_z$ direction) is pinned at the $\\Gamma$ point, and the \"energy difference\" as well as the distance between the nodes can be controlled with the kick strength. Moreover, while the linear Dirac node is tilted, the node centered around the $\\Gamma$ point shows no tilt, no matter how strong the kick is. It should be emphasized that, here, we define the hybrid Dirac semimetal mainly based on the different dispersions of the two Dirac nodes rather than their tilts, even though, as discussed above, they show different tilting (and types) as well. Moreover, we note that such a hybrid phase is unique to Dirac semimetals, because it is impossible to realize a Weyl semimetal with a pair of nodes having different dispersions (or magnitudes of monopole charge).\\\\\n\\indent Next, let us apply an external magnetic field $h J_z$, where $h$ denotes the strength of the magnetic field. The magnetic field splits the Dirac nodes into Weyl nodes along the $k_z$ direction. In the presence of a magnetic field, an analytical expression for the eigenvalues for general $\\lambda_1,\\lambda_2$ cannot be obtained, so in the following we proceed by solving the model numerically. Figure.~\\ref{fig:kzG5} depicts the evolution of the Weyl nodes with the strength of the external magnetic field. Starting from $h \\ll \\alpha $, there are eight nodes on the $k_z$ axis, four single and four double Weyl nodes, which are indicated in Figure.~\\ref{fig:kzG5} by the red and blue dots, respectively. Interestingly, we observe that one or more of the Weyl pairs realize \\emph{hybrid Weyl phases}, and these can survive up to a fairly strong magnetic field (Figure.~\\ref{fig:kzG5}(b-f)). While the other Weyl nodes do not possess hybrid types, the two nodes of the same pair show different tilts. 
Only at a very strong magnetic field (Figure.~\\ref{fig:kzG5}(g)) do all of the Weyl points show a type-I structure.\nMoreover, we see that, with increasing magnetic field, four of the initial eight nodes merge and gap out, and we are left with only four nodes (two single and two double nodes) in the $h\\gg\\alpha$ limit. \\\\\n\\indent Finally, we comment on the particle-hole symmetric limit of $\\lambda_1=0$. In this limit we can obtain the energies analytically even in the presence of an external magnetic field,\n\\begin{align}\n E^{(1)}_{\\pm}(\\vec{k})=&\\pm\\frac{h}{2}+(2\\lambda_2 k^2_z-\\frac{\\alpha k_z}{T}),\\cr E^{(2)}_{\\pm}(\\vec{k})=&\\pm\\frac{3h}{2}-(2\\lambda_2 k^2_z-\\frac{\\alpha k_z}{T}),\n\\end{align}\nwith eight nodes located at,\n\\begin{align}\n k^{\\pm,\\pm}_{z,I}=&\\,\\frac{\\alpha\\pm \\sqrt{\\alpha^2\\pm 4h\\lambda_2T^2}}{4\\lambda_2 T},\\cr\n k^{\\pm,\\pm}_{z,II}=&\\,\\frac{\\alpha\\pm \\sqrt{\\alpha^2\\pm 8h\\lambda_2T^2}}{4\\lambda_2 T}.\n\\end{align}\nAs is clear from the above equations, four out of the eight nodes, $k^{\\pm,-}_{z,I,II}$, gap out with increasing $h$. However, it is noteworthy that, even though a \\emph{hybrid dispersion Dirac semimetal phase} can still be generated, in the limit of $\\lambda_1=0$ the two nodes have the same energy (Figure.~\\ref{fig:kzG5}(h)); therefore, a hybrid Weyl phase can no longer be achieved in the presence of an external magnetic field.\n\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_kz.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz02.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz04.pdf}\n\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz06.pdf}\n\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz08.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz15.pdf}\n\\caption{The band structure of LSM with tilted QBT with kick strength of $\\alpha=0.6$ and (a) $h=0$, (b) $h=0.2$, (c) $h=0.4$, (d) $h=0.6$, (e) $h=0.8$ and (f) $h=1.5$. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used in all plots.}\n\\label{fig:kzG0}\n\\end{figure*}\n\\subsection{$\\Lambda=k_z\\Gamma_0$: Tilted QBT}\n\nThe second class of nonuniform kicks which we investigate in this work is $\\Lambda=k_z\\Gamma_0$. The effective spectrum for such driving can be obtained as,\n\\begin{align}\\label{tQBT}\n E_{\\pm}(\\vec{k})=(\\lambda_{1}\\mp2\\lambda_{2})k^{2}+\\frac{\\alpha k_z}{T}.\n\\end{align}\nThe mere effect of such driving is to effectively tilt the QBT in the $k_z$ direction, as shown in Figure.~\\ref{fig:kzG0}(a). However, the tilted QBT behaves very differently in the presence of an external perturbation. To clarify further, we rewrite Eq.~(\\ref{tQBT}) at $k_x=k_y=0$ as,\n\\begin{align}\n E_{\\pm}(\\vec{k})=&\\,(\\lambda_{1}\\mp2\\lambda_{2})k_z^{2}+\\frac{\\alpha k_z}{T}\\cr\n =&\\,(\\lambda_{1}\\mp2\\lambda_{2})(k_z+\\mathcal{A}_z)^{2}\\cr\n =&\\,(\\lambda_{1}\\mp2\\lambda_{2})k_z^2+2(\\lambda_{1}\\mp2\\lambda_{2})\\mathcal{A}_z k_z+(\\lambda_{1}\\mp2\\lambda_{2})\\mathcal{A}^2_z\\cr\n \\simeq&\\,(\\lambda_{1}\\mp2\\lambda_{2})k_z^2+2(\\lambda_{1}\\mp2\\lambda_{2})\\mathcal{A}_z k_z,\n\\end{align}\nwhere in the last line we assumed $\\mathcal{A}_z\\ll1$ and neglected the constant term. Now, by comparing the first and the last line of the above equation, we obtain $\\mathcal{A}_z=\\frac{\\alpha}{2(\\lambda_{1}\\mp2\\lambda_{2})T}$. Therefore, we can think of a tilted QBT as a QBT with a pseudo-electromagnetic potential $\\mathcal{A}= (0, 0, \\mathcal{A}_z)$ which is proportional to the kick strength. 
In particular, interesting nontrivial topological phases, which are otherwise absent in a conventional QBT, can emerge by applying an external magnetic field in the presence of the tilt. As we mentioned previously, it is known that in the presence of a magnetic field the QBT splits into multiple Weyl points. However, when the QBT is tilted, due to the competition between the external magnetic field and the pseudo-magnetic field associated with $\\mathcal{A}$ (proportional to the kick strength), there are various regimes in which different types of WSM phases can be generated. Starting with the weak field limit ($\\alpha \\gg h$), there are two pairs of nodes, two double and two single nodes. In this regime, all Weyl nodes have a type-II nature (Figure.~\\ref{fig:kzG0}(b)). Upon further increasing the field, for intermediate fields ($h \\lesssim \\alpha$), hybrid Weyl pairs are realized, where one node in each of the single and double pairs is type-II and the other is type-I. Interestingly, the realized hybrid WSM can survive up to a very strong magnetic field. This could be experimentally beneficial, as it demonstrates that the hybrid WSMs generated here are accessible over a broad range of external fields. Similar to the $k_z\\Gamma_5$ kick, we can obtain analytical expressions for the energies and node positions in the particle-hole symmetric limit,\n\\begin{align}\n E^{(1)}_{\\pm}(\\vec{k})=&\\pm\\frac{h}{2}+(2\\lambda_2 k^2_z+\\frac{\\alpha k_z}{T}),\\cr E^{(2)}_{\\pm}(\\vec{k})=&\\pm\\frac{3h}{2}-(2\\lambda_2 k^2_z-\\frac{\\alpha k_z}{T}),\n\\end{align}\nand,\n\\begin{align}\n k^{\\pm}_{z,I}=\\pm\\sqrt{\\frac{h}{2\\lambda_2}},\\,\\,k^{\\pm}_{z,II}=\\pm\\frac{\\sqrt{h}}{2\\sqrt{\\lambda_2}}=\\frac{k^{\\pm}_{z,I}}{\\sqrt{2}}.\n\\end{align}\nUnlike for the $k_z\\Gamma_5$ kick, the presence of particle-hole symmetry does not prevent the generation of hybrid Weyl phases, because the $k_z\\Gamma_0$ kick does not generate any additional Dirac nodes and, as mentioned before, only tilts the QBT. Moreover, the positions of the Weyl nodes are independent of the kick strength and can be tuned solely using the magnetic field.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{kzG5smooth_5_kxkz.pdf}\n \\includegraphics[width=0.4\\textwidth]{kzG5smooth_5_kykz.pdf}\n \\caption{The band structure of the LSM in the $k_x-k_z$ plane ($k_y=0$) in the presence of $k_z\\Gamma_5$ smooth driving with $\\alpha=5$. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used.}\n \\label{fig:kzG5smooth}\n\\end{figure}\n\n\\subsection{Smooth driving}\nHere, we briefly discuss the case of smooth driving using Eq.~(\\ref{effsmooth}) and make a comparison with the results obtained by periodic kicking in the previous sections. First of all, any type of perturbation which is proportional to the identity will not modify the system in the case of smooth driving, as it obviously commutes with the Hamiltonian. However, the perturbation $\\Lambda(\\vec{k})=k_z\\Gamma_5$ can modify the system when applied via smooth driving. We obtain the effective Hamiltonian as,\n\\begin{align}\n H^{cos}_{eff}(\\vec{k})=&H_0+\\frac{\\lambda_2\\alpha^2k^2_z}{\\omega}\\bigg(2k_xk_z\\{J_x,J_z\\}+2k_yk_z\\{J_y,J_z\\}\\cr\n +&(k^2_x-k^2_y)(J^2_x-J^2_y)+2k_xk_y\\{J_x,J_y\\}\\bigg),\n\\end{align}\nwith energies,\n\\begin{align}\n E^{\\pm}(\\vec{k})=\\lambda_1 k^2\\pm \\frac{\\lambda_2\\sqrt{f(\\vec{k})\\left[k^2_z\\alpha^2-2\\omega^2\\right]+4k^4\\omega^4}}{\\omega^2},\n\\end{align}\nwhere $f(\\vec{k})=3 (k_x^2+k_y^2)k_z^2 (k_x^2+k_y^2 +4k_z^2)\\alpha^2$ and each $E^{\\pm}$ is doubly degenerate. 
Figure.~\\ref{fig:kzG5smooth} shows the spectrum in the $k_y=0$ plane, where four linear Dirac nodes coexist with a QBT located at the $\\Gamma$ point. A similar plot can be obtained for the $k_x=0$ plane; the system also possesses line nodes in the $k_x-k_y$ plane at $k_z\\neq 0$. Therefore, in addition to the coexistence of Dirac nodes and a QBT in the $k_x-k_z$ and $k_y-k_z$ planes, smooth driving can lead to a richer phase diagram. This analysis was only for the purpose of comparison with the kicked driving results, so a detailed analysis of such models is left for future work.\n\n\\subsection{Effect of Lattice Regularization}\n\nLet us now look at the effect of lattice regularization. One typical effect of a lattice model versus a continuum model is the appearance of additional nodes, usually at the boundaries of the Brillouin zone \\cite{Ghorashi2018}. However, we show that the main features discussed in the previous subsections survive in lattice models. For the sake of brevity we restrict ourselves to the $h=0$ limit. We consider a cubic lattice, which captures the physics of the continuum model for $\\alpha,\\omega,\\vec{k}\\ll1$. In the $k_z$ cut (Figure.~\\ref{fig:latt}), we find: (i) two nodes at the BZ boundaries, $k_z=\\pm \\pi$, as expected; (ii) one node at the $\\Gamma$ point with quadratic dispersion in $k_i \\perp k_z$, in agreement with the continuum model; and (iii) due to the lattice regularization, two nodes (instead of the single node at $k_z=\\frac{\\alpha}{2\\lambda_2 T}$ in the continuum model) with linear dispersion away from the boundaries and the $\\Gamma$ point, located at $k_z=\\sin^{-1}(\\frac{\\alpha}{2\\lambda_2 T})$ and $k_z=\\pi-\\sin^{-1}(\\frac{\\alpha}{2\\lambda_2 T})$. Figure.~\\ref{fig:latt} shows the lattice band structure along $k_z$ for the case of $k_z\\Gamma_5$ driving. Therefore, we confirm that the generation of the hybrid dispersion Dirac semimetal (and consequently Weyl semimetal) persists in the lattice model.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{LSM_kzG5_03_lattice.pdf}\n \\caption{The $k_x=k_y=0$ cut of the band structure of the LSM on a cubic lattice in the presence of $\\alpha\\sin(k_z)\\Gamma_5$ ($\\alpha=0.3$) along the $k_z$ axis. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used.}\n \\label{fig:latt}\n\\end{figure}\n\\section{DISCUSSION AND CONCLUDING REMARKS}\n\nWe have proposed a dynamical way to realize hybrid Dirac and Weyl semimetals in a three-dimensional Luttinger semimetal by applying a nonuniform (momentum-dependent) periodic $\\delta$-kick. We explicitly demonstrated this through two examples of nonuniform kicking which break both inversion and time-reversal symmetries while preserving their combination. We have identified the first example of an unusual hybrid Dirac semimetal phase where the two nodes not only have different types but also have different dispersions (a linear Dirac node and a QBT coexist). Then, by applying an external magnetic field, we demonstrated the emergence of hybrid Weyl semimetals. Next, we found that the combination of a tilted QBT with an external magnetic field provides a promising setup for the generation of hybrid Weyl semimetals. Moreover, by interpreting the tilted QBT as a QBT with an emergent pseudomagnetic field proportional to the kick strength, we discussed the interplay between the kick strength and the external magnetic field. 
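\n\nAs a minimal numerical check of the lattice results quoted in the previous subsection (a sketch added for illustration, not part of the original analysis), the code below evaluates the $k_x=k_y=0$ cut of a lattice-regularized model at the predicted node positions. The specific regularization, $k_z\\rightarrow\\sin k_z$ in the kick term and $k_z^2\\rightarrow\\sin^2 k_z$ in the quadratic terms, is an assumption chosen because it reproduces the quoted positions; it is not necessarily the lattice model used for Figure.~\\ref{fig:latt}.\n\\begin{verbatim}\nimport numpy as np\n\nJz = np.diag([1.5, 0.5, -0.5, -1.5]).astype(complex)\nI4 = np.eye(4, dtype=complex)\nG5 = Jz @ Jz - 1.25 * I4            # Gamma_5 = J_z^2 - 5\/4\n\nlam1 = lam2 = 0.6\nalpha, omega = 0.3, 20.0\nT = 2.0 * np.pi \/ omega\n\ndef h_latt(kz):\n    # k_x = k_y = 0 cut with k_z -> sin(k_z) (kick) and k_z^2 -> sin(k_z)^2.\n    s = np.sin(kz)\n    hL = (lam1 + 2.5 * lam2) * s**2 * I4 - 2.0 * lam2 * s**2 * (Jz @ Jz)\n    return hL + (alpha \/ T) * s * G5\n\nx = alpha \/ (2.0 * lam2 * T)        # sin(k_z) at the two interior nodes\nfor kz in [0.0, np.arcsin(x), np.pi - np.arcsin(x), np.pi]:\n    e = np.linalg.eigvalsh(h_latt(kz))\n    print(f'kz = {kz:+.3f}: middle-band gap = {e[2] - e[1]:.1e}')\n\\end{verbatim}\nWith $\\alpha=0.3$, $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$, the middle-band gap vanishes to numerical precision at all four listed momenta, consistent with the node positions stated above.\n\n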
\\\\\n\\indent We note that the experimental realization of the models discussed in this work can be difficult, specifically for the case of uniform strain, which requires very fast time scales for periodic driving. Despite all the difficulties, there are some proposals for the fast dynamical generation of strain \\cite{straindyn1,straindyn2,kickgraphene}. However, the possibility of interpreting the $k_z\\Gamma_5$ kick as an applied spin current (or proportional to it) along the $z$-direction is potentially a promising route, considering the recent developments in the field of ultrafast spintronics \\cite{ultrafastspin}. Moreover, a 2D QBT has already been proposed to be realized in optical lattices \\cite{QBToptical}; therefore, a 3D QBT could in practice be realized in optical lattices as a potential experimental setup where parameters can be tuned at will.\\\\\n\\indent LSMs describe the low-energy physics of many experimental candidates. Therefore, this work opens up a promising route to the realization of various hybrid Dirac and Weyl semimetals, and it could motivate further studies of the so far largely unexplored physical properties of hybrid Weyl semimetals. \\\\\nSome possible future directions are the investigation of bulk and surface transport properties of the hybrid dispersion Dirac semimetals introduced here, in particular how some of the most interesting properties of a typical Dirac semimetal, such as transport anomalies \\cite{review1}, would be modified. Moreover, it would be interesting to see whether other multi-band systems with higher spin can show similar physics.\n\n\\section*{Acknowledgement}\nWe thank Matthew Foster and Bitan Roy for useful comments. We also acknowledge useful comments and suggestions from the anonymous referees which helped to improve this manuscript. This work was supported by the U.S. Army Research Office Grant No. W911NF-18-1-0290. We also acknowledge partial support from NSF CAREER Grant No. DMR-1455233 and ONR Grant No. ONR-N00014-16-1-3158.\n\n
\\\\\n\\indent On the other hand, DSMs\/WSMs can also be classified into type-I and type-II, based on the tilting of their nodes. In the standard type-I WSMs, Fermi surfaces are point-like while in the type-II WSMs, Weyl nodes are tilted resulting in formation of electron and hole pockets producing finite density of states at the Fermi level \\cite{typeII1,typeII2}. In a conventional type-I and II WSMs, two Weyl nodes with opposite chiralities have same types, however, recently, a theoretical proposal \\cite{hybridWeyloriginal}, introduced a new WSM where a pair of Weyl nodes with different chiralities can have different types, forming the so-called \\emph{hybrid Weyl semimetals}.\\\\\n\\indent Besides the DSMs\/WSMs, another class of three-dimensional nodal semimetals are Luttinger semimetals (LSMs) where possess a quadratic band touching (QBT) point between doubly degenerate valence and conduction bands of $J=3\/2$ (effective) fermions at an isolated point in the Brillouin zone.\nThe LSM provides the low-energy description for a plethora of both strongly and weakly correlated compounds, such as the 227 pyrochlore iridates (Ln2Ir2O7, with Ln being a lanthanide element) \\cite{LSMapp1,LSMapp2, LSMapp3,LSMapp4}, half-Heusler compounds (ternary alloys such as LnPtBi, LnPdBi) \\cite{LSMapp5,LSMapp6}, HgTe \\cite{semiconductor1,semiconductor2,QBToriginal,Ruan2016,mottQBT}, and gray-tin \\cite{LSMapp7,LSMapp8}.\nMoreover, LSMs proved to show many interesting behaviors\\cite{LSMprop1,LSMprop2,LSMprop3,LSMprop4,LSMprop5}, specially in the presence of interaction, for example, investigation of the magnetic and superconducting orders actively have been explored \\cite{LSMapp9, Roy-Ghorashi2019, GhorashiPRB2017, GhorashiPRL2018, Ghorashi2019,SatoSC32,LSMBoettcher2,LSMSCLiu1,LSMBoettcher1,Szabo-Bitan32-2018}.\\\\\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{FlashpickickedLSMv4.png}\n \\caption{Schematic picture of summary of the results obtained in this work. Starting from a Luttinger semimetal via two different nonuniform ($k$-dependent) periodic kickings, where break inversion ($\\mathcal{I}$) and time-reversal ($\\mathcal{T}$) symmetries while preserving their combinations ($\\mathcal{IT}$), we have obtained a \\emph{hybrid dispersion Dirac semimetals} (e.g., $k_z\\Gamma_5$ where $\\Gamma_i$ are Dirac matrices [see text for details]) and tilted LSM (e.g., $k_z\\Gamma_0$). Then, by applying an external magnetic field, $J_z$, in parallel to the direction of nodes (or kick direction), \\emph{hybrid Weyl phases} can be generated.}\n \\label{fig:adpic}\n\\end{figure*}\n\\indent The coexistence of various Weyl nodes with different charges and\/or types is an interesting phenomena that could help towards understanding as well as manipulation of the properties of various Weyl nodes in an equal footing setup. Besides a few works reporting the coexistence of type-I and II Dirac\/Weyl nodes \\cite{hybridWSMexp,hybridDSM1,hybridWSM1,WSMexp1&2,weylcoexPRL}, there have been also proposals\\cite{LSMFloq1,Ghorashi2018}, claiming dynamical generation of various Weyl nodes of different types and\/or charges in one system. Application of the light is shown to be a powerful method to change the material properties \\cite{FloquetRev}. In particular, the conversion of a topologically trivial phase into a nontrivial one using periodic driving has attracted enormous attention in the past decade \\cite{FloquetRev,Floquet1, Floquet2, Floquet3}. 
Specifically many proposals on Floquet WSMs in various systems exist, such as Dirac semimetals \\cite{FloqWeyl1,FloqWeyl2}, band insulators \\cite{FloqWeyl3}, stacked graphene \\cite{FloqWeyl4}, line-nodal semimetals \\cite{FloqWeyl2,FloqWeyl5}, and crossing-line semimetals \\cite{FloqWeyl6,FloqWeyl7}. Also, proposals have been made to create tunable WSMs in pyrochlore iridates with Zeeman fields \\cite{Weylpyro1,Weylpyro2}. Very recently, using circular\/elliptic polarized light on Luttinger Hamiltonian in the high-frequency limit we have shown a very rich phase diagram of various Weyl semimetals, including coexistence of type-I and II as well as single and double Weyl nodes \\cite{Ghorashi2018} . \\\\\n \\indent Despite the several dynamical proposals for the generation of different Weyl phases, a promising setup for the realization of hybrid Dirac and Weyl semimetals is still lacking. In this work, we tackle this issue by an alternative way of periodic driving, in particular the periodic kicking. Using the periodic $\\delta$-function kicks can typically simplify theoretical studies by allowing to perform calculations analytically to a large extent (in contrast to sinusoidal driving or elliptical\/circular light) \\cite{ Floquet3,kicking2}.\nHowever, for the sake of comparison we also briefly discuss the smooth driving case to show that some of the features of our discussion can be hold up in smooth driving setup as well as long as the perturbation breaks inversion (uniaxially) and time-reversal symmetres but preserve their combinations. In this paper, we show two examples of such perturbations which along with an external magnetic field induce various hybrid Dirac and Weyl phases, including a new \\emph{hybrid dispersion Dirac semimetal}. Figure.~\\ref{fig:adpic} summarizes the result of this work.\n\n\\section{Model and Formalism}\n\\subsection{Model}\nWe start with reviewing the main ingredients of Luttinger Hamiltonian in the non-equilibrium limit, which can be represented as,\n\n\\begin{align}\n H_{L}(\\vec{k})=\\int \\frac{d^3\\vec{k}}{(2\\pi)^3}\\Psi^{\\dagger}_{\\vec{k}} h_L(\\vec{k}) \\Psi_{\\vec{k}},\n\\end{align}\nwhere\n\\begin{align}\n h_{L}(\\vec{k})=&(\\frac{k^2}{2m_0}-\\mu)\\Gamma_0-\\frac{1}{2m_1}\\sum^3_{a=1} d_a(\\vec{k})\\Gamma_a\\cr\n -&\\frac{1}{2m_2}\\sum^5_{a=4} d_a(\\vec{k})\\Gamma_a\n\\end{align}\nwhere $k^2=k^2_x+k^2_y+k^2_z$ and,\n\\begin{align}\n \\Psi^T_{k}= (c_{\\vec{k},3\/2}, c_{\\vec{k},1\/2}, c_{\\vec{k},-1\/2}, c_{\\vec{k},-3\/2}).\n\\end{align}\n$\\mu$ is the chemical potential measured from the band touching point. $\\Gamma_a$ are the well-known gamma matrices which are given by,\n\\begin{align}\n \\Gamma_1=\\tau_3\\sigma_2,\\,\\, \\Gamma_2=\\tau_3\\sigma_1,\\,\\, \\Gamma_3=\\tau_2,\\,\\,\n \\Gamma_4=\\tau_1,\\,\\, \\Gamma_5=\\tau_3\\sigma_3,\n\\end{align}\n and satisfy $\\{\\Gamma_a,\\Gamma_b\\}=\\delta_{a,b}$, while $\\Gamma_0$ is four dimensional identity matrix. $\\tau$ and $\\sigma$ denote space of sign and magnitude of spin projection $m_s\\in\\{\\pm 3\/2, \\pm 1\/2\\}$, respectively. 
$d_a(\\vex{k})$ are given as,\n \\begin{align}\n d_1=\\sqrt{3}k_y k_z,\\,\\, d_2=\\sqrt{3}k_x k_z,\\,\\, d_3=\\sqrt{3}k_x k_y,\\,\\,\\cr\n d_4=\\frac{\\sqrt{3}}{2}(k_x^2-k_y^2),\\,\\, d_5=\\frac{1}{2}(2k_z^2-k_x^2-k_y^2)\n \\end{align}\n The five mutually anticommuting $\\Gamma$ matrices can be written\nin terms of spin-$3\/2$ matrices according to,\n\\begin{align}\n \\Gamma_1=&\\frac{1}{\\sqrt{3}}\\{J_y,J_z\\},\\,\\,\\Gamma_2=\\frac{1}{\\sqrt{3}}\\{J_x,J_z\\},\\,\\,\\Gamma_3=\\frac{1}{\\sqrt{3}}\\{J_x,J_y\\},\\cr\n \\Gamma_4=&\\frac{1}{\\sqrt{3}}(J^2_x-J^2_y),\\,\\,\\Gamma_5=J^2_z-\\frac{5}{4},\n\\end{align}\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.3\\textwidth]{LSM_curveup.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_curveopposite.pdf}\n \\caption{The band structure for LSM along the $k_z$ axis with $\\lambda_1=0.6$ and (a) $\\lambda_2=0.1$, (b) $\\lambda_2=0.6$.}\n \\label{fig:LSMbare}\n\\end{figure}\n Here we take the isotropic limit of $m_1=m_2 \\equiv m$. Therefore, the Luttinger Hamiltonian can also be written in an alternative way,\n\\begin{align}\n h_L(\\vec{k})= [(\\lambda_1+5\\lambda_2\/2)k^2-\\mu]\\Gamma_0-2\\lambda_2(\\vec{J}.\\vec{k})^2\n\\end{align}\n with $\\vec{J}=(J_x,J_y,J_z)$ and $\\vec{k}=(k_x,k_y,k_z)$ and we used $\\lambda_1=1\/2m_0$ and $\\lambda_2=1\/4m$. $J_{x,y,z}$ are effective spin-$3\/2$ operators,\n \\begin{eqnarray}\n&\\,J_z=\\begin{bmatrix}\n \\frac{3}{2} & 0 & 0 & 0 \\\\\n 0 & \\frac{1}{2} & 0 & 0\\\\\n 0 & 0 & -\\frac{1}{2} & 0 \\\\\n 0 & 0 & 0 & -\\frac{3}{2} \\\\\n \\end{bmatrix},J_x=\\begin{bmatrix}\n 0 & \\frac{\\sqrt{3}}{2} & 0 & 0 \\\\\n \\frac{\\sqrt{3}}{2} & 0 & 1 & 0\\\\\n 0 & 1 & 0 & \\frac{\\sqrt{3}}{2} \\\\\n 0 & 0 & \\frac{\\sqrt{3}}{2} & 0 \\\\\n \\end{bmatrix}\\cr\n &\\,J_y=\\begin{bmatrix}\n 0 & \\frac{-i\\sqrt{3}}{2} & 0 & 0 \\\\\n \\frac{\\sqrt{3}}{2} & 0 & -i & 0\\\\\n 0 & 1 & 0 & \\frac{-i\\sqrt{3}}{2} \\\\\n 0 & 0 & \\frac{\\sqrt{3}}{2} & 0 \\\\\n \\end{bmatrix}.\n\\end{eqnarray}\nThe energy dispersions are $E(k)=(\\lambda_{1}\\mp2\\lambda_{2})k^{2}-\\mu$ for the $j=3\/2$ and the $j=1\/2$ bands, respectively. Four bands come in doubly degenerate pairs as a result of time-reversal (with antiunitary operator $\\mathcal{T}=\\Gamma_1\\Gamma_3\\mathcal{K}$ and $\\vec{k}\\rightarrow -\\vec{k}$ where $\\mathcal{K}$ is complex conjugation) and inversion ($\\mathcal{I}=I_{4\\times4}$ and $\\vec{k}\\rightarrow -\\vec{k}$) symmetries. For $\\lambda_{2}<2\\lambda_{1}$\n($\\lambda_{2}>2\\lambda_{1}$), the degenerate bands curve the same (opposite) way as shown in Figure.~(\\ref{fig:LSMbare}). In the case of both bands bending the same way, Eq. (2) is widely used to model heavy- and light-hole bands in zinc-blende semiconductors \\cite{semiconductor1} and many properties of such a dispersion have been studied in the literature, including a recent study on the realization of fully gapped topological superconductivity with \\emph{p}-wave pairing which has states with exotic cubic and linear dispersions coexisting on the surface \\cite{congjunWuPRL16,GhorashiPRB2017}. On the other hand, when bands bend oppositely, the model in Eq. (2) is known as Luttinger semimetal with QBT and is used to describe behavior of certain pyrochlore iridates as well as some doped half-Heusler alloys such as LaPtBi \\cite{Chadov2010,Lin2010,halfheusler3}.\n\n\n\\subsection{Periodic driving}\nA general time-dependent problem with $H(t)=H_0+V(t)$, can be tackled using Floquet theory when $V(t+T)=V(t)$ is periodic. 
To proceed, we can expand the periodic potential in a Fourier series as\n\\begin{align}\n V(t)=V_0+\\sum^{\\infty}_{n=1} \\big(V_n e^{i\\omega nt}+V_{-n}e^{-i\\omega nt}\\big).\n\\end{align}\n In the limit of fast driving regime, in which the driving frequency is larger than any natural energy scale in the problem, one can obtain the effective Hamiltonian and Floquet operators perturbatively up to $\\mathcal{O}(1\/\\omega^2)$ \\cite{Floquet3}. The Floquet operator, $\\mathcal{F}(t)$, is the unitary time-evolution $\\hat{U}(t)$ after one period of drive can be factorized as,\n\\begin{align}\n \\mathcal{F}(t)=\\exp[-i \\alpha\\Lambda(\\vec{k})]\\exp[-i H_L T]=\\exp[-i H_{eff} T].\n\\end{align}\nSo the dynamics can have three stages: initial kick at $t_i$, the evolution of system with $H(t)$ in the interval $t_f-t_i$ and final kick at $t_f$ which describes the \"micromotion\" \\cite{Floquet3}. Then, the time-evolution operator can be expressed as,\n\\begin{align}\n \\hat{U}(t_i\\rightarrow t_f)= \\hat{U}(t_f)^{\\dagger}e^{-iH_{eff}(t_f-t_i)}\\hat{U}(t_i),\n\\end{align}\nwhere $\\hat{U}(t)=e^{-i\\mathcal{F}(t)}$. $\\mathcal{F}(t)$ is a time-periodic operator with zero average over one period. Lets set $t_i=0$, then $H_{eff}$ and $\\mathcal{F}(t)$ can be expanded as,\n\\begin{align}\n H_{eff}=\\sum_{n=0}^{\\infty}\\frac{1}{\\omega^n}H_{eff}^n,\\,\\,\\mathcal{F}(t)=\\sum_{n=1}^{\\infty}\\frac{1}{\\omega^n}\\mathcal{F}^n.\n\\end{align}\nA generic quantum $\\delta$-kick can be implemented with following perturbation,\n\\begin{align}\n H_{kick}(t)=\\alpha \\Lambda(\\vec{k}) \\sum_{n=-\\infty}^{\\infty} \\delta(t-nT)\n\\end{align}\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.3\\textwidth]{LSM_G5p_05_kz.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_G5m_05_kz.pdf}\n \\caption{The band structure of LSM along the $k_z$ axis in the presence of uniform strain ($\\alpha \\Gamma_5$) with (a) $\\alpha=0.5$ (tensile) and (b) $\\alpha=-0.5$ (compression). $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used.}\n \\label{fig:G5}\n\\end{figure}\nwhere $T=2\\pi\/\\omega$, $\\alpha$ is the kicking strength and $\\Lambda(\\vec{k})$ is the matrix representation of a perturbation, which in general can be a function of momentum, $\\vec{k}$, and could be used to mimic a nonuniform kicking. Following a perturbative expansion the effective Hamiltonian for $\\delta$-kick, with $H_{kick,n}=\\alpha \\Lambda(\\vec{k})\/T\\,\\text{for all}\\,\\,n$, can be obtained as \\cite{Floquet3,kicking2},\n\\begin{align}\\label{effkick}\n H^{kick}_{eff}=&\\,H_L+\\frac{\\alpha\\Lambda(\\vec{k})}{T}+\\frac{1}{24}[[\\alpha\\Lambda(\\vec{k}),H_L],\\alpha\\Lambda(\\vec{k})]+\\mathcal{O}(1\/\\omega^3).\n \n \n\\end{align}\nWe note that the third term in Eq.~(\\ref{effkick}), can be dropped in the limit of weak kicks, i.e, to the first order of kick strength. Therefore, a dynamical kicking effectively is equivalent to directly adding the kick term as a perturbation to the bare Hamiltonian, something which is not always relevant in the equilibrium. In the rest of this work we only focus on the physics of the effective Hamiltonian and we leave \"micromotion\" physics for future studies. 
\\\\\nSimilarly, the effective Hamiltonian for the case of smooth driving of the form $H(t)=H_0+\\alpha \\Lambda(\\vex{k})\\cos(\\omega t)$ can be obtained as \\cite{Floquet3},\n\\begin{align}\\label{effsmooth}\n H^{cos}_{eff}=H_0+\\frac{1}{4\\omega^2}[[\\alpha \\Lambda(\\vec{k}),H_0],\\alpha \\Lambda(\\vec{k})]+\\mathcal{O}(1\/\\omega^3).\n\\end{align}\nUnlike Eq.~(\\ref{effkick}) of periodic kicking, the first correction to the $H_0$ in the case of smooth driving is $\\mathcal{O}(\\alpha^2)$.\\\\\n\\indent In the following we first investigate the effect of uniform kicking ($k$-independent) and show that some of the known results can be retrieved. Then we turn to nonuniform periodic kicking and show that by introducing $k$-dependent $\\mathcal{IT}$ symmetric kicks, interestingly, different hybrid Dirac\/Weyl semimetals can be obtained. Finally, we briefly compare the case of smooth driving of Eq.~(\\ref{effsmooth}) with the results obtained by periodic kicking.\n\n\n \\section{Uniform kicking}\nThere are many possibilities of uniform kicking, including: five $\\Gamma_j$, and ten commutators $\\Gamma_{ij}=[\\Gamma_i,\\Gamma_j]\/(2i)$ with $i > j$ , which can be expressed in terms of the products of odd number of spin-$3\/2$ matrices. Here we focus on uniform kicks which are proportional to $\\Lambda=\\Gamma_j$. The effective Hamiltonian for such kicks, in a sufficiently weak kick limit, is given by,\n\\begin{align}\\label{effG5}\n H^j_{eff}=h_L(\\vec{k})+\\frac{\\alpha}{T}\\Gamma_j,\n\\end{align}\nwith spectrum,\n\\begin{align}\n E^j_{\\pm}(\\vec{k})=\\lambda_1k^2\\pm \\frac{\\sqrt{4\\lambda_2^2\\sum_i d^2_i T^2 - 4\\lambda_2\\alpha T d_j+\\alpha^2}}{T},\n\\end{align}\n\nOf particular interest is the $\\Gamma_5$ kicking, that breaks the rotational invariance and can be thought of as the effect of an external strain in $z$ direction \\cite{Ruan2016,QBToriginal}. It is known that a uniaxial strain, can realize two different phases in LSMs depending on the sign of $\\alpha$ \\cite{Ruan2016,QBToriginal,Weylpyro1,Weylpyro2,LSMmagnetic}. Figure.~\\ref{fig:G5}a(b) shows the bandstructure corresponding to effective Hamiltonian of Eq.~(\\ref{effG5}) with $\\Gamma_j=\\Gamma_5$ and for $\\alpha > 0$ ($\\alpha<0$) which represents a Dirac semimetal (trivial\/topological insulator).\n\nIn the case of a tensile strain, there are two Dirac nodes at $k_z=\\pm\\frac{\\alpha}{2\\lambda_2 T}$ ($\\alpha > 0 $). Now if we add an external magnetic field, we expect that each Dirac nodes split to two Weyl points and form Weyl semimetal phases due to breaking of $\\mathcal{T}$ symmetry. Interestingly, even in the topological insulator limit of $\\alpha<0$, one can realize Weyl semimetals depending on the strength of magnetic field \\cite{LSMmagnetic}. We also note that magnetic field alone can drive a Luttinger semimetal to a Weyl semimetal phase \\cite{LSMmagnetic,Weylpyro2}. Moreover, in the experimentally relevant system of pyrochlore irridates, it is shown that magnetic field can give rise to rich phase diagram of topological semimetals \\cite{Weylpyro2}. It is important to mention that an external magnetic field can have orbital effects for sufficiently large strength. 
However, here and in the rest of this paper, we neglect the orbital effects of magnetic field for sake of simplicity following many works in the literature \\cite{Weylpyro1, Weylpyro2,LSMFloq1} and we refer readers to some of the works which explored the effects of magnetic field in Landau levels and quantum oscillations \\cite{LSMmagnetic,LSMquantumoscillation}. The detailed analysis of the effect of magnetic field (also possible pseudomagnetic fields due to other perturbations) will be left for future studies.\n\n\\section{Nonuniform kicking}\nNow lets turn to the main part of this work, where we investigate the effect of nonuniform kicks. Similar to the uniform kicking discussed in the previous section, we can consider numerous possibilities of nonuniform driving. However, here, we restrict ourselves to a \"linear in momentum\" kicks of the form $k_i\\Gamma_i$ where $\\Gamma_i$ are time-reversal even. The very first consequence of such choice of the kicks, is the breaking of both $\\mathcal{T}$ and $\\mathcal{I}$ symmetries, while preserving their combinations $\\mathcal{IT}$, which is a necessary symmetry for keeping double degeneracy of the bands intact. Specifically, we focus on two types of nonuniform kicking: (a) $k_z\\Gamma_5$ and (b) a $k_z\\Gamma_0$.\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.42\\textwidth]{Fig4a.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz05_kz-R.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz07_kz-R.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz11_kz-R.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz15_kz-R.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz20_kz-R.pdf}\n \\includegraphics[width=0.3\\textwidth]{LSM_kzG5_08_Jz30_kz-R.pdf}\\includegraphics[width=0.42\\textwidth]{kzG5_PH_kxkz.pdf}\n \\caption{The band structure of LSM in the presence of $k_z\\Gamma_5$ with kick strength of $\\alpha=0.8$ and (a) $h=0$, $k_y-k_z$ plane showing quadratic dispersion along the $k_y$ for the node at $\\Gamma$ point, (b) $h=0.5$, (c) $h=0.7$, (d) $h=1.1$, (e) $h=1.5$, (f) $h=2$, (g) $h=3$ and (h) same as (a) but with $\\lambda_1=0$. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used in all plots (except (h) where $\\lambda_1=0$).}\n \\label{fig:kzG5}\n\\end{figure*}\n\\subsection{$\\Lambda=k_z\\Gamma_5$}\n\nAs it is mentioned above the $\\Gamma_5$ can be interpreted as the effect of strain in $k_z$ directions \\cite{Ruan2016}, similarly, $k_z\\Gamma_5$ also breaks the rotational invariance, then in principle we might still think of $k_z\\Gamma_5$ term as a type of nonuniform strain or strain gradient, even though full microscopic derivation of such strain could shed more light. The presence of $k_z$ suggests that microscopically on a lattice, $\\Gamma_5$ now acts non-local and directly modifies hopping terms. On the hand, one can convert the $k_z\\Gamma_5$ to the lattice model and explicitly write it down as, $\\sum_{\\sigma\\in\\{1\/2,3\/2\\}}(C^{\\dagger}_{k_{\\perp},i,\\sigma,\\uparrow}C_{k_{\\perp},j,\\sigma,\\uparrow}-C^{\\dagger}_{k_{\\perp},i,\\sigma,\\downarrow}C_{k_{\\perp},j,\\sigma,\\downarrow})\/2i+h.c$. This is similar to the spin current operator along $z$-axis. Therefore, we can also think of the $k_z\\Gamma_5$ kick as an applied spin current (or related to it) along $z$ direction. However, it is essential to note that besides the rotational invariance the $k_z\\Gamma_5$ kick also breaks inversion and time-reversal symmetries while preserving their combinations. 
Then the bands of driven system are still degenerate. Therefore, here, without restricting ourselves to any particular microscopic interpretation, we emphasize that to achieve the results obtained in this section one need perturbations which break both $\\mathcal{I}$ and $\\mathcal{T}$ but preserve $\\mathcal{IT}$.\\\\\nThe effective Hamiltonian for $k_z\\Gamma_5$ kick can be written as, $H_{eff}=h_{L}(\\vec{k})+\\frac{\\alpha}{T}k_z\\Gamma_5$, with following spectrum,\n\\begin{align}\n E_{\\pm}(\\vec{k})=\\lambda_1k^2\\pm \\frac{\\sqrt{4\\lambda_2^2T^2\\sum_i d^2_i - 4\\lambda_2\\alpha T k_z d_5+\\alpha^2 k^2_z}}{T}.\n\\end{align}\nFigure.~\\ref{fig:kzG5}, shows the spectrum in $k_x-k_z$ plane. There are two Dirac nodes at $k_z=0$ and $k_z=\\frac{\\alpha}{2\\lambda_2 T}$. While one of the nodes is always pinned to $\\Gamma$ point the other node can be tuned by kick parameters. Therefore, for fixed set of parameters the distance between two nodes for $k_z\\Gamma_5$ is half of the case with uniform strain $\\Gamma_5$. Moreover, we observe three main differences in compare to the case with uniform strain as is shown in Figure.~\\ref{fig:G5}. (i) There is no major difference between the tensile ($\\alpha >0 $) and compressive ($\\alpha <0 $) strain. In both cases system drives into a Dirac semimetal phase. (ii) Unlike the uniform tensile strain, due to broken inversion symmetry two Dirac nodes reside at different energies and (iii) the most important of all, remarkably, one of the nodes is quadratic while the other is linear, realizing an unique \\emph{hybrid Dirac semimetal}. Interestingly, the \"quadratic\" Dirac node (or a QBT which is linearly dispersed in $k_z$ direction) is pinned at the $\\Gamma$ point and the \"energy difference\" as well as distance between the nodes can be controlled with kick strength. Moreover, while the linear Dirac node is tilted, the node centered around the $\\Gamma$ point no matter how strong is the kick, shows no tilt. It should be emphasized, here, we define hybrid Dirac semimetal mainly based on the different dispersion of two Dirac nodes instead of their tilts, even though as we discussed above, they show different tilting (and types) as well. Moreover, We note that such hybrid phase is unique to the Dirac semimetals because it is impossible to realize a Weyl semimetal with pair of nodes with different dispersion or (magnitude of monopole charges).\\\\\n\\indent Next, lets apply an external magnetic field $h J_z$ where $h$ denotes the strength of magnetic field. The magnetic field splits Dirac nodes to Weyl nodes in $k_z$ direction. In the presence of a magnetic field an analytical expression of eigenvalues for the general $\\lambda_1,\\lambda_2$ can not be achieved, so in the following we proceed by solving the model numerically. Figure.~\\ref{fig:kzG5} depicts the evolution of the Weyl nodes by strength of external magnetic field. Starting from $h \\ll \\alpha $, there are 8 nodes on $k_z$ axis, four single and four double Weyl nodes, which are indicated in Figure.~\\ref{fig:kzG5} by the red and blue dots, respectively. Interestingly, we observe that one or more of the Weyl pairs realize a \\emph{hybrid Weyl phases} and they can survive up to a decently strong magnetic field (Figure.~\\ref{fig:kzG5}(b-f)). While the other Weyl nodes do not possess hybrid types, they show different tilts for each nodes of the same pair. 
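Returning briefly to the $h=0$ dispersion, the hybrid character of the two nodes can be read off directly from the spectrum above. On the $k_z$ axis, where in the convention with $d_5=k_z^2$ (and all other $d_i=0$ there) the square root collapses, the two doubly degenerate branches reduce to
\\begin{align}
 E_{\\pm}(0,0,k_z)=\\lambda_1k_z^2\\pm\\Big|2\\lambda_2k_z^2-\\frac{\\alpha k_z}{T}\\Big|,
\\end{align}
so the splitting vanishes at $k_z=0$ and $k_z=\\frac{\\alpha}{2\\lambda_2 T}$ and opens linearly in $k_z$ around both points. In the transverse plane, however, the kick term is proportional to $k_z$ and drops out at the node pinned to the $\\Gamma$ point, leaving only the quadratic Luttinger terms, whereas at $k_z=\\frac{\\alpha}{2\\lambda_2 T}$ the couplings $d_{1,2}\\propto k_{x,y}k_z$ are already linear in the transverse momentum. This makes explicit in which sense one node remains QBT-like (quadratic in $k_\\perp$, linear in $k_z$) while the other is a conventional linear Dirac node; the normalization of $d_5$ used here is our choice and only affects prefactors.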
Only at a very strong magnetic field (Figure.~\\ref{fig:kzG5}(g)) all of the Weyl points show a type-I structure.\nMoreover, we see that by increasing the magnetic field, from initial eight nodes, four of them merge and gap out and we left off with only four nodes (two single and two double nodes) at $h>>\\alpha$ limit. \\\\\n\\indent Finally, we comment on the particle-hole symmetric limit of $\\lambda_1=0$. In this limit we can get energies analytically even in presence of an external magnetic field as,\n\\begin{align}\n E^{(1)}_{\\pm}(\\vec{k})=&\\pm\\frac{h}{2}+(2\\lambda_2 k^2_z-\\frac{\\alpha k_z}{T}),\\cr E^{(2)}_{\\pm}(\\vec{k})=&\\pm\\frac{3h}{2}-(2\\lambda_2 k^2_z-\\frac{\\alpha k_z}{T}),\n\\end{align}\nwith 8 nodes located at,\n\\begin{align}\n k^{\\pm,\\pm}_{z,I}=&\\,\\frac{\\alpha\\pm \\sqrt{\\alpha^2\\pm 4h\\lambda_2T^2}}{4\\lambda_2 T},\\cr\n k^{\\pm,\\pm}_{z,II}=&\\,\\frac{\\alpha\\pm \\sqrt{\\alpha^2\\pm 8h\\lambda_2T^2}}{4\\lambda_2 T}.\n\\end{align}\nAs is clear from the above equations, four out of the eight nodes, $k^{\\pm,-}_{z,I,II}$, gap out by increasing $h$. However, it is noteworthy that even though a \\emph{hybrid dispersion Dirac semimetal phase} can still be generated, but now in the limit of $\\lambda_1=0$, the two nodes have the same energies Figure.~\\ref{fig:kzG5}(h), therefore, no longer a hybrid Weyl phase can be achieved in the presence of an external magnetic field.\n\n\\begin{figure*}[htb]\n\\centering\n\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_kz.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz02.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz04.pdf}\n\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz06.pdf}\n\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz08.pdf}\\includegraphics[width=0.3\\textwidth]{LSM_kzG0_06_Jz15.pdf}\n\\caption{The band structure of LSM with tilted QBT with kick strength of $\\alpha=0.6$ and (a) $h=0$, (b) $h=0.2$, (c) $h=0.4$, (d) $h=0.6$, (e) $h=0.8$ and (f) $h=1.5$. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used in all plots.}\n\\label{fig:kzG0}\n\\end{figure*}\n\\subsection{$\\Lambda=k_z\\Gamma_0$: Tilted QBT}\n\nThe second class of nonuniform kicks which we investigate in this work, is $\\Lambda=k_z\\Gamma_0$. The effective spectrum for such driving can be obtained as,\n\\begin{align}\\label{tQBT}\n E_{\\pm}(\\vec{k})=(\\lambda_{1}\\mp2\\lambda_{2})k^{2}+\\frac{\\alpha k_z}{T}.\n\\end{align}\nThe mere effect of such driving is to effectively tilt the QBT in $k_z$ direction as shown in Figure.~\\ref{fig:kzG0}(a). However, the tilted QBT behaves very differently in the presence of an external perturbation. To clarify further, we rewrite the Eq.~(\\ref{tQBT}) at $k_x=k_y=0$ as,\n\\begin{align}\n E_{\\pm}(\\vec{k})=&\\,(\\lambda_{1}\\mp2\\lambda_{2})k_z^{2}+\\frac{\\alpha k_z}{T}\\cr\n =&\\,(\\lambda_{1}\\mp2\\lambda_{2})(k_z+\\mathcal{A}_z)^{2}\\cr\n =&\\,(\\lambda_{1}\\mp2\\lambda_{2})k_z^2+2(\\lambda_{1}\\mp2\\lambda_{2})\\mathcal{A}_z k_z+(\\lambda_{1}\\mp2\\lambda_{2})\\mathcal{A}^2_z\\cr\n \\simeq&\\,(\\lambda_{1}\\mp2\\lambda_{2})k_z^2+2(\\lambda_{1}\\mp2\\lambda_{2})\\mathcal{A}_z k_z,\n\\end{align}\nwhere in the last line we assumed $\\mathcal{A}_z<1$. Now by comparing the first and the last line of the above equation, we obtain $\\mathcal{A}_z=\\frac{\\alpha}{2(\\lambda_{1}\\mp2\\lambda_{2})T}$. Therefore, we can think of a tilted QBT as a QBT with a pseudo-electromagnetic potential $\\mathcal{A}= (0, 0, \\mathcal{A}_z)$ which is proportional to kick strength. 
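As a brief aside on the eight $\\lambda_1=0$ node positions quoted above for the $k_z\\Gamma_5$ kick: they follow from equating the branches pairwise. Writing $g(k_z)=2\\lambda_2k_z^2-\\alpha k_z\/T$, the condition $E^{(1)}_{s}=E^{(2)}_{s'}$ with $s,s'=\\pm$ reads $2g(k_z)=(3s'-s)h\/2$, i.e. $g=\\pm h\/2$ for equal signs and $g=\\pm h$ for opposite signs; solving these quadratics reproduces $k^{\\pm,\\pm}_{z,I}$ and $k^{\\pm,\\pm}_{z,II}$, respectively. In particular, the four roots with a minus sign under the square root become complex once $h>\\alpha^2\/(4\\lambda_2T^2)$ for $k^{\\pm,-}_{z,I}$ and $h>\\alpha^2\/(8\\lambda_2T^2)$ for $k^{\\pm,-}_{z,II}$, consistent with the merging and gapping out of four of the eight nodes described above.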
In particular, interesting nontrivial topological phases can emerge by applying an external magnetic field in the presence of tilt, which otherwise are absent in a conventional QBT. As we mentioned previously, it is known that in the presence of a magnetic field, the QBT splits to multiple Weyl points. However, when the QBT is tilted, due to the competition between the external magnetic field and pseudo-magnetic field due to the $\\mathcal{A}$ (proportional to kick strength) there are various regimes which different types of WSM phases can be generated. Starting with the weak field limit ($\\alpha >> h$), there are two pairs of nodes, two double and two single nodes. In this regime, all Weyl nodes have a type-II nature (Figure.~\\ref{fig:kzG0}(b)). By further increasing of the field, for intermediate fields ($h \\lesssim \\alpha$), hybrid Weyl pairs are realized where one of nodes in each of single and double pairs are type-II and the others are type-I. Interestingly, the realized hybrid WSM can survive up to a very strong magnetic field. This could be experimentally beneficent as it demonstrates that hybrid WSMs generated here, are accessible in a broad range of external fields. Similar to the $k_z\\Gamma_5$ kick, we can get the analytical expression of energies and nodes position in the particle-hole limit,\n\\begin{align}\n E^{(1)}_{\\pm}(\\vec{k})=&\\pm\\frac{h}{2}+(2\\lambda_2 k^2_z+\\frac{\\alpha k_z}{T}),\\cr E^{(2)}_{\\pm}(\\vec{k})=&\\pm\\frac{3h}{2}-(2\\lambda_2 k^2_z-\\frac{\\alpha k_z}{T}),\n\\end{align}\nand,\n\\begin{align}\n k^{\\pm}_{z,I}=\\pm\\sqrt{\\frac{h}{2\\lambda_2}},\\,\\,k^{\\pm}_{z,II}=\\pm\\frac{\\sqrt{h}}{2\\sqrt{\\lambda_2}}=\\frac{k^{\\pm}_{z,I}}{\\sqrt{2}}.\n\\end{align}\nUnlike the $k_z\\Gamma_5$ kick, the presence of particle-hole symmetry does not prevent the generation of hybrid Weyl phases, because does not generate any other Dirac nodes and as we mentioned before only tilts the QBT. Moreover, the position of Weyl nodes are independent of kick strength and can be tuned solely using magnetic field.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{kzG5smooth_5_kxkz.pdf}\n \\includegraphics[width=0.4\\textwidth]{kzG5smooth_5_kykz.pdf}\n \\caption{The bandstructure of LSM at $k_x-k_z$ plane ($k_y=0$) in presence of $k_z\\Gamma_5$ smooth driving with $\\alpha=5$. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used.}\n \\label{fig:kzG5smooth}\n\\end{figure}\n\n\\subsection{Smooth driving}\nHere, we briefly discuss the case of smooth driving using Eq.~(\\ref{effsmooth}) and make comparison with the results obtained by periodic kicking in previous sections. First of all, any type of perturbation which is proportional to the identity will not modify the system in the case of smooth driving, as it obviously commutes with Hamiltonian. However, the $\\Lambda(\\vec{k})=\\alpha k_z\\Gamma_5$ can modify the system when is applied via smooth driving. We obtain the effective Hamiltonian as,\n\\begin{align}\n H^{cos}_{eff}(\\vec{k})=&H_0+\\frac{\\lambda_2\\alpha^2k^2_z}{\\omega}\\bigg(2k_xk_z\\{J_x,J_z\\}+2k_yk_z\\{J_y,J_z\\}\\cr\n +&(k^2_x-k^2_y)(J^2_x-J^2_y)+2k_xk_y\\{J_x,J_y\\}\\bigg).\n\\end{align}\nwith energies,\n\\begin{align}\n E^{\\pm}(\\vec{k})=\\lambda_1 k^2\\pm \\frac{\\lambda_2\\sqrt{f(\\vec{k})\\left[k^2_z\\alpha^2-2\\omega^2\\right]+4k^4\\omega^4}}{\\omega^2},\n\\end{align}\nwhere each $f(\\vec{k})=3 (k_x^2+k_y^2)k_z^2 (k_x^2+k_y^2 +4k_z^2)\\alpha^2$ and $E^{\\pm}$ are doubly degenerate. 
Figure.~\\ref{fig:kzG5smooth} shows the spectrum for $k_y=0$ plane, where four linear Dirac nodes coexist with a QBT at which is located at $\\Gamma$ point. A similar plot can be obtained for $k_x=0$ plane, then the system also possesses line-nodes in $k_x-k_y$ plane at $k_z\\neq 0$. Therefore, in addition to the coexistence of Dirac nodes and QBT at $k_x-k_z$ and $k_y-k_z$ planes, a smooth driving can lead to a richer phase diagram. This analysis was only for the purpose of comparison with kicked driving results, so a detailed analysis of such models will be left for future works.\n\n\\subsection{Effect of Lattice Regularization}\n\nLets now look at the effect of lattice regularization. One of the typical effect of lattice versus continuum model is the appearance of more nodes, usually at the boundaries of the Brillioun Zone \\cite{Ghorashi2018}. However, we show that the main features discussed in the previous subsections survives in lattice models. For the sake of brevity we restrict ourselves to $h=0$ limit. We consider a cubic lattice, which captures the physics of continuum model for $\\alpha,\\omega,\\vec{k}<<1$. In the $k_z$ cut (Figure.~\\ref{fig:latt}), we find: (i) two nodes at BZ boundaries, $k_z=\\pm \\pi$, as we expected, (ii) one at the $\\Gamma$ point with quadratic dispersion in $k_i \\perp k_z$, in agreement with the result of continuum model, (iii) however, due to lattice regularization there can be two nodes (instead of the node at $k_z=\\frac{\\alpha}{2\\lambda_2 T}$ in the continuum model) with linear dispersion away from the boundaries and the $\\Gamma$ point, which are located at $k_z=\\sin^{-1}(\\frac{\\alpha}{2\\lambda_2 T}), k_z=-\\sin^{-1}(\\frac{\\alpha}{2\\lambda_2 T})+\\pi$. Figure.~\\ref{fig:latt}, shows the lattice bandstructure along $k_z$ for the case of $k_z\\Gamma_5$ driving. Therefore, we confirm that the generation of the hybrid dispersion Dirac semimetal (and consequently Weyl semimetal) persists in the lattice model.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{LSM_kzG5_03_lattice.pdf}\n \\caption{The $k_x=k_y=0$ cut of the bandstructure of LSM on a cubic lattice in presence of $\\alpha\\sin(k_z)\\Gamma_5$ ($\\alpha=0.3$) along the $k_z$ axis. $\\lambda_1=\\lambda_2=0.6$ and $\\omega=20$ are used.}\n \\label{fig:latt}\n\\end{figure}\n\\section{DISCUSSION AND CONCLUDING REMARKS}\n\nWe have proposed a dynamical way to realize hybrid Dirac and Weyl semimetals in a three-dimensional Luttinger semimetal via applying a nonuniform (momentum-dependent) periodic $\\delta$-kick. We explicitly demonstrated this through two examples of nonuniform kicking which break both inversion and time-reversal symmetries while preserving their combinations. We have identified the first example of an unusual hybrid Dirac semimetal phase where two nodes not only have different types but also have different dispersions ( a linear Dirac and QBT coexist). Then by applying an external magnetic field we demonstrated the emergence of hybrid Weyl semimetals. Next, we found that the combination of a tilted QBT with an external magnetic field provides a promising setup for generation of hybrid Weyl semimetals. Moreover, by interpreting the tilted QBT as a QBT with emergent psudomagnetic field proportional to kick strength, we discussed the interplay between the kick strength and the external magnetic field. 
\\\\\n\\indent We note that the experimental realization of models discussed in this work can be difficult, specifically for the case of uniform strain which requires very fast time scales for periodic driving. Despite all the difficulties there are some proposals for the fast dynamical generation of strain \\cite{straindyn1,straindyn2,kickgraphene}. However, the possibility of interpreting the $k_z\\Gamma_5$ kick as an applied spin current (or proportional to it) along the $z$-direction is potentially a promising route, considering the recent developments in the field of ultrafast spintronics \\cite{ultrafastspin}. Moreover, the 2D QBT has been already proposed to be realized in optical lattices \\cite{QBToptical}, therefore, in practice a 3D QBT could be realized in optical lattices as a potential experimental setup where parameters can be tuned at will.\\\\\n\\indent The LSMs can describe the low-energy physics of many experimental candidates. Therefore, this work opens up a promising way for the realization of various hybrid Dirac and Weyl semimetals. Therefore, it could motivate further studies on the lacking investigation of physical properties of the hybrid Weyl semimetals. \\\\\nSome of the possible future directions are: the investigation of bulk and surface transport properties in the hybrid dispersion Dirac semimetals introduced here, in particular, how some of the most interesting properties of a typical Dirac semimetal, such as transport anomalies \\cite{review1}, would be different. Moreover, it would be interesting to see whether other multi-band systems with higher-spin can show similar physics.\n\n\\section*{Acknowledgement}\nWe thank Matthew Foster and Bitan Roy for useful comments. We also acknowledge useful comments and suggestions from the anonymous referees which helped to improve this manuscript. This work was supported by the U.S. Army Research Office Grant No.W911NF-18-1-0290. we also acknowledge partial support from NSF CAREER Grant No. DMR-1455233 and ONR Grant No. ONR-N00014-16-1-3158.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzehnu b/data_all_eng_slimpj/shuffled/split2/finalzzehnu new file mode 100644 index 0000000000000000000000000000000000000000..fba3467da6c66fda7ecbda3a65ca6f0a07e40796 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzehnu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}}\n\n The non-uniform and turbulent nature of the interstellar\nmedium (ISM) has long been evident from scattering\/scintillation studies in the\ndirection of pulsars and compact extra-galactic sources, as well as from HI\nemission\/absorption measurements. Power spectra of the observed \ncolumn density distribution of neutral hydrogen (HI) in the \nISM reveal its fractal nature. \nSome estimates of the slope of spatial power\nspectra for the absorbing HI distribution in the Galaxy show a range from --3.0 \n(Crovisier and Dickey 1983) to\n--2.75 $\\pm$ 0.25 (Deshpande, Dwarakanath \\& Goss 2001), while Green (1993)\nfound a range from --2.2 to --3.0 for the slopes from HI emission spectra,\nindicating steepening with distance. \nSeveral recent statistical studies of galactic and extra-galactic HI on large scales\n(e.g. Stanimirovic et al. 1999, Dickey et al. 2001, Elmegreen et al. 2001, \nStanimirovic \\& Lazarian 2001) also reveal power-law slopes characterizing\nthe spatial power spectra that are consistent with \nthe above-mentioned range of values. 
The prediction of Lazarian \\& Pogosyan (2000), \nthat the spatial spectra should steepen with the increasing thickness of the velocity\nslice, was confirmed by some of these analyses (e.g. Stanimirovic \\& Lazarian 2001).\nThe question we would like to address here is why the structure of the HI is \nlike this in a statistical sense. \n\nA prominent source of energy input driving the ISM evolution is\nthought to be supernovae (SN). Many researchers have attempted\ndetailed MHD simulations of a supernova-driven ISM to explore this aspect.\nFor example, Korpi et al. (1999) used a three-dimensional, non-ideal MHD\nmodel and monitored several physical parameters. \nThere have been several other detailed simulation studies exploring \nMHD turbulence in different regimes (Cho, Lazarian \\& Vishniac 2002; \nWada et al. 2002; Vollmer \\& Beckert 2002; Cho \\& Lazarian 2003; \nMiville-Deschenes, Levrier \\& Falgarone 2003; Slyz et al. 2005; and\nMac Low et al. 2005).\n\nOur simulation study takes\na highly simplified view of the ISM and monitors only the statistical\ndescription of the density and velocity distributions. As a first step\ntoward assessing what the essential features of the processes underlying\na supernova-driven ISM are, we retain the kinematical and structural\ncomponents alone and assume that these are entirely determined by the\nexpanding shells of SNRs. \n\nWe explore two basic approaches in our modeling of a\nSN-driven ISM; I) a simple simulation of an equivalent snapshot \ndistribution, and \nII) a more complex simulation incorporating time evolution. Sections 2 and 3\ndescribe the details and limitations of these two modeling approaches separately,\nwhile Section 4 is dedicated to the analysis of both of the models. Results are\ndiscussed in Section 5, and our conclusions on the implications of the two\nmodels are presented in Section 6.\n\n\n\\noindent{\\section{Snapshot Distribution}}\n\nWith the goal of keeping our model as simple as possible, we first model\nresultant contributions from supernovae\ndirectly as an ensemble of simple, non-overlapping, spherical bubbles \n(or voids) in what is otherwise a homogeneous ISM.\nWe use a three-dimensional matrix to simulate a region in space that we\nwish to populate with a large number of bubbles of different radii. \nIn this picture, the bubbles of different sizes\nrelate to supernova remnants (SNRs) of correspondingly different ages \n(and in different\nphases of their evolution) that have resulted from supernovae at different epochs.\nThis method effectively generates a snapshot\nin time, and we invoke the ergodicity theorem to justify the basis of\nsuch a simulation. Thus, while the bubbles do not evolve and grow iteratively\nin this approach,\nwe attempt to achieve the same end result by simply generating simultaneously\na random distribution of locations and sizes. For this first simulation,\neach of the volume cells carry a binary tag (i.e. 0 or 1) indicating whether the given cell \n(or pixel) is inside a bubble or outside in the undisturbed ISM. We wrap \nthe respective sides (\\& corners) of the array, ensuring continuation of the bubbles across \nthem, to avoid any undesirable edge-effects, e.g. discontinuities and consequent\n``mass loss\" at the edges. 
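A minimal sketch of this snapshot construction (a periodic cubic grid seeded with randomly placed, hard-edged spherical voids, anticipating the rejection of overlapping bubbles described below) might look as follows; the grid size, radius bounds and number of draws are placeholder values, and no attempt is made at efficiency.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # grid cells per side (placeholder)
r_min, r_max, n_draws = 2, 10, 500       # placeholder parameters
cube = np.ones((N, N, N), dtype=np.uint8)    # 1 = undisturbed ISM, 0 = void

idx = np.indices((N, N, N))
def wrapped_dist2(centre):
    """Squared distance of every cell to `centre` with periodic wrapping."""
    d = np.abs(idx - np.asarray(centre).reshape(3, 1, 1, 1))
    d = np.minimum(d, N - d)
    return (d ** 2).sum(axis=0)

placed = []                              # accepted (centre, radius) pairs
for _ in range(n_draws):
    c, r = rng.integers(0, N, size=3), rng.integers(r_min, r_max + 1)
    sep = lambda c0: np.sqrt((np.minimum(np.abs(c - c0),
                                         N - np.abs(c - c0)) ** 2).sum())
    if any(sep(c0) < r + r0 for c0, r0 in placed):   # skip overlapping draws
        continue
    cube[wrapped_dist2(c) <= r ** 2] = 0
    placed.append((c, r))

column_density = cube.sum(axis=2)        # map to be Fourier-analysed later
print(len(placed), "bubbles kept, void filling fraction", 1 - cube.mean())
\\end{verbatim}
The smooth-edged variant simply replaces the hard assignment inside each sphere by the $(1-\\cos(\\pi r\/r_0))\/2$ profile mentioned below, applied to a floating-point density cube.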
This procedure is indeed consistent with the wrapping that\nis implicit in the Fourier analysis of this data, a step that is performed subsequently.\nWe use a random number generator (with uniform distribution) to determine the coordinates \nand radius of each\nbubble. Several parameters, such as the minimum bubble radius, the\nmaximum bubble radius, and the total number of bubbles, are retained as\nuser selectable variables. Checks are made, and iterations are skipped if needed\nto ensure that new bubbles do not overlap with any of the previously input bubbles. \nWe note that this non-overlap constraint is unphysical, but was unavoidable\nin order to ensure that the simulated volume was not entirely emptied at any stage.\nNaturally, the distribution of radii for included bubbles would deviate significantly\nfrom a uniform distribution if not explicitly constrained, \nsuch that the number of bubbles dropped rapidly with increasing radius. \nSuch a decline in the number of large holes may in fact \nbe expected if the sources of energy input are strongly clustered, and\nare of a wide range of energies.\nAn image resulting from this simulation is shown\nin Figure~\\ref{fig:bubble_image}.\nSimilar simulations are repeated a) by constraining the resultant distribution of the \nradii to be uniform, and\/or b) with smooth-edged bubbles (with density falling \ntowards center as $(1-cos(\\pi r\/r_0))\/2$, where $r_0$ is the bubble radius, \nand $r$ is the distance from the bubble center). \n\n\n\\noindent{\\section{Time Evolution}\n\nThe second approach to the simulation process involves incorporating a time axis\nand allowing planted supernovae and the consequent SNRs to evolve. \nWe make several assumptions for this simple model. \nTo begin with, we include only Type II supernovae in our simulation.\nWe use an average galactic rate of 1\/44 yr$^{-1}$ \n(Tammann, Loeffler \\& Schroeder 1994) and\nconsider a uniform distribution across a 20\\% range about this rate,\nalthough Type II supernovae are correlated in both space and time.\nThe progenitor stars are assumed to have\nmasses in the range 8-20 solar masses ($M_{\\sun}$), and their non-uniform\ndistribution within this range is modeled in consistency with \nthe initial mass function as\ngiven by Wheeler, Miller \\& Scalo (1980) and approximated in our range of interest\nby a power law. As for the compact remnant mass, we assume a value of 1.4 $M_{\\sun}$.\nWe then consider two types of ejecta, diffuse and clumpy (Willingale et al.\n2003), with different velocities, \nand use the relevant values estimated by Willingale et al based on the Cas-A data. \nFinally, we assume that the rest of the mass\n(i.e., M[progenitor] -- M[compact-remnant] -- M[ejecta]) \nwas lost prior to the explosion by the star's stellar wind and other mass loss\nmechanisms. We will denote this component by M[preSN-loss].\n\nFor this simulation, we use four 3-dimensional arrays to store\nthe spatial distribution of \nthe mass and the 3-d velocity that evolve as a function of time.\nThe spatial resolution (the volume corresponding to a pixel or a cell) and \nthe dynamic range of scales \nare chosen by specifying, in turn, suitable values for both the\nspatial extent represented by the entire cube and the number of pixels in\nthe desired storage arrays. \nWe initialize the mass-cube with a starting ISM density, assumed\nuniform and generally of the order of 0.6 atoms\/cm$^3$. 
This value is\nseveral times smaller than the average value that the ISM density would \nattain after subsequent mass input from supernovae. \n\nTo initiate a supernova within the simulated\nspace, we choose a random pixel location in 3-d, and input the mass \\& velocity distribution\nassociated with the supernova in the form of an expanding spherical \nshell.\nThe mass in this shell consists of the ejecta mass, M[ejecta],\nas well as the mass lost during the pre-supernova phase, M[preSN-loss].\nThe inclusion\nof this latter component is justified if the distance traveled by \nthis material, which is expelled in the pre-SN\nphase, is within the above mentioned cell size. This is indeed so given the typical cell\nsizes in our simulations, and in fact we adjust the spatial resolution to ensure\nthis to be the case. \nThe magnitude of the true initial velocity is estimated by conserving \ntotal kinetic energy \nbetween the two different types of ejecta (with assumed masses and velocities, \nsee Willingale et al. 2003 for details), and the mass lost during the \npre-supernova phase (i.e., M[preSN-loss]).\nWe use a basic template cube (3 x 3 x 3 pixels) to\nstore the 3-d velocity vectors directed radially outward and initially\nnormalized to a unit length. We then multiply these vectors by the magnitude of the\ntrue initial velocity as computed above.\nTo calculate the initial mass distribution in the shell, we distribute the\navailable mass into the template cube proportionally to the surface area of a\nsphere centered in the middle pixel and of radius 1 pixel. \nAs can be imagined, such a cube of cells is too coarse to capture the desired\nspherical symmetry of the shell. As discussed later, we have checked the sensitivity\nof our results to the size of this cube (i.e. the shell radius), and find that\nthe final results are practically unaltered even when we used a 5 x 5 x 5 matrix\nas a template cube to model the seed shell.\n\nAfter the\nfirst explosion is initiated, we evolve the simulation through a large\nnumber of iterations, each with a dynamic time step. To calculate what\nthe time step should be for a particular iteration, we search the velocity\narrays for the maximum magnitude and use this to\nset the time step such that the ``movement\" at this speed is restricted\nto less than one pixel. This procedure\nensures that, for a given iteration, the range of influence of any \npixel will never extend beyond a 3 x 3 x 3 cube around that pixel. Thus, for each\niteration, we scan through the pixel set, ``evolve\" the contribution of each\npixel by considering its interaction with matter within\na small 3 x 3 x 3 influence cube around it, and store\/add that resultant contribution\nto a buffer set of arrays for mass and 3-d velocity. \n\n\n\nWithin this smaller cube of {\\it influence}, we calculate the movement of the mass \nand use a spread function to decrease the effects of the quantization\nintroduced by the finite pixel size. Let $\\hat{d}$ be a unit vector \ndefining the direction of the destination pixel w.r.t. the central (or source) pixel, and\nlet D be the associated distance. The displacement $\\delta$D in that direction is \nthen simply $(\\vec{V}.\\hat{d}) \\delta t$, where $\\vec{V}$ is the velocity vector\nand $\\delta t$ is the time step. We restrict the spread of the central\npixel's contribution to destination pixels with positive values of $\\delta$D,\ni.e. 
to only those pixels that can be reached in the `forward' direction.\nThe fractional displacement f = $\\delta$D\/D determines how the original mass\nis to be shared between the original pixel and the destination pixel.\nThus the fraction of the source mass transported to a destination pixel $i$ \nis $W_i.f$, where\n$W_i=A_i\/(\\sum_i A_i)$, and $A_i = (\\vec{V}.\\hat{d}\/|V|)$ if the dot-product is\npositive, else $A_i = 0$.\nThis consideration allows us to weight the mass\ntransfer in the forward direction while still spreading it adequately laterally.\nTotal time is kept track of throughout the iterations, and another\nexplosion is initiated whenever the time after the last explosion reaches\na pre-computed interval. This entire process is\ncarried out until the input ``run time\" is complete, at which point the\nprogram switches into the ``evolution-only\" phase. In this phase, we allow the\niterations to continue for a certain defined length of time, but\ndisallow any new supernova. This phase of the simulation ensures the avoidance of\nany extreme densities (both high \\& low) due to a more recent SN\/SNR (closer to the \nend of the first phase) that has not yet had time to evolve and \ninteract with the surrounding matter. As our simulation is intended to \nstudy the structure of\nthe ISM, not the stars within it, this provision is necessary to make sure\nrecent explosions do not skew our results. \nFigure~\\ref{fig:col_den_sne_image} shows a sample result from a full \nsimulation in the form of a 2-d distribution of column density.\n\n\n\n\\noindent{\\section{Analysis \\& results}}\n\n Throughout our simulation runs (numbering ten or more in each of the cases\nexplored), particularly in the ``Time Evolution\" approach, \nwe closely monitor a variety of parameters and distributions. Typically,\nfor the evolutionary model, every 50 or so\niterations, we output the results in the form of mass (or density) \\& velocity distributions\nand\nobtain power-spectral descriptions for these. \nFor estimating the power spectra, in general, a given 2-d distribution is first \nstripped of\nits ``mean\", and then Fourier transformed. The resultant 2-d power spectrum is\nazimuthally averaged to obtain a 1-d description of the power as a function of\nthe spatial frequency (extending up to the sides, and not the corners).\nThe power estimates at\nhigher spatial frequencies benefit increasingly from the better statistics\nimplicit in azimuthal averaging as a function of radius.\nThese spectra are displayed on a log-log scale and best-fit power-law slopes\n(spectral indices) are computed over 3 (partly overlapping) ranges of the\nspatial frequencies to allow for the possibility that a given spectrum\nmay not be adequately described by a single power-law.\n\nIn the ``Snapshot\" simulation, we mainly monitor the column density \ndistribution. We also examine the density distributions (in, say, the \nX-Y plane) corresponding to thin slices at various depths (i.e. at various Z \nvalues). Here, the power spectra from several slices (typically 32) were \ncombined to obtain a better average description. \n\nFor the ``Time Evolution\" simulation, we extend our analysis to include \ntwo more data sets. The Z-velocity distribution is monitored (for thin Z-\nslices), as well as the column density distribution as a function of \nvelocity. The latter can be most directly compared to observational results. 
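A compact sketch of the spectral estimation just described (mean removal, 2-d Fourier transform, azimuthal averaging of the power in annuli extending to the sides but not the corners of the Fourier plane, and a power-law fit on the log-log spectrum) is given below; the function names, binning and fitting ranges are our illustrative choices.
\\begin{verbatim}
import numpy as np

def azimuthal_power_spectrum(image):
    """Azimuthally averaged power spectrum of a mean-subtracted 2-d map."""
    img = image - image.mean()
    power2d = np.abs(np.fft.fft2(img)) ** 2
    fx, fy = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    radius = np.hypot(fx, fy)
    bins = np.linspace(0.0, 0.5, image.shape[0] // 2 + 1)  # up to the sides only
    which = np.digitize(radius.ravel(), bins)
    spectrum = np.array([power2d.ravel()[which == i].mean()
                         for i in range(1, len(bins))])
    return 0.5 * (bins[1:] + bins[:-1]), spectrum

def fitted_slope(freqs, spectrum, lo, hi):
    """Best-fit power-law index over a chosen spatial-frequency range."""
    sel = (freqs >= lo) & (freqs <= hi) & (spectrum > 0)
    return np.polyfit(np.log10(freqs[sel]), np.log10(spectrum[sel]), 1)[0]

# e.g. fitted_slope(*azimuthal_power_spectrum(column_density), 0.05, 0.5)
# with column_density a simulated map such as the one in the snapshot sketch
\\end{verbatim}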
\n\n\\noindent{\\subsection{The Snapshot Distribution}}\n\n\nThe analysis of the column density distribution in Figure~\\ref{fig:bubble_image} \nwas performed following the\nabove procedure, and the resultant power spectrum is shown in \nFigure~\\ref{fig:bubble_spectrum}a.\nAs noted earlier, the underlying 3-d distribution includes a large number \n(500,000) of bubbles\/voids. \nThe spatial scale or the size of the simulated box is not explicitly defined, however\nan approximate scale of 1 kpc may be associated with the width of the simulated box.\nThe best-fit power-law slope for the linear\nportion of this log-log plot is close to -2.9, steeper than $-8\/3$. \nInterestingly, the spectral steepness decreased monotonically (starting from a slope \nof about $-4$ for a single bubble) as the number of bubbles increased. This trend\nis understood as due to a) the improved statistics, in general,\nand b) the increased weightage for finer structure, in particular.\nAlso, importantly, the rate of change of the spectral slope is \nobserved to decrease steadily.\nThe very slow decrease in the steepness during the latter half of the simulation suggests\nthat even after a many fold increase in the bubble count, the slope may \nreduce only slightly from its\npresent value and may stabilize at $-8\/3$ or thereabouts. \nWe have repeated the simulation runs\nto assess different realizations and find the above trend consistently evident.\nIt may be worth mentioning, as an example, that the slope\napproached the Kolmogorov value ($-11\/3$) after about 10000 bubbles, \nby which time about 42\\% of the\nvolume was occupied by the voids. At the half-way mark and at the end of this \nsimulation run the \noccupancy was about 53\\% and 54\\%, respectively. \nA similar trend was apparent in the spectra\nof distributions in Z-slices of the simulated cube. \nThe slope here was consistently shallower than\nthat for the column density distribution (i.e. for a {\\it thick} slab), \nby about 1 in the power-law index, as expected.\n\n\nIn the above simulations, the sharp edges of the bubbles are expected to enhance, somewhat\nartificially, the power at higher spatial frequencies, and this would make the spectra\nshallower than we would find in the absence of sharp edges.\nTo assess this expectation, we modeled the voids with a smooth decrease in the density as\none moved from the shell boundary to void center. Exactly as anticipated, the spectral \nslopes were consistently steeper (e.g. Figure~\\ref{fig:bubble_spectrum}b)\nthan for those derived for bubbles with sharp-edges \n(e.g. Figure~\\ref{fig:bubble_spectrum}a). \nOn the average, the power-law slope approached $-3.4$ for the column-density \ndistribution and was shallower by one order, i.e. about $-2.4$, for the density in the Z-slices.\nFor both cases, i.e. the sharp and smooth bubbles, the simulations were repeated\nwith an additional constraint wherein the resultant distribution of radii was maintained\nstatistically uniform (as would be implied by a constant SN rate and SNR birth rate). \nThe resulting distributions were\nindistinguishable in their appearance from the ones generated without the constraint.\nMore importantly, the derived spectra (and their slopes) were practically unaffected by \nthe additional constraint, implying that the results of these {\\it snap-shot} simulations\nare not sensitive to the distribution of radii of the bubbles populating the volume. \nIn these snap-shot simulations, we have treated SNRs as bubbles \"without shells\". 
\nSince the shells surrounding bubbles contribute significant HI column densities, we have\nsimulated distributions consisting of only shells (with a thickness to radius ratio of 0.1),\nand examined the corresponding power spectra. Here, the power-law slope approaches\n$-3$ for the total column density distribution, and $-2$ for the density in the Z-slices.\n\n\\noindent{\\subsection{Time Evolution}}\n\nFollowing the procedure described in section 3, many simulation runs\nwere carried out to study the results of a time-evolution approach. \nDue to memory and computing speed\nlimitations, we restricted the simulation volume to a (128)$^3$ pixel array, and \nwe artificially\nraised the SN rate by a factor of 100 from the rate appropriate for a (200 pc)$^3$\nvolume. The primary phase of the runs was chosen long enough so as to have every pixel\nof the volume visited by the ejecta. The nominal time scale of this primary phase\nwas a few million years, and the SN count was several hundred. \nDuring the ``evolution-only\" phase, corresponding to \na relatively longer duration of time when compared with the typical intervals \nbetween explosions, the {\\it rms} velocity reduced monotonically as shell diffusion \ncontinued without any further energy input.\n\nFigure~\\ref{fig:te_cd_spec} shows the spatial power spectrum of a sample column \ndensity distribution (such as in Figure~\\ref{fig:col_den_sne_image})\nobtained from these simulations. \nAlthough a single power-law is a poor \napproximation to this spectrum, its average (and the best-fit) power-law slope \ninterestingly is close to $-11\/3$ and does not change in any significant \nmanner during the `evolution only' phase. \nSpectra of distributions of column density in\nthin slabs (not shown here) reveal a similar non-`single-power-law' nature, \nand they show little variation in the `evolution only' phase. \nHowever, their average slope is consistently shallower than $-11\/3$,\nand closer to $-8\/3$. Power spectra for several slabs (at different Zs) \nwere averaged together to obtain statistically improved estimates.\n\n\n\nIt is also instructive to examine the velocity distribution and its spectrum. \nIn Figure~\\ref{fig:te_zv_spec}, we show a sample Z-velocity distribution in the X-Y plane at an arbitrary Z \nand an average of the spatial power spectra of such distributions at various Zs \n(computed separately and then averaged). A single power-law seems adequate to\ndescribe the spatial spectra of 1-d components of the 3-d velocity in our simulations.\nThe power-law slope is remarkably almost always $-11\/3$, or very close to it, \nand may change only slightly through\nthe `evolution only' phase as the velocity spread (range) \ndecreases by orders of magnitude.\nWhile forming these conclusions, we have of course ignored the behavior at the\nlower spatial frequency end due to its poor statistical significance.\n\n\nThe parameter and its distribution that can be most closely related to \nobservational results is the column density as a function of velocity, as in the\ncase of, for example, the 21-cm HI line observations. From the simulated distribution\nof mass and its 3-d velocity, we extract the spatial distribution \n(in, say, the X-Y plane)\nof the column density (i.e. integrated along Z) associated with a given range of the\nvelocity component along Z. 
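A sketch of this velocity-slicing step, assuming a mass cube and a matching cube of Z-velocities from the evolution run, could read as follows; the number of slices and the $\\pm1\\sigma$ coverage anticipate the choices described next in the text, and everything else is a placeholder.
\\begin{verbatim}
import numpy as np

def velocity_sliced_column_density(mass, vz, n_slices=30):
    """Column density along Z of the mass whose v_z falls in each slice."""
    sigma = vz.std()
    edges = np.linspace(vz.mean() - sigma, vz.mean() + sigma, n_slices + 1)
    maps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_slice = (vz >= lo) & (vz < hi)
        maps.append(np.where(in_slice, mass, 0.0).sum(axis=2))
    return np.asarray(maps)              # shape (n_slices, Nx, Ny)

# each returned map can be passed to the azimuthal power-spectrum routine
# sketched earlier, and the per-slice spectra then averaged together
\\end{verbatim}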
In general, we examine about 30 velocity ranges\ncovering the estimated spread ($\\pm 1\\sigma$) of the 1-d velocity, \nand for each of these velocity slices, the transverse column-density distribution \nis Fourier analyzed and the \npower spectra are estimated. Such spectra for the different velocity slices \nare averaged together with natural weighting. \nFigure~\\ref{fig:te_vslice_spec} shows a sample\ncolumn-density distribution for one such velocity slice \n(across a velocity width of about 1 km\/s) \nand a power spectrum obtained by averaging power spectra for different slices.\nThe spectrum has a slope of $-8\/3$ at the higher spatial frequency end. \nIt is important to mention\nthat the same spectral slope also extended to lower spatial frequencies prior to\nentering the `evolution only' phase. Such velocity-sliced column-density\ndistributions appear to show considerable variation in their spectral signatures, \nin general, and more systematically as the `evolution only' phase progresses. \nThis is in contrast\nto the generally stable signatures evident for the other spectra we discussed above.\nHere, almost invariably, the average power-law slope is close to $-8\/3$ or shallower. \nThe latter is rare during the main-run phase, but is almost always the case during the \n`evolution only' phase. We have investigated this\naspect in greater detail and find that in this phase the spectrum becomes \nprogressively shallower, often accompanied by considerable flattening toward \nlower frequencies. We attempt to understand this in the following way. \nAs mentioned earlier, the velocity spread ({\\it rms})\ndecreases monotonically through the `evolution only' phase in the absence \nof any further energy input. As the velocity dispersion decreases and \nmixing continues, the scale over which significant velocity coherence \nsurvives also decreases progressively. This naturally\nleads to significant additional corrugation of the transverse \ndistribution on correspondingly smaller spatial scales when viewed \nthrough a velocity filter that is narrow compared to\nthe velocity spread. In the present case, the velocity filter widths \nare chosen as a fixed fraction of the {\\it rms} velocity, and thus may help \nto reduce the corrugation effects, although the consequences of progressive shortening \nof the coherence scale seem unavoidable.\n\n\n\\noindent{\\section{Discussion}}\n\nBefore we discuss our results and their implications, it is important to \nmention that although our primary motivation related to the neutral atomic\nhydrogen distribution in the ISM, the attempted modeling is equally\nrelevant for the ionized hydrogen component. In our SN-driven\nISM, it is not far fetched to consider the existence of these two components\nas mutually exclusive in their spatial distributions. \nFor example, the HI voids correspond to the very\nspace that is ionized as a consequence of the SN event. HII regions and the thin\nHI shells around them (that are in turn surrounded by H$_2$ shells, i.e. molecular \nclouds) represent another example of the\nsomewhat mutually exclusive existence of the two\ncomponents. As long as we are concerned about only the structural features\nand spatial distributions, modeling of either of the two components\nwould have implicit and direct relevance for the other. 
In more practical terms,\nvoids\/bubbles could be replaced by spheres, and the answers in terms of\nspatial spectrum or structure function would be unaltered.\n\nAs was seen earlier, the density spectrum from our `snap-shot' simulation approaches a\npower-law slope of about -3.4 as the number of supernovae, and the consequent \nbubble-like SNRs, is increased, so long as the bubbles are modeled without \nunnaturally sharp edges. The results of our `time evolution' modeling reveal a spectral\nsignature of the 1-d velocity distribution that matches remarkably well with that\nexpected from Kolmogorov turbulence. \nThis result appears significant and, in our opinion, is not\na chance coincidence. We remain however aware that though an incorrect\nspectral signature might be taken to imply an incorrect scenario,\ncorrect spectral signature does not necessarily imply the (sole) correct scenario. \n\n A Kolmogorov spectrum is theorized to\narise when energy cascades from larger scales down to smaller scales.\nImplicit in this, we believe, is a certain hierarchical \nstructure on a range of spatial scales.\nOur results from the snap-shot simulation seem to suggest that\nthe spectrum actually derived from observations\nmay primarily be the manifestation \nof structural features that are almost entirely dominated\nby the expanding voids\/shells of SN\/SNRs and\/or structures with similar morphology. \nThis interesting result may also suggest\nthat the role any explicit interaction between the ejecta from nearby\nevents plays is probably not very crucial in defining the statistical distribution.\nThough the explicit interaction between nearby events has to occur,\nit may contribute significantly mainly at relatively larger spatial-scales,\ni.e. as in the formation of hot, connected tunnels (Cox \\& Smith, 1974).\n\nThe simple approach in our `time evolution' models also seems adequate\nto capture the key features of the evolution and the resultant structural\ndescription of the ISM. Each new supernova event is incorporated by introducing \na {\\it seed shell} with its velocity field determined \nbased on the mass and the energy input contributed by the SN. \nIt is remarkable that momentum conservation alone\nas a governing feature in the subsequent evolution of a given shell \nas it interacts with the surrounding matter seems to yield velocity\nstructure that has a spatial spectrum similar to that for Kolmogorov turbulence.\nThis is not surprising, since momentum conservation is considered to be \nthe characteristic of the later phases (of SNR evolution) which last the longest.\n\nAlthough prompted by limitations in computing speeds to use SN rates\nthat were artificially raised, we have performed elaborate tests to verify that\nour basics results, i.e. the spatial spectral signatures, remain valid for a \nwide-range of SN rates. We have also checked and found that the exact shape\nof the SN-induced seed shell is not very critical, so long as sharp edges and corners\nare not present, the SN statistics are adequately large, and time spans are long enough\nto allow enough interaction with the surrounding matter.\n\nWe now turn to our specific results. 
\nFirstly, we find that our simple\nsimulations provide a more realistic\nstructural description of the ISM compared to when the distributions are \nsynthesized directly from a red spectrum across a spatial frequency range of interest \n(following a suitable power-law).\nThe present simulations are able to naturally produce 1-d and 2-d features such as\nfilaments and curved sheets in a manner that is internally consistent with \nthe diffuse component and the overall spatial spectrum. \nIn fact, such lower dimensional features apparent in our simulated density \ndistributions may be contributing to the deviations of the associated\nspatial spectrum from a single power-law nature.\nAlso, any mutual correlations between the velocity and density structures \nis implicit in our `time evolution'\nsimulations, an aspect that is rather difficult to incorporate by imposing explicit\ncorrelation properties, particularly since it could introduce an undesired bias.\n\nSecondly, it is instructive to note the significant differences between the\nspectral descriptions for the column density distributions and for\nthe distribution of 1-d components of the 3-d velocity. Despite the latter having \na consistent single power-law with Kolmogorov index, the former clearly deviates\nfrom it at both the low and high spatial frequencies. This may provide important\nclues for our understanding of apparent deviations from the Kolmogorov\nspectrum that interstellar scintillation observations have indicated\n(e.g., Roberts \\& Ables 1982), \nprompting serious need to invoke either steeper spectra (i.e., index $\\le-4$),\nor those with suitably larger `inner scale' (see, for example, \nGoodman \\& Narayan 1985; Blandford \\& Narayan 1985;\nRomani, Narayan \\& Blandford 1986).\nIf our results are to be taken at their face value,\nit appears to us that the spectral behavior needed (for column density\nacross thick or thin screens) to explain the observations\nmay not be in conflict with the underlying turbulence being of a Kolmogorov\nnature. Indeed, if the SN-induced turbulence is to be the dominant process\ndriving the ISM, there may not be any compelling need for invoking deviations\nfrom a basic Kolmogorov signature. The argument can be turned around to conclude\nthat the refractive scintillation observations are consistent with, \nand lend support to, \nthe possibility that the ISM may be primarily SN-driven.\n\nThirdly, we draw attention to our results on the distribution\nof density across narrow velocity slices. These reveal spatial spectra that\nare invariably significantly shallower than $-11\/3$, \nand rather close to $-8\/3$ or shallower,\neven while the spatial structure in velocity remains closely Kolmogorov.\nThese results can be readily compared with observations, and are found to indeed\nbe consistent with results derived from the distributions of Galactic\nHI emission and opacity observed in narrow frequency (or velocity) channels.\nThe unavoidable implications of such shallow spectra for the apparent\ncolumn density (or opacity) fluctuations on small transverse spatial scales\nhave been discussed in detail elsewhere (Deshpande 2000). \nFor the present discussion we \nfocus on the fact that the expected magnitude of such fluctuations\non a given transverse scale has a direct dependence on both the overall \nHI column-density (or opacity) and the details (such as the power-law slope)\nof its spatial spectrum. 
In this particular context, the noticeable\nvariation in the spectral characteristics associated with the velocity slices\nis of great significance. That is, we should not be surprised to find\na considerable variety in the distributions sampled by velocity slices,\nand consequently in the\nstrength of column density or opacity fluctuations while observing\nin different directions in the Galaxy. It may be recalled that the \nmore significant and systematic modifications of these spectra are seen to occur\nas our `evolution only' phase progresses. Had we used the expected SN rate\nfor the volume simulated (i.e. instead of an elevated rate), \nwe would expect the intervals between successive supernovae to be \nlong enough to be describable as genuine episodes of `evolution only' phase. \nGiven this, it is only reasonable to expect a rich variety of spectral characteristics\nto be evident episodically even while observing a given region of the Galaxy.\nIn practice, for observations made during a narrow epoch range, this would be\nthe case across the different regions of the Galaxy. Thus, the observed\nspatial variation of opacity in the case of even 3C138, often cited \nas evidence for the so-called `tiny-scale structure' (Faison \\& Goss 2001, and\nreferences therein), becomes easily understandable\nas a simple manifestation of a power-law distribution of opacity across narrow\nvelocity slices.\n\nLastly, we will next extend our simple recipe such as to monitor and study\nother aspects characterizing the ISM. The effect of Galactic rotation\non the apparent distribution in velocity slices needs to be assessed, particularly\nwhen simulating much bigger volumes than in the present case. More importantly, \none hopes to keep track of other variables, such as temperature and the resulting ionized\nfraction of the ISM. These extensions will be followed up in a separate paper.\n\n\\noindent{\\section{Conclusions\/Summary}}\n\nIn conclusion, we feel that our highly simplified approaches to the modeling of the\nISM have led to several promising and instructive results, as summarized below.\n\n \n1) Our `snap-shot' simulation of an SN-driven ISM, modeled as a set of \nnon-overlapping voids of a range of sizes, appears to be \nsuccessful in capturing the key structural features of the medium, \nincluding a power-law nature of the spatial spectrum (of slope about -3.4).\n\n2) Another, more realistic, approach following `time evolution' of the \nSN-induced shells also\nproduces a remarkable match with the structural features of the ISM derived from\navailable observations. A simple interaction with the surrounding matter,\nusing only momentum conservation, appears to be adequate to govern the\nevolution. \n\n3) Our simulations show that while the velocity distribution closely follows \na Kolmogorov spectrum, the column density\ndistribution shows deviations from this both for high and low spatial frequencies.\nThis may have important relevance to the properties of the medium as implied \nby observations of refractive scintillation.\n\n4) The spatial spectral characteristics of distributions across velocity slices \nand the significant variation between them suggest that the magnitude of the \ncolumn-density or opacity variation on small\ntransverse scales can differ considerably when observing different directions \nin the Galaxy.\n\n5) The present simple-minded approaches (i.e. 
the `time evolution' simulations) \nprovide a set of internally consistent \ndensity and velocity distributions that depict 1-d \\& 2-d structure in addition to the \ndiffuse component and can be very useful in detailed studies of other properties of the\nHI and ionized components of the interstellar medium.\n\n\\acknowledgments\nWe thank Chris Salter, V. Radhakrishnan, Bon-chul Koo and James Ingalls for their \ncritical reading of the manuscript and for useful comments.\nOur thanks also to an anonymous referee for several critical comments\nthat have helped us in improving the manuscript. \nWe gratefully acknowledge the excellent support from the computer groups at\nthe Arecibo Observatory and Cal Poly. \nThis work was made possible by support from the REU program of NAIC, funded\nby the NSF. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nThe purpose of this paper is to study hyperplane sections of certain schemes, including log smooth schemes. We first study the case of schemes over a field $k$ and prove two variants of Bertini's theorem for smoothness. The first one is a Bertini theorem for log smoothness. To state it, let be a monoid $Q$ and a map $Q\\to k$ such that $(k,Q)$ is a \\emph{log point} (\\ref{setup2}). Assume $X\\subset\\mathbb{P}^n_k$ is a subscheme, $H\\subset\\mathbb{P}^n_k$ a hyperplane, $X\\cap H$ the scheme-theoretic intersection, and $i:X\\cap H\\subset X$ the inclusion. Let $M$ be a log structure on $X$ and $f:(X,M)\\to (k,Q)$ a morphism of log schemes.\n\n\\begin{thm}[Bertini theorem for log smoothness (\\ref{bertini6})]\nAssume $f$ is a log smooth morphism. If $f$ is a \\emph{sharp} morphism of log schemes, then for $H$ generic $(X\\cap H, i^*M)$ is log smooth over $(k,Q)$.\n\\end{thm}\n\nThe notion of sharp morphism of log schemes is defined in \\ref{sharp}. Suffices to say here that sharpness is a quite natural condition which is mostly pertinent to positive characteristic (cf. \\ref{tame}). The theorem is optimal in the sense that we show by example that the sharpness assumption cannot be dropped in general (\\ref{cx}).\n\nThis Bertini theorem applies in particular to all varieties with toric singularities as well as singular schemes which are locally isomorphic to a normal crossings divisor in a smooth variety. In fact, the latter case was previously shown by Jannsen and Saito \\cite{jannsensaito} by reducing to the Bertini theorem for classical smoothness. By contrast, our proof is a direct logarithmic adaptation of the transcendence degree argument given by Jouanolou in \\cite{bertini}.\n\nThe second variant of Bertini's theorem, hinted at in \\cite[2.1]{fa2}, does not make any assumption on the singularities of $X$, but rather assumes the existence of a morphism of finite type $m:\\mathcal{X}\\to X$ and a vector bundle quotient $\\mathcal{E}$ of $m^*\\Omega^1_{X\/k}$.\n\n\\begin{thm}[Bertini theorem for abstract smoothness (\\ref{bertini7})]\\label{bertabstract}\nLet $\\mathcal{N}$ denote the conormal sheaf of $X\\cap H$ in $X$, and consider the natural morphism\n\\[ \\phi:m^*\\mathcal{N}\\to i^*\\mathcal{E} \\]\nof coherent sheaves on $\\mathcal{X}\\times_X(X\\cap H)$. 
If $\\dim\\mathcal{X}\\leq\\rk\\mathcal{E}$, then for $H$ generic the cokernel of $\\phi$ is a vector bundle of rank $\\rk\\mathcal{E}-1$ on $\\mathcal{X}\\times_X(X\\cap H)$.\n\\end{thm}\n\nIn the case $X$ is smooth and $\\mathcal{X}=X$ one recovers the classical Bertini theorem for smoothness; in fact, the proof of the latter given in \\cite{bertini} applies mutatis mutandis.\n\nWe then study hyperplane sections of singular schemes with the additional constraint that the hyperplanes all pass through a given point. In this case simple examples show that, unlike the classical smooth case, for $X$ log smooth one cannot guarantee that a generic hyperplane section through $P\\in X$ is log smooth at the point $P$ (take $X$ to be the union of coordinate axes in affine space and $P$ the origin). The interesting case is when $X$ is defined over a complete discrete valuation ring $V$ with singular special fibre. Assume the residue field $k$ of $V$ is algebraically closed and let $K$ denote the fraction field of $V$.\n\n\\begin{thm}[\\ref{maingoodnhbprop}]\\label{goodetalemap}\nAssume $\\cha(K)=0$. Let $X$ be a flat projective $V$-scheme with geometrically reduced generic fibre of dimension $d$ everywhere. Fix a closed point $P\\in X_k:=X\\otimes_Vk$. If $X$ is regular at $P$, then, up to modifying the projective embedding of $X$, for generic $V$-hyperplanes $H_1,...,H_d$ passing through $P$ the $V$-scheme $X\\cap H_1\\cap\\cdots\\cap H_d$ has \\'etale generic fibre.\n\\end{thm}\n\nThe basic idea of the proof is to find a nice model of $X_K:=X\\otimes_VK$ (in fact the blow up of $X$ at $P$) which has a convenient projective embedding for taking hyperplane sections.\n\nFinally, we use the results on hyperplane sections to construct $K(\\Pi,1)$-neighbourhoods of smooth $V$-schemes following Faltings \\cite{fa2}, and deduce that the $p$-adic nearby cycles can be computed locally as the Galois cohomology of the geometric generic fibre. For another exposition of this result, see Olsson \\cite{olsson}.\n\n\\subsection*{Acknowledgement}\nIn a previous version of this paper some claims were incorrect. I am indebted to Piotr Achinger for providing counterexamples to those claims.\n\n\\subsection*{Overview of the paper}\nIn \\S1 we define the notion of a sharp morphism of log schemes and give examples. Then we recall some facts about log smooth morphisms.\n\nIn \\S2 we set up and prove affine versions of the Bertini theorems. We first show the Bertini theorem for log smoothness \\ref{logbertini}. The idea of the proof is as follows. The argument of \\cite{bertini} is that if the generic hyperplane section of a smooth scheme $X$ were singular at a point $x$, then the resulting $k(x)$-linear relation between differentials in $\\Omega^1_{k(x)}$ would force the transcendence degrees to be too small. Our proof reduces to this very same argument. The key is that for a point $x$ on a \\emph{sharp} log scheme $X$ over a log point, the usual differentials $\\Omega^1_{k(x)}$ inject into the logarithmic differentials $\\omega^1_{k(x)}$ (\\ref{satz2}), therefore any relation between regular differentials in the latter is in fact a relation in the former. Then in \\ref{cx} we construct a counterexample which shows that the theorem is false in general for log smooth schemes which are not sharp. 
Afterwards we prove the Bertini theorem for abstract smoothness by applying the argument of \\cite{bertini}.\n\nIn \\S3 we show that the Bertini theorems in the quasi-projective case follow from the affine case already shown (\\ref{bertini6}, \\ref{bertini7}). Here we also follow the method sketched in \\cite{bertini}.\n\nIn \\S4, we study the case of schemes over a complete discrete valuation ring $V$. After some preliminaries in \\ref{special}, we study the case of a generic hyperplane through a given rational point $P\\in\\mathbb{P}^n(k)$. For classical smoothness, up to modifying the projective embedding there is a Bertini theorem (cf. \\cite[Exp. XI]{sga4.3}); the underlying idea is that in a suitable projective space any hyperplane corresponds to a hyperplane in $\\mathbb{P}^n_V$ through $P$, and one can apply the usual Bertini theorem there. Although one does not obtain information about the intersection at the point $P$, this suffices in the case $X$ has good reduction, see \\ref{smoothcase}. For the general case of Theorem \\ref{goodetalemap} we proceed as follows: By regularity, the intersection of a generic hyperplane through $P$ with $X$ is regular at $P$. To deal with the other points we use the Bertini theorem for abstract smoothness. Ultimately, what makes this argument work is the nice projective embedding of the blow up of $X$ at $P$ provided by \\ref{closedimmersion}.\n\nIn \\S5 we apply the previous results to construct good neighbourhoods of points on smooth projective $V$-schemes.\n\nIn \\S6 we construct the $K(\\Pi,1)$-neighbourhoods of points of smooth $V$-schemes. To finish we explain the relevance to $p$-adic nearby cycles.\n\n\\section{Sharp, tame and smooth morphisms of log schemes}\n\n\\subsection{References and notation}\nAs a general reference for logarithmic geometry we will use K. Kato's foundational articles \\cite{log}, although for a result on differentials we refer to Ogus' notes \\cite{logbook} for lack of other reference. All monoids are commutative with unit element denoted by $1$ (with the exception of $\\mathbb{N}$ and $\\mathbb{Z}$ whose unit element is traditionally denoted by $0$). Log structures are taken for the \\'etale topology. Given a morphism of monoids $Q\\to P$ and a $\\mathbb{Z}[Q]$-scheme $Y$ we write $Y_Q[P]:=Y\\times_{\\Spec(\\mathbb{Z}[Q])}\\Spec(\\mathbb{Z}[P])$.\n\n\\subsection{Log points}\\label{setup2}\nLet $(k,Q)$ be a \\emph{log point}, i.e. $k$ is field with a monoid $Q$ defining the log structure via the map $Q\\to k$ defined\n\\[ q \\mapsto\n\\begin{cases}\n1 & q=1 \\\\\n0 & q\\neq 1.\n\\end{cases} \\]\nA \\emph{monogenic log point} is the case where $Q$ is a monogenic monoid, i.e. $Q$ can be generated by a single element. The \\emph{standard log point} is the case where $Q=\\mathbb{N}$.\n\nWe consider log points as log schemes, so that a morphism of log points just means a morphism of the associated log schemes. An \\emph{extension of log points} $(k,Q)\\to (L,P)$ is a morphism of the log points, together with a map of monoids $Q\\to P$ giving a chart for this morphism.\n\n\\subsection{Sharp morphisms}\\label{sharp}\nLet $f:(X,M)\\to (Y,N)$ be a morphism of log schemes. Let $x\\to X$ be a geometric point, and $y=f(x)$. 
We say that $f$ is \\emph{sharp at $x$} if there is a chart $Q\\to P$ of $f$ at $x$ such that $P\\to k(x)$ and $Q\\to k(y)$ are log points.\n\nWe say that $f$ is \\emph{sharp} if $f$ is sharp at all $x\\to X$.\n\n\\begin{proposition}\\label{sharplemma}\nLet $f:(X,M)\\to (Y,N)$ be a morphism of log schemes.\n\\begin{enumerate}[(i)]\n\\item If $f$ is sharp at a geometric point $x\\to X$, then, in the above notation, $P$ and $Q$ are sharp monoids and $(k(y),Q)\\to (k(x),P)$ is an extension of log points.\n\\item If $N$ is the trivial log structure and $M$ is fine and saturated, then $f$ is sharp.\n\\item If $f$ is sharp and $g:Z\\to X$ is a morphism of schemes, then $f\\circ g:(Z,g^{\\ast}M)\\to (Y,N)$ is sharp.\n\\item If $f$ is sharp and $g:Z\\to Y$ is a morphism of schemes, then $g^*(f):(X\\times_YZ,(f^*(g))^{\\ast}M)\\to (Z,g^*N)$ is sharp.\n\\item If $f$ is sharp at a geometric point $x\\to X$ and $M$ is integral (resp. fine, resp. saturated), then, in the above notation, $P$ is integral (resp. fine, resp. saturated).\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n(i), (iii) and (iv) follow easily from the definition. For (ii), pick a geometric point $x\\to X$ and let $P=M_x\/\\mathcal{O}_{X,x}^{\\ast}$. Then it is well-known that the map $M_x\\to P$ has a section $P\\to M_x$ such that the log structure associated to $P\\to \\mathcal{O}_X$ is the log structure $\\alpha:M\\to \\mathcal{O}_X$ in a neighbourhood of $x$. This is obtained by choosing a section of $M^{\\gp}_x\\to P^{\\gp}$ which exists because $P^{\\gp}$ is a free abelian group (since $M$ is fine and saturated). Let $p\\in P$. If $p\\in\\alpha^{-1}(\\mathcal{O}_{X,x}^{\\ast})\\cong \\mathcal{O}_{X,x}^{\\ast}$, then $p=1$ by construction. So $P\\to k(x)$ is a log point. The fact that $P$ is sharp follows immediately from its definition. Since $N$ is the trivial log structure, we may take $Q=\\{1\\}$ and this proves that $f$ is sharp at $x$.\n\nFor (v), it suffices to note that the canonical map $P\\oplus\\mathcal{O}_{X,x}^{\\ast}\\to M_x$ is an isomorphism.\n\\end{proof}\n\nThe main case of interest in this paper is when $(Y,N)$ is a log point or the spectrum of discrete valuation ring with its canonical log structure. See \\ref{exsharp} below for an example of a sharp morphism over the standard log point, and \\ref{satz1} for a whole class of log schemes over a monogenic log point which are sharp.\n\n\\subsection{Example}\\label{exsharp}\nSuppose $X$ is a $k$-scheme with a geometric point $x\\to X$ such that locally for the \\'etale topology at $x$, $X$ is isomorphic to the spectrum of the ring\n\\[ \\dfrac{k(x)[T_1,...,T_{d+1}]}{(\\prod_{i=1}^rT_i^{e_i})} \\]\nfor some $e_i\\in\\mathbb{N}$, with $T_i(x)=0$ for $i=1,...,d+1$. We can define a log structure in a neighbourhood $U$ of $x$ by the map\n\\[ \\mathbb{N}^r\\to\\mathcal{O}_U:(n_1,..,n_r)\\mapsto\\prod_{i=1}^rT_i ^{n_i} \\]\nsuch that if we give $k$ the standard log point structure (\\ref{setup2}), then the map\n\\[ \\mathbb{N}\\to\\mathbb{N}^r:1\\mapsto (e_1,...,e_r) \\]\ndefines a morphism of log schemes $(U,\\mathbb{N}^r)\\to (k,\\mathbb{N})$.\n\n\\begin{lemma}\nThe morphism $(U,\\mathbb{N}^r)\\to (k,\\mathbb{N})$ is sharp at $x$.\n\\end{lemma}\n\\begin{proof}\nIt suffices to check that the induced map $\\mathbb{N}^r\\to k(x)$ defines the structure of a log point and that the diagram\n\\[ \\begin{CD}\n\\mathbb{N} @>>> \\mathbb{N}^r \\\\\n@VVV @VVV \\\\\nk @>>> \\mathcal{O}_{X,x}\n\\end{CD} \\]\ncommutes. 
The latter is obvious and since $T_i(x)=0$ for all $i$, $(k(x),\\mathbb{N}^r)$ is a log point.\n\\end{proof}\n\n\\subsection{Tame morphisms}\\label{tame}\nLet $f:(X,M)\\to (Y,N)$ be a morphism of log schemes. We say that $f$ is \\emph{tame} if the sheaf of abelian groups $M^{\\gp}\/(f^{\\ast}N)^{\\gp}$ has no torsion divisible by any of the residue characteristics of $Y$. (For any sheaf of monoids $M$ we write $M^{\\gp}$ for the associated sheaf of abelian groups.)\\\\\n\nThe interest of tame morphisms is that they provide a large class of sharp morphisms over a monogenic log point, as the following result shows.\n\n\\begin{proposition}\\label{satz1}\nSuppose that $f:(X,M)\\to (k,Q)$ is a morphism of log schemes with $(X,M)$ fine and saturated and $(k,Q)$ a monogenic log point. For any geometric point $x\\to X$ there is a fine saturated and sharp monoid $P$ defining $M$ in a neighbourhood of $x$, such that:\n\\begin{enumerate}[(i)]\n\\item $P\\to k(x)$ is a log point\n\\item if $f$ is tame, then there is a map $Q\\to P$ defining a chart of $f$ at $x$.\n\\end{enumerate}\nIn particular, if $f$ is tame, then $f$ is sharp.\n\\end{proposition}\n\\begin{proof}\nWe let $P=M_x\/\\mathcal{O}_{X,x}^*$ and then (i) can be proved as in \\ref{sharplemma} (ii).\n\nFor (ii), let $q\\in Q$ be a generator. Let $p_1,...,p_r$ be a set of generators of $P^{\\gp}\\cong\\mathbb{Z}^r$ and fix a section $s:P^{\\gp}\\to M_x^{\\gp}$. Since $M_x^{\\gp}=P^{\\gp}\\oplus\\mathcal{O}_{X,x}^{\\ast}$, we may write $q=u\\prod_{i=1}^rs(p_i)^{n_i}$ for some $u\\in\\mathcal{O}_{X,x}^{\\ast}$. Let $n=(n_1,...,n_r)$ be the greatest common divisor. Since $f$ is tame, $n$ is prime to $\\cha(k)$. Let $v\\in\\mathcal{O}_{X,x}^*$ satisfy $v^n=u$. There are integers $a_1,...,a_r$ such that $\\sum_{i=1}^ra_in_i=n$. Then we have $q=\\prod_{i=1}^r(v^{a_i}s(p_i))^{n_i}$. The section $s':P^{\\gp}\\to M_{x}^{\\gp}$ defined by $s'(p_i):= v^{a_i}s(p_i)$ is such that $q$ lies in $s'(P^{\\gp})\\cap M_x=s'(P)$. The induced map $Q\\to P$ is the desired chart.\n\\end{proof}\n\nSee Example \\ref{cx} for an example of a non-tame log smooth scheme over the standard log point which is not sharp.\n\n\\subsection{Log smooth morphisms}\\label{logsmoothdef}\nRecall that a morphism $f:(X,M)\\to (Y,N)$ of fine log schemes is \\emph{log smooth} if and only if, locally for the \\'etale topology, there exists a chart $Q\\to P$ of $f$ with $P$ and $Q$ fine monoids such that\n\\begin{enumerate}[(a)]\n\\item $\\ker(Q^{\\gp}\\to P^{\\gp})$ and the torsion subgroup of $\\cok(Q^{\\gp}\\to P^{\\gp})$ are finite groups of order invertible on $Y$\n\\item the induced map $X\\to Y_Q[P]$ is \\'etale.\n\\end{enumerate}\n$f$ is \\emph{log \\'etale} if, in addition, $\\cok(Q^{\\gp}\\to P^{\\gp})$ is a finite group of order invertible on $Y$.\n\n\\begin{remark} In spite of the above characterization, log smooth morphisms are not necessarily tame (see \\ref{cx} for an example).\n\\end{remark}\n\n\\subsection{Log differentials}\nLet $f:(X,M)\\to (Y,N)$ be a morphism of fine log schemes. 
We write $\\omega^1_{(X,M)\/(Y\/N)}$ for the module of relative logarithmic differentials.\n\n\\subsubsection{}\\label{diff}\nGiven any morphism $g:(Y,N)\\to (Z,L)$ of fine log schemes there is a right-exact sequence of sheaves of relative logarithmic differentials\n\\[ \\begin{CD}\n0 @>>> f^{\\ast}\\omega^1_{(Y,N)\/(Z,L)} @>>> \\omega^1_{(X,M)\/(Z,L)} @>>> \\omega^1_{(X,M)\/(Y,N)} @>>> 0\n\\end{CD} \\]\nand $f$ is log smooth if $g\\circ f$ is log smooth and the sequence is left-exact and locally split (\\cite[3.12]{log}).\n\n\\subsubsection{}\\label{jacobi}\nLet $i:Z\\hookrightarrow X$ be a closed immersion of ideal sheaf $\\mathcal{I}$ and give $Z$ the inverse image log structure of $(X,M)$ (making $i$ a strict closed immersion). Then there is a right-exact sequence\n\\[ \\begin{CD}\n0 @>>>\\mathcal{I}\/\\mathcal{I}^2 @>>> i^{\\ast}\\omega^1_{(X,M)\/(Y,N)} @>>> \\omega^1_{(Z,i^{\\ast}M)\/(Y,N)} @>>> 0\n\\end{CD} \\]\nand if $f$ is log smooth, then $f\\circ i$ is log smooth if and only if the sequence is left-exact and locally split. See \\cite[IV, 3.2.2]{logbook} for the proof.\n\n\\subsection{Generic log smoothness}\\label{generic}\nLet $f:(X,M)\\to (Y,N)$ be a morphism of finite presentation of fine log schemes with $Y$ irreducible. Assume moreover there is a chart $Q\\to N$ with $Q$ a fine monoid. Let $\\xi\\in Y$ be the generic point, endowed with the inverse image log structure of $(Y,N)$. We claim that if $f^{-1}(\\xi)$ is log smooth over $\\xi$, then there exists a dense open $V\\subset Y$ such that $f|_V$ is log smooth. We may assume $Y$ is quasi-compact. For each geometric point $x\\to f^{-1}(\\xi)$ there is an \\'etale neighbourhood $Z^x\\to f^{-1}(\\xi)$ of $x$, a chart $P^x\\to M|_{Z^x}$ with $P^x$ a fine monoid, and a map $Q\\to P^x$ such that $Z^x\\to \\xi_Q[P^x]$ is \\'etale and the map $Q\\to P^x$ satisfies property (a) of \\ref{logsmoothdef}. Moreover, since $M$ is fine, $P^x$ defines the log structure of $X$ in an \\'etale neighbourhood $X^x$ of $x$. Since $f$ is of finite presentation, up to replacing $X^x$ by an \\'etale neighbourhood of $x$, there is an open neighbourhood $Y^x$ of $\\xi$ in $Y$ and a morphism of finite presentation $X^x\\to (Y^x)_Q[P^x]$. Namely, we may take $Y^x$ to be a dense open subset of the image of $X^x\\to X\\overset{f}{\\to}Y$, up to replacing $X^x$ by $X^x\\times_YY^x$. Since this morphism becomes \\'etale at the point $\\xi\\in Y^x$, by \\cite[IV, 17.7.8]{ega} there is a dense open $U^x\\subset Y^x$ such that the map $X^x\\times_YU^x\\to (U^x)_Q[P^x]$ is \\'etale. Now since $f$ is of finite presentation, there is a finite number of points $x_1,...,x_n\\in f^{-1}(\\xi)$ such that the map $\\coprod_{i=1}^nX^{x_i}\\times_Y\\xi\\to f^{-1}(\\xi)$ is an \\'etale covering. If $U:=U^{x_1}\\cap U^{x_2}\\cap\\cdots\\cap U^{x_n}$, then there is a dense open $V\\subset U$ such that $\\coprod_{i=1}^nX^{x_i}\\times_YV\\to X\\times_YV$ is an \\'etale covering, and this proves the claim.\n\n\\subsection{Notation}\nIn the sequel we will usually drop the log structures from notation when it is clear which log structures are meant. Also we will always write $\\omega^1_{X\/Y}$ for logarithmic differentials, as opposed to the usual differentials $\\Omega^1_{X\/Y}$.\n\n\\section{Hyperplane sections}\n\nLet $k$ be a field.\n\n\\subsection{Affine hyperplane sections}\\label{setup}\nLet $X$ be a $k$-scheme of finite type and $f:X\\to\\mathbb{A}^n_k$ a morphism of $k$-schemes defined by global sections $f_1,...,f_n\\in\\Gamma(X,\\mathcal{O}_X)$. 
Let\n\\[ Z:=\\underline{\\Spec}_X\\left(\\mathcal{O}_X[U_0,...,U_n]\/(U_0+U_1f_1+U_2f_2+...+U_nf_n)\\right). \\]\nThen we have a projection morphism $q:Z\\to\\mathbb{A}^{n+1}_k$ whose fibre over a point $u=(u_0,...,u_n)$ is the fibre $f^{-1}(H_u)$, where $H_u$ is the hyperplane of equation $u_0+u_1T_1+u_2T_2+...+u_nT_n=0$ where $T_1,...,T_n$ are the global coordinates on $\\mathbb{A}^n_k$ with images $f_1,...,f_n$ respectively in $\\Gamma(X,\\mathcal{O}_X)$.\n\nThere is a natural isomorphism\n\\[ Z\\cong X\\times_k\\mathbb{A}^n_k \\]\nobtained by the map\n\\begin{eqnarray*}\n\\mathcal{O}_X[U_0,...,U_n]\/(U_0+U_1f_1+...+U_nf_n) &\\to &\\mathcal{O}_X[U_1,...,U_n]\\\\\nU_0 &\\mapsto& -\\sum_{i=1}^nU_if_i\n\\end{eqnarray*}\nso that $Z$ can viewed as trivial vector bundle on $X$.\n\nLet $\\mathbb{K}$ be the function field of $\\mathbb{A}^{n+1}_k$. We write $Z_\\mathbb{K}$ for the generic fibre of $q:Z\\to\\mathbb{A}^{n+1}_k$, $X_\\mathbb{K}:=X\\otimes_k\\mathbb{K}$, and $i_\\mathbb{K}:Z_\\mathbb{K}\\hookrightarrow X_\\mathbb{K}$ for the closed immersion deduced from $i$. Note that by \\cite[I, 6.3.18]{bertini} $q$ is a dominant morphism if and only if $\\dim\\overline{f(X)}>0$, hence $Z_{\\mathbb{K}}$ is non-empty if and only if $\\dim\\overline{f(X)}>0$.\n\nWe refer to \\cite[I, 6]{bertini} for further details.\n\n\\subsection{Bertini theorem for log smoothness}\nLet $(k,Q)$ be a log point, with $Q$ a fine monoid. We endow $\\mathbb{A}^{n+1}_k$ with the inverse image log structure of $(k,Q)$ via the canonical morphism $\\mathbb{A}^{n+1}_k\\to\\Spec(k)$. If $(X,M)\\to (k,Q)$ is a morphism of log schemes, then we have the fibre product $X\\times_{k}\\mathbb{A}^{n+1}_k$ in the category of log schemes (the log structure is just the inverse image of $M$ under the projection $X\\times_{k}\\mathbb{A}^{n+1}_k\\to X$). Finally, we give $i:Z\\hookrightarrow X\\times_{k}\\mathbb{A}^{n+1}_k$ the inverse image log structure.\n\nEndow $\\mathbb{K}$ with the inverse image log structure of $\\mathbb{A}^{n+1}_k$, $X_{\\mathbb{K}}$ the inverse image log structure of $X\\times_k\\mathbb{A}^{n+1}_k$, and $i_\\mathbb{K}:Z_\\mathbb{K}\\hookrightarrow X_\\mathbb{K}$ with the inverse image log structure of $X_{\\mathbb{K}}$.\n\n\\begin{theorem}\\label{main}\nWith the above notation and hypothesis. Assume that $(X,M)$ is a sharp log $(k,Q)$-scheme. If $f$ is unramified, then for any $z\\in Z_\\mathbb{K}$ the sequence\n\\[ \\begin{CD}\n0 @>>> k(z) @>{1\\mapsto \\sum_{i=1}^nU_idf_i}>> i_\\mathbb{K}^{\\ast}\\omega^1_{X_\\mathbb{K}\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{Z_{\\mathbb{K}}}}k(z) @>>> \\omega^1_{Z_\\mathbb{K}\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{Z_{\\mathbb{K}}}}k(z) @>>> 0\n\\end{CD} \\]\nis exact.\n\\end{theorem}\n\nThis theorem will be proved below. From it one derives the Bertini theorem for log smoothness.\n\n\\begin{corollary}\\label{logbertinigeneric}\nWith notation and hypothesis as in \\ref{main}. If $X$ is log smooth over $(k,Q)$, then $Z_\\mathbb{K}$ is log smooth over $(\\mathbb{K},Q)$.\n\\end{corollary}\n\\begin{proof}\nChoose $z\\in Z_{\\mathbb{K}}$. By \\ref{main} there is an ideal $I\\subsetneq\\mathcal{O}_{Z_{\\mathbb{K}},z}$ such that the sequence\n\\[ \\begin{CD}\n0 @>>> \\mathcal{O}_{Z_{\\mathbb{K}},z}\/I @>{1\\mapsto \\sum_{i=1}^nU_idf_i}>> (i_\\mathbb{K}^{\\ast}\\omega^1_{X_\\mathbb{K}\/\\mathbb{K}})_z @>>> (\\omega^1_{Z_{\\mathbb{K}}\/\\mathbb{K}})_z @>>> 0\n\\end{CD} \\]\nis exact. 
Since $\\omega^1_{X_\\mathbb{K}\/\\mathbb{K}}\\cong\\omega^1_{X\/k}\\otimes_k\\mathbb{K}$ is locally free, \\ref{main} implies that $\\text{Tor}_1^{\\mathcal{O}_{Z_{\\mathbb{K}},z}}((\\omega^1_{Z_{\\mathbb{K}}\/\\mathbb{K}})_z,k(z))=0$, i.e. $(\\omega^1_{Z_{\\mathbb{K}}\/\\mathbb{K}})_z$ is flat, hence so is $\\mathcal{O}_{Z_{\\mathbb{K}},z}\/I$. Thus, tensoring the exact sequence\n\\[ 0\\to I\\to \\mathcal{O}_{Z_{\\mathbb{K}},z}\\to \\mathcal{O}_{Z_{\\mathbb{K}},z}\/I\\to 0 \\]\nwith $k(z)$ we get an exact sequence\n\\[ 0\\to I\\otimes_{\\mathcal{O}_{Z_{\\mathbb{K}},z}}k(z)\\to k(z)\\to k(z)\\to 0 \\]\nand therefore $I\\otimes_{\\mathcal{O}_{Z_{\\mathbb{K}},z}}k(z)=0$, whence $I=0$ by Nakayama's lemma. It now follows from \\ref{jacobi} that $Z_{\\mathbb{K}}$ is log smooth over $(\\mathbb{K},Q)$ at $z$.\n\\end{proof}\n\n\\begin{corollary}\\label{logbertini}\nWith notation and hypothesis as in \\ref{main}. If $X$ is quasi-compact and $(X,M)\\to (k,Q)$ is log smooth, then there is a dense open $V\\subset \\mathbb{A}^{n+1}_k$ such that $q^{-1}(V)\\to V$ is log smooth over $(k,Q)$.\n\\end{corollary}\n\\begin{proof}\nThis follows from \\ref{generic}.\n\\end{proof}\n\n\\begin{proof}[Proof of \\ref{main}]\nThe proof is in the style of \\cite[I, 6.3]{bertini}. We may assume that $k$ is algebraically closed. We first show a lemma.\n\n\\begin{lemma}\\label{satz2}\nLet $(k,Q)\\to (L,P)$ be an extension of log points. Then the natural map\n\\[ \\Omega^1_{L\/k}\\to\\omega^1_{L\/k} \\]\nhas a section, in particular is injective.\n\\end{lemma}\n\\begin{proof}\nLet $N=Q\\oplus k^{\\ast}$ and $M=P\\oplus L^{\\ast}$ be the log structures. Note that if $(p,u)\\in P\\oplus L^{\\ast}$ then the image of $(p,u)$ in $L$ is 0 unless $p=1$. Let $d:L\\to\\Omega^1_{L\/k}$ be the exterior derivative and let $\\partial:M\\to\\Omega^1_{L\/k}$ be the homomorphism of monoids $M=P\\oplus L^{\\ast}\\to\\Omega^1_{L\/k}:(p,u)\\mapsto u^{-1}du$. Since the map $N\\to M$ maps $Q$ to $P$, one checks easily that the pair $(d,\\partial)$ forms a log derivation of $(L,M)$ to $\\Omega^1_{L\/k}$ over $(k,N)$ (\\cite[5.1]{logsmooth}). So, by \\cite[5.3]{logsmooth}, there is a unique $L$-linear map $s:\\omega^1_{L\/k}\\to\\Omega^1_{L\/k}$ such that $d=s\\circ d$ and $\\partial=s\\circ\\dlog$, where $\\dlog:M\\to\\omega^1_{L\/k}$ is the canonical map (the hypothesis that the log structures be fine is superfluous, cf. \\cite[IV, 1.1.6]{logbook}). In particular, the composition of $s$ with the canonical map $\\Omega^1_{L\/k}\\to\\omega^1_{L\/k}$ maps $dx\\in\\Omega^1_{L\/k}$ to itself, hence is the identity map by linearity.\n\\end{proof}\n\nLet $z\\in Z_{\\mathbb{K}}$ and suppose for a contradiction that the map\n\\[ \\begin{CD}\nk(z) @>{1\\mapsto \\sum_{i=1}^n=U_idf_i}>> \\omega^1_{X_\\mathbb{K}\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{X_\\mathbb{K}}}k(z)\n\\end{CD} \\]\nis zero. Let $x\\in X$ be the image of $z\\in Z_\\mathbb{K}$ and let $d=\\td(k(x)\/k)$ be the transcendence degree of $k(x)$ over $k$. Note that $d>0$: otherwise $Z_{\\mathbb{K}}\\times_X\\Spec(k(x))=\\Spec(k(U_0,...,U_n)\/(\\sum_iU_if_i))=\\emptyset$ and there is no point $z$ lying above $x$.\n\nChoose an algebraic closure $\\overline{k(x)}$ of $k(x)$ and let $k(x)^{\\text{sep}}$ be the separable algebraic closure of $k(x)$ in $\\overline{k(x)}$. Let $\\bar{x}\\to X$ be the geometric point lying above $x$ corresponding to the inclusion $k(x)\\subset k(x)^{\\text{sep}}$. 
Since $X$ is sharp over $k$, by definition (\\ref{sharp}) there is a chart $Q\\to P$ of $f$ at $\\bar{x}$ such that the induced map $P\\to k(\\bar{x})$ is a log point. Then the map $(k,Q)\\to (k(\\bar{x}),P)$ is an extension of log points, so by \\ref{satz2} the quotient $\\omega^1_{k(\\bar{x})\/k}\\cong\\omega^1_{k(x)\/k}\\otimes_{k(x)}k(\\bar{x})$ of $\\omega^1_{X\/k}\\otimes_{\\mathcal{O}_X}k(\\bar{x})$ contains $\\Omega^1_{k(\\bar{x})\/k}\\cong\\Omega^1_{k(x)\/k}\\otimes_{k(x)}k(\\bar{x})$. Thus, the canonical map $\\Omega^1_{k(x)\/k}\\to\\omega^1_{k(x)\/k}$ is injective. Hence $\\sum_iU_idf_i=0$ in $\\Omega^1_{k(x)\/k}\\otimes_{k(x)}k(z)$. Since $k$ is algebraically closed, it follows that $d=\\dim_{k(x)}\\Omega^1_{k(x)\/k}$. Since $f:X\\to\\mathbb{A}^n_k$ is unramified, $\\Omega^1_{k(x)\/k}$ is generated by $df_1,...,df_n$, so without loss of generality we may assume that $df_1,...,df_d$ form a $k(x)$-basis of $\\Omega^1_{k(x)\/k}$. Thus, for $i=d+1,...,n$ we may write $df_i=\\sum_{j=1}^da_{ij}df_j$ for some $a_{ij}\\in k(x)$. Then the relation $\\sum_iU_idf_i=0$ gives the following equations\n\\[ U_j+\\sum_{i=d+1}^nU_ia_{ij}=0 \\]\nfor each $j=1,...,d$. So we deduce that $U_1,...,U_d\\in k(z)$ belong to the $k(x)$-vector space generated by $U_{d+1},U_{d+2},...,U_n$. Moreover, since $U_0=-\\sum_{i=1}^nU_if_i$, it follows that\n\\[ k(z)=k(x)(U_0,U_1,...,U_n)=k(x)(U_{d+1},U_{d+2},...,U_n) \\]\nand so\n\\[ \\td(k(z)\/k(x))\\leq n-d. \\]\nOn the other hand we have\n\\begin{eqnarray*}\n\\td(k(z)\/k(x)) &=& \\td(k(z)\/k)-\\td(k(x)\/k) \\\\\n&\\geq & n+1-d\n\\end{eqnarray*}\nwhich is the desired contradiction.\n\\end{proof}\n\n\\begin{cx}\\label{cx}\nWe give an example of a non-sharp (and non-tame) log smooth scheme over the standard log point $(k,\\mathbb{N})$ whose generic hyperplane section is nowhere log smooth. Let $0\\neq p=\\cha(k)$ and consider the morphism of monoids\n\\begin{eqnarray*}\nh:\\mathbb{N} &\\to & \\mathbb{N}\\oplus\\mathbb{Z} \\\\\n1 &\\mapsto & (p,1).\n\\end{eqnarray*}\nThen $\\cok(h^{\\gp})\\cong\\mathbb{Z}$, hence\n\\[ X:=\\Spec(k)_{\\mathbb{N}}[\\mathbb{N}\\oplus\\mathbb{Z}]\\cong\\Spec\\left(k[t,u^{\\pm 1}]\/(t^p)\\right) \\]\nwith its natural log structure is log smooth over $(k,\\mathbb{N})$. Let $M$ denote the log structure on $X$, $N$ the standard log point structure on $k$, and $f:(X,M)\\to (k,N)$ the morphism given by map $h$. Note that the image of $(1,0)\\in\\mathbb{N}\\oplus\\mathbb{Z}$ in $M^{\\gp}\/(f^{\\ast}N)^{\\gp}$ is $p$-torsion: we have $p(1,0)=(p,0)=h(1)+(0,-1)$ and $(0,-1)\\in\\mathcal{O}_X^*$. So $f$ is not tame.\n\nWe claim that $f$ is not sharp. If not, then let $x\\to X$ be a geometric point and a chart $Q:=\\mathbb{N}\\to P$ of $f$ at $x$ such that $P\\to k(x)$ is a log point. Then we have an isomorphism $P\\oplus k(x)^*\\cong M_{k(x)}$. On the other hand, one sees easily that $M_{k(x)}=\\mathbb{N}\\oplus k(x)^*$, where $\\mathbb{N}\\to k(x)$ is the standard log point with $\\mathbb{N}$ generated by $t$. From this it follows that $P\\cong\\mathbb{N}$ as monoids, hence if $n\\in P$ is a generator, its image in $M_{k(x)}$ is of the form $tv$ for some $v\\in k(x)^*$. Now let $q=1\\in\\mathbb{N}=Q$ be the generator. By definition, its image in $M_{k(x)}$ is $t^pu$. On the other hand, its image in $P$ is $(tv)^m$ for some $m$. Thus, we must have $m=p$ and $u=v^p$. In particular, if the image of $x$ in $X$ is the generic point, then $k(x)$ is a separable closure of $k(u)$ so this is impossible. 
This proves the claim.\n\nNow set $f_1=t,f_2=u,f_3=u^{-1}$ and use the same notation as before. Pick a point $z\\in Z_{\\mathbb{K}}$ with image $x\\in X$. Then in $\\omega^1_{X\/k}$ we have\n\\[ 0=\\dlog(q)=\\dlog(p,1)=p\\dlog(1,0)+\\dlog(0,1)=\\dlog(0,1)=\\dlog(u) \\]\nso $du=0$ in $\\omega^1_{X\/k}$. Since $dt=t\\dlog(1,0)$ and $t(x)=0$, it follows that\n\\[ U_1dt+U_2du+U_3d(u^{-1})=0 \\]\nin $\\omega^1_{X_\\mathbb{K}\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{X_\\mathbb{K}}}k(z)$, so a generic hyperplane section of $X$ is nowhere log smooth over $(k,\\mathbb{N})$.\n\\end{cx}\n\n\\subsection{A generalization of the Bertini theorem for smoothness}\\label{bertinigeneral}\nLet again $f:X\\to\\mathbb{A}^n_k$ and $Z$ be as in \\ref{setup}. Here we do not assume the presence of any log structures. Assume there is a morphism of finite type $m:\\mathcal{X}\\to X$ with $\\dim \\mathcal{X}\\leq d$ and a surjective map of $\\mathcal{O}_\\mathcal{X}$-modules\n\\[ m^*\\Omega^1_{X\/k}\\twoheadrightarrow\\mathcal{E} \\]\nwith $\\mathcal{E}$ a vector bundle of rank $d$ on $\\mathcal{X}$. Define $\\mathcal{Z}=\\mathcal{X}\\times_XZ$. As before we let $\\mathbb{K}=k(U_0,...,U_n)$. Now, the map\n\\[ \\begin{CD} \\mathcal{O}_Z @>{1\\mapsto\\sum_{i=1}^nU_idf_i}>> \\Omega^1_{X\/k}\\otimes_{\\mathcal{O}_X}\\mathcal{O}_Z \\end{CD} \\]\nextends to a map\n\\[ \\mathcal{O}_\\mathcal{Z}\\overset{\\phi}{\\to}\\mathcal{E}\\otimes_{\\mathcal{O}_\\mathcal{X}}\\mathcal{O}_\\mathcal{Z}. \\]\nSet\n\\[ \\mathcal{F}:=\\cok(\\phi) \\]\nthe cokernel of the map $\\phi$. So we have a commutative diagram with exact rows\n\\[ \\xymatrix{\n\\mathcal{O}_{\\mathcal{Z}} \\ar[r] \\ar@{=}[d] & \\Omega^1_{X\/k}\\otimes_{\\mathcal{O}_{X}}\\mathcal{O}_{\\mathcal{Z}} \\ar[d] \\ar[r] & \\Omega^1_{Z\/k}\\otimes_{\\mathcal{O}_Z}\\mathcal{O}_{\\mathcal{Z}} \\ar[d] \\ar[r] & 0 \\\\\n\\mathcal{O}_{\\mathcal{Z}} \\ar[r]^{\\phi} & \\mathcal{E}\\otimes_{\\mathcal{O}_\\mathcal{X}}\\mathcal{O}_\\mathcal{Z} \\ar[r] & \\mathcal{F} \\ar[r] & 0\n}\n\\]\n\n\\begin{theorem}\\label{bertgeneral}\nIf $f$ is unramified, then $\\mathcal{F}_{\\mathbb{K}}$ is a rank $d-1$ vector bundle quotient of $\\Omega^1_{Z_{\\mathbb{K}}\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{Z_{\\mathbb{K}}}}\\mathcal{O}_{\\mathcal{Z}_{\\mathbb{K}}}$. Moreover, if $\\mathcal{Z}_{\\mathbb{K}}\\neq\\emptyset$, then $\\dim\\mathcal{Z}_{\\mathbb{K}}\\leq d-1$.\n\\end{theorem}\n\\begin{proof}\nNote that the last statement follows from \\cite[I, 6.3]{bertini}. For the first statement we will show that $\\mathcal{F}_{\\mathbb{K}}$ is a vector bundle of rank $d-1$ by the same argument as for the classical Bertini theorem for smoothness (loc. cit.). Since $\\mathcal{E}$ is flat, it suffices to show that the map\n\\[ \\phi(z):k(z)\\to\\mathcal{E}\\otimes_{\\mathcal{O}_\\mathcal{X}}k(z) \\]\nis injective at any point $z\\in \\mathcal{Z}_{\\mathbb{K}}$. Let $x\\in \\mathcal{X}$ be the image of $z$. Since $f$ is unramified, $df_1,...,df_n$ generate $\\Omega^1_{X\/k}$, and since $\\mathcal{E}$ is a quotient of the latter on $\\mathcal{X}$, we may assume that the images of $df_1,...,df_d$ form a $k(x)$-basis of $\\mathcal{E}\\otimes_{\\mathcal{O}_\\mathcal{X}}k(x)$. So for $i=d+1,...,n$ we can write $df_i=\\sum_{j=1}^da_{ij}df_j$ for some $a_{ij}\\in k(x)$. If $\\phi(z)$ is not injective, i.e. $\\sum_{i=1}^nU_idf_i=0$ in $\\mathcal{E}\\otimes_{\\mathcal{O}_\\mathcal{X}}k(z)$, then we get equations\n\\[ U_j+\\sum_{i=d+1}^nU_ia_{ij}=0 \\]\nfor $j=1,...,d$. 
Together with $U_0+\\sum_{i=1}^nU_if_i=0$, these equations imply that\n\\[ k(z)=k(x)(U_{d+1},...,U_n) \\]\nhence\n\\[ \\td(k(z)\/k(x))\\leq n-d. \\]\nOn the other hand, since $\\dim \\mathcal{X}\\leq d$ we have $\\td(k(x)\/k)\\leq d$, hence\n\\[ \\td(k(z)\/k(x))=\\td(k(z)\/k)-\\td(k(x)\/k)\\geq n+1-d \\]\na contradiction.\n\\end{proof}\n\nThe theorem easily implies the following generalization of Bertini's theorem for smoothness (the classical case being $\\mathcal{X}=X$ and $\\mathcal{E}=\\Omega^1_{X\/k}$).\n\n\\begin{corollary}\nIf $f$ is unramified and $X$ quasi-compact, then there is a dense open $U\\subset\\mathbb{A}^{n+1}_k$ such that $\\mathcal{F}_U$ is a rank $d-1$ vector bundle quotient of $\\Omega^1_{Z_U\/U}\\otimes_{\\mathcal{O}_{Z_U}}\\mathcal{Z}_U$.\n\\end{corollary}\n\\begin{proof}\nIt suffices to find $U$ such that $\\mathcal{F}_U$ is a vector bundle of rank $d-1$. First assume that $\\mathcal{F}_{\\mathbb{K}}$ is free of basis $e_1,...,e_{d-1}$ say. Then we can find a dense open set $U'\\subset\\mathbb{A}^{n+1}_k$ such that $e_i\\in\\mathcal{F}_{U'}$ for $i=1,...,d-1$, and this defines a map $\\mathcal{O}_{\\mathcal{Z}_{U'}}^{d-1}\\to\\mathcal{F}_{U'}$. Since the cokernel $\\mathcal{G}$ of this map vanishes over the generic point of $\\mathbb{A}^{n+1}_k$, by a limit argument we can find a dense open $U\\subset U'$ such that $\\mathcal{G}_U=0$. Since $X$, hence $\\mathcal{Z}$, is quasi-compact, $\\mathcal{Z}\\to\\mathbb{A}^{n+1}_k$ is of finite presentation, so the general case follows easily from this.\n\\end{proof}\n\n\\subsection{Multiple hyperplane sections}\\label{multiple}\nWe can generalize the previous results to the case of $r$ hyperplanes. This time we let\n\\[ Z:=\\underline{\\Spec}_{X}\\left(\\dfrac{\\mathcal{O}_{X}[U_i^{(j)}]_{0\\leq i\\leq n, 1\\leq j\\leq r}}{\\left(U_0^{(j)}+\\sum_{i=1}^nU_i^{(j)}f_i\\right)_{j=1,...,r}}\\right) \\]\nand $q:Z\\to\\mathbb{A}^{(n+1)r}_k$ the projection. The generic fibre of $q$ will be non-empty if (and only if) $\\dim\\overline{f(X)}\\geq r$; namely if the latter condition holds then there is an irreducible component $Y$ of $X$ for which $\\dim\\overline{f(Y)}\\geq r$ and then $Z\\times_XY\\cong Y\\times_k\\mathbb{A}^{nr}_k$ is an irreducible component of $Z\\cong X\\times_k\\mathbb{A}^{nr}_k$ and the generic fibre of $q|_{Z_Y}$ is non-empty by \\cite[I, 6.6]{bertini}.\n\n\\begin{corollary}\\label{bertini1}\nEndow $\\mathbb{A}^{(n+1)r}_k$ with the inverse image log structure from $\\Spec(k)$ and $Z$ with the inverse image log structure of $X$ (via the structure maps). If $f$ is unramified and $X\\to\\Spec(k)$ is sharp log smooth, then the generic fibre of $q$ is log smooth.\n\\end{corollary}\n\\begin{proof}\nThe proof is by induction on $r$, the case $r=1$ being of course \\ref{logbertinigeneric}. Let $\\mathbb{K}'=k(U_i^{(j)})_{0\\leq i\\leq n, 1\\leq j\\leq r-1}$ and consider\n\\[ Z':=\\underline{\\Spec}_{X}\\left(\\dfrac{\\mathcal{O}_{X}\\otimes_k\\mathbb{K}'}{\\left(U_0^{(j)}+\\sum_{i=1}^nU_i^{(j)}f_i\\right)_{j=1,...,r-1}}\\right). \\]\nBy induction, $Z'$ is log smooth over $(\\mathbb{K}',Q)$. Moreover, it is sharp by \\ref{sharplemma} (iii) and (iv). So by \\ref{logbertinigeneric} the scheme\n\\[ Z''=\\underline{\\Spec}_{Z'}\\left(\\dfrac{\\mathcal{O}_{Z'}\\otimes_{\\mathbb{K}'}\\mathbb{K}'(U_0^{(r)},U_1^{(r)},...,U_n^{(r)})}{\\left(U_0^{(r)}+\\sum_{i=1}^nU_i^{(r)}f_i\\right)}\\right) \\]\nis log smooth over the field $\\mathbb{K}'(U_0^{(r)},U_1^{(r)},...,U_n^{(r)})=k(U_i^{(j)})_{0\\leq i\\leq n, 1\\leq j\\leq r}$. 
To complete the proof it suffices to remark that this is just the generic fibre of $q$.\n\\end{proof}\n\nLet $\\mathcal{X}$ be as in \\ref{bertinigeneral}. Define $\\mathcal{Z}=\\mathcal{X}\\times_XZ$ with $Z$ as above. There is a canonical map $\\phi:\\mathcal{N}\\otimes_{\\mathcal{O}_Z}\\mathcal{O}_{\\mathcal{Z}}\\to\\mathcal{E}\\otimes_{\\mathcal{O}_{\\mathcal{X}}}\\mathcal{O}_{\\mathcal{Z}}$, where $\\mathcal{N}$ is the conormal sheaf of $Z$ in $X\\times\\mathbb{A}^{(n+1)r}$. Set $\\mathcal{F}:=\\cok(\\phi)$.\n\n\\begin{corollary}\\label{bertini2}\nWith notation and assumptions as in \\ref{bertinigeneral} except $\\mathcal{Z}$ and $\\mathbb{K}:=k(U_i^{(j)})_{0\\leq i\\leq n, 1\\leq j\\leq r}$. If $f$ is unramified, then $\\mathcal{F}_{\\mathbb{K}}$ is a rank $d-r$ vector bundle quotient of $\\Omega^1_{Z_{\\mathbb{K}}\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{Z_{\\mathbb{K}}}}\\mathcal{O}_{\\mathcal{Z}_{\\mathbb{K}}}$ and $\\dim\\mathcal{Z}_{\\mathbb{K}}\\leq d-r$ if $\\mathcal{Z}_{\\mathbb{K}}\\neq\\emptyset$.\n\\end{corollary}\n\\begin{proof}\nThe proof is an easy induction on $r$ as in that of \\ref{bertini1} and we use the same notation. The case $r=1$ being \\ref{bertgeneral}, assume $r>1$. Let $\\mathcal{Z}'=\\mathcal{X}\\times_XZ'$ and $\\mathcal{Z}''=\\mathcal{X}\\times_XZ''$. Note that $\\mathcal{Z}''=\\mathcal{Z}_{\\mathbb{K}}$. We may assume $\\mathcal{Z}_{\\mathbb{K}}\\neq\\emptyset$. By induction on $r$ we have a rank $d-(r-1)$ vector bundle quotient of $\\Omega^1_{Z'\/\\mathbb{K}'}\\otimes_{\\mathcal{O}_{Z'}}\\mathcal{O}_{\\mathcal{Z}'}$ and since $\\mathcal{Z}_{\\mathbb{K}}\\neq\\emptyset$ we must also have $\\mathcal{Z}'\\neq\\emptyset$, hence $\\dim\\mathcal{Z}'\\leq d-(r-1)$ by induction hypothesis. Applying \\ref{bertgeneral}, we get a rank $d-r$ vector bundle quotient of $\\Omega^1_{Z''\/\\mathbb{K}}\\otimes_{\\mathcal{O}_{Z''}}\\mathcal{O}_{\\mathcal{Z}''}$. Since it is the quotient of $\\mathcal{E}\\otimes_{\\mathcal{O}_{\\mathcal{X}}}\\mathcal{O}_{\\mathcal{Z}''}$ by the submodule generated by the images of the differentials $\\sum_iU_i^{(j)}df_i$ for $j=1,...,r$, it is equal to $\\mathcal{F}_{\\mathbb{K}}$. Finally, since $\\mathcal{Z}''\\neq\\emptyset$ we have $\\dim\\mathcal{Z}''\\leq d-(r-1)-1=d-r$ by \\cite[I, 6.3]{bertini}, and this completes the induction.\n\\end{proof}\n\n\\section{Quasi-projective case}\nLet $T$ be an arbitrary irreducible affine noetherian scheme and let $f:X\\to\\mathbb{P}^n_T$ be a morphism of finite type.\n\n\\subsection{Grassmannians}\\label{setupgrass}\nLet $G_{\\mathbb{P}^n_T,r}:=\\Grass_{n+1-r}(\\Gamma(\\mathbb{P}^n_T,\\mathcal{O}_{\\mathbb{P}^n_T}(1)))$ be the grassmannian of quotients of $\\Gamma(\\mathbb{P}^n_T,\\mathcal{O}_{\\mathbb{P}^n_T}(1))\\simeq\\mathcal{O}_T^{n+1}$ which are locally free of rank $n+1-r$. Let $S$ be a $T$-scheme and $l_1,...,l_r\\in\\mathcal{O}_S(S)^{n+1}$. We say that the $l_i$ are \\emph{non-degenerate} if the element $\\wedge_{i=1}^rl_i\\in\\wedge_{i=1}^r\\mathcal{O}_S^{n+1}$ generates a direct summand. In this case it is clear that the $l_i$ induce a locally free quotient $Q$ of $\\mathcal{O}_S^{n+1}$ of rank $n+1-r$, i.e. an element in $G_{\\mathbb{P}^n_T,r}(S)$. Conversely, if $S$ is a $T$-scheme over which every vector bundle is free, then to give a quotient of $\\mathcal{O}_S^{n+1}$ or rank $n+1-r$ is the same as giving a free submodule $M\\subset\\mathcal{O}_S^{n+1}$ of rank $r$ which has a non-degenerate basis $l_1,...,l_r$. 
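For instance (a toy case, with the data chosen only to illustrate the definition): take $T=S=\\Spec(k[s])$, $n=2$ and $r=2$. The sections $l_1=X_0$ and $l_2=X_1$ are non-degenerate, since $l_1\\wedge l_2=X_0\\wedge X_1$ is part of a basis of $\\wedge^2\\mathcal{O}_S^3$, and the corresponding quotient $Q\\cong\\mathcal{O}_S\\cdot X_2$ is locally free of rank $1$, i.e. defines an element of $G_{\\mathbb{P}^2_T,2}(S)$. By contrast, $l_1=X_0$ and $l_2=sX_1$ are degenerate: $l_1\\wedge l_2=s(X_0\\wedge X_1)$ does not generate a direct summand ($l_2$ vanishes in the fibre at $s=0$), and indeed $\\mathcal{O}_S^3\/(l_1,l_2)$ has $s$-torsion, hence is not a locally free quotient of rank $1$. 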
In any case, if $l_1,...,l_r$ are non-degenerate then each of the $l_i$ determines a hyperplane in $\\mathbb{P}^n_S$ so that $L=\\mathbb{P}_S(Q)\\hookrightarrow\\mathbb{P}^n_S$ is the subscheme cut out by the $l_i$, $1\\leq i\\leq r$.\n\nFix a projective space $\\mathbb{P}^n_T$ and $r\\in\\mathbb{N}$ and write $G:=G_{\\mathbb{P}^n_T,r}$. Let $\\mathcal{Q}$ be the tautological quotient on $G$. There is a canonical closed immersion\n\\[ \\Grass_1(\\mathcal{Q})\\hookrightarrow\\Grass_1(\\mathcal{O}^{n+1}_{G})\\cong\\Grass_1(\\mathcal{O}^{n+1}_T)\\times_TG=\\mathbb{P}^n_T\\times_TG \\]\nwhose image is the set of points $(x,L)\\in\\mathbb{P}^n_T\\times_TG$ such that $x\\in L$.\nSo if $f:X\\to\\mathbb{P}^n_T$ is a morphism, then the functor\n\\[ \\mathcal{H}_X=\\{ (x,L)\\in X\\times_TG: f(x)\\in L \\} \\]\nis the scheme $\\mathcal{H}_X:=\\Grass_1(\\mathcal{Q})\\times_{\\mathbb{P}^n_T}X$. It comes equipped with a projection\n\\[ p:\\mathcal{H}_X\\to G. \\]\nIn particular, if $k$ is a field and $P\\in\\mathbb{P}^n(k)$, then the linear subvariety defined by a point $l\\in \\mathcal{H}_P(k)$ can be identified with the intersection of a set of $r$ non-degenerate $k$-hyperplanes through the point $P$, and the map $p:\\mathcal{H}_P\\to G$ is a closed immersion.\n\n\\begin{lemma}\\label{smoothincidencevariety}\nFix a fine log structure on $T$. If $X$ is log smooth over $T$, then the scheme $\\mathcal{H}_X$ (endowed with the inverse image log structure of $X$) is log smooth over $T$. In particular, $\\mathcal{H}_P$ is smooth over $k$.\n\\end{lemma}\n\\begin{proof}\nWe must check that for a nilpotent exact closed immersion $S_0\\hookrightarrow S$ of log $T$-schemes any element in $\\mathcal{H}_X(S_0)$ lifts to an element of $\\mathcal{H}_X(S)$, up to replacing $S$ by an \\'etale covering. Now we have\n\\[ \\mathcal{H}_X(S_0)=\\{(x,L)\\in X(S_0)\\times G(S_0):f(x)\\in L(S_0)\\} \\]\nand let us translate the meaning of a point $(x,L)\\in \\mathcal{H}_X(S_0)$ in terms of modules. The point $L\\in G(S_0)$ corresponds to a quotient\n\\[ \\mathcal{O}_{S_0}^{n+1}\\twoheadrightarrow Q_0 \\]\nwith $Q_0$ locally free of rank $n+1-r$, the point $f(x)\\in\\mathbb{P}^n(S_0)$ corresponds to a quotient\n\\[ \\mathcal{O}_{S_0}^{n+1}\\twoheadrightarrow P_0 \\]\nwith $P_0$ locally free of rank one, and finally the condition $(x,L)\\in \\mathcal{H}_X(S_0)$ means that the quotient onto $P_0$ factors\n\\[ \\mathcal{O}_{S_0}^{n+1}\\twoheadrightarrow Q_0\\to P_0. \\]\nNow, $X$ being log smooth, up to localizing for the \\'etale topology on $S_0$ we can find a point $x'\\in X(S)$ lifting $x$. Then $f(x')$ corresponds to a quotient\n\\[ \\mathcal{O}_{S}^{n+1}\\twoheadrightarrow P \\]\nwith $P$ locally free of rank one. Let $M:=\\ker\\left(\\mathcal{O}_{S}^{n+1}\\twoheadrightarrow P\\right)$, $N_0=\\ker\\left(\\mathcal{O}_{S_0}^{n+1}\\twoheadrightarrow Q_0\\right)$. Up to localizing on $S_0$ we may assume that $N_0$ is a free module on the basis $l_1,...,l_r$. Since $N_0\\subset M\\otimes_{\\mathcal{O}_{S}}\\mathcal{O}_{S_0}$, we can choose lifts $n_1,...,n_r\\in M$ of $l_1,...,l_r$. Then $n_1,...,n_r\\in\\mathcal{O}_{S}^{n+1}$ form part of a basis (by Nakayama's lemma), so if we let $N$ be the free submodule generated by the $n_i$, then $Q:=\\mathcal{O}_{S}^{n+1}\/N$ is a free module. We take $Q$ to be the lift of $Q_0$. By construction we have that the map $\\mathcal{O}_{S}^{n+1}\\twoheadrightarrow P$ factors\n\\[ \\mathcal{O}_{S}^{n+1}\\twoheadrightarrow Q\\to P. 
\\]\nThis means that $f(x')$ lies on the linear space $L'$ corresponding to $Q$, so the point $(x',L')\\in \\mathcal{H}_X(S)$ is a lift of $(x,L)$.\n\\end{proof}\n\n\\subsection{Reduction to the affine case}\nLet $M_1,...,M_{n+1\\choose r}$ be the $r\\times r$ minors of the matrix $\\left(U_i^{(j)}\\right)_{0\\leq i\\leq n,1\\leq j\\leq r}$. Consider the open $Y\\subset\\mathbb{A}^{(n+1)r}_T$ defined\n\\[ Y:=\\mathbb{A}^{(n+1)r}_T-\\bigcap_{i=1}^{{n+1\\choose r}}V(M_i) \\]\nwhere $V(M_i)\\subset\\mathbb{A}^{(n+1)r}_T=\\Spec(\\mathcal{O}_T[U_i^{(j)}])$ is the zero set of $M_i$. Then for any $T$-scheme $S$ the set $Y(S)$ is a subset of the set of affine homogeneous $S$-hyperplanes $l_1,...,l_r$ which are non-degenerate: after making choice $X_0,...,X_n$ of homogeneous coordinates an $S$-point\n\\[ (u_0^{(1)},...,u_n^{(1)},...,u_0^{(r)},...,u_n^{(r)})\\in Y(S) \\]\ncorresponds to the hyperplanes $\\sum_{i=0}^nu_i^{(j)}X_i=0$ for $1\\leq j\\leq r$. This identification depends on the choice of homogeneous coordinates. Fixing such a choice, let $Q$ be the quotient of $\\oplus_{i=0}^n\\mathcal{O}_YX_i$ by the $\\mathcal{O}_Y$-submodule generated by the elements $l_j:=\\sum_{i=0}^nU_i^{(j)}X_i$ for $1\\leq j\\leq r$. Since, by definition, at every point of $Y$ some $r\\times r$ minor of the matrix $(U_i^{(j)})$ is invertible it follows that $l_1\\wedge l_2\\wedge\\cdots\\wedge l_r$ forms part of a basis of $\\wedge^r(\\oplus_{i=0}^n\\mathcal{O}_YX_i)$, which implies that $Q$ is a vector bundle of rank $n+1-r$ on $Y$. This defines a morphism\n\\[ g:Y\\to G:=G_{\\mathbb{P}^n_T,r}. \\]\n\n\\begin{lemma}\\phantomsection\\label{dominant}\n\\begin{enumerate}[(i)]\n\\item The morphism $g:Y\\to G$ is surjective.\n\\item $G_t$ is irreducible for all $t\\in T$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nLet $S$ be the spectrum of a field and $l\\in G(S)$. Since $S$ is the spectrum of a field, there is a set $\\{l_1,...,l_r\\}$ of non-degenerate homogeneous $S$-hyperplanes such that $l$ is the quotient of $\\oplus_{i=0}^n\\mathcal{O}_SX_i$ by the submodule generated by $l_1,...,l_r$. For each $1\\leq j\\leq r$, write $l_j=\\sum_{i=0}^nu_i^{(j)}X_i$. These hyperplanes are non-degenerate if and only if some $r\\times r$-minor of the matrix $(u_i^{(j)})$ is non-zero. Hence the point $(u_i^{(j)})\\in\\mathbb{A}^{(n+1)r}(S)$ lies in $Y(S)$, and this proves (i). Since $Y_t$ is irreducible, (i) implies (ii).\n\\end{proof}\n\n\\begin{proposition}\\label{subspaceclosed}\nAssume $T$ is the spectrum of a field $k$. Let $\\mathcal{Q}$ denote the tautological quotient of $\\Gamma(\\mathbb{P}^n_k,\\mathcal{O}(1))\\otimes_k\\mathcal{O}_G$. If $S\\subset\\Gamma(\\mathbb{P}^n_k,\\mathcal{O}(1))$ is a subspace of dimension at most $n+1-r$ and $\\mathcal{Q}':=\\im(S\\otimes_k\\mathcal{O}_G\\to\\mathcal{Q})$, then there is a dense open $U\\subset G$ such that the map $S\\otimes_k\\mathcal{O}_U\\to\\mathcal{Q}'|_U$ is an isomorphism and $\\mathcal{Q}\/\\mathcal{Q}'|_U$ is a vector bundle on $U$.\n\\end{proposition}\n\\begin{proof}\nLet $X_0,....,X_m$ be a basis of $S$ and let $X_{m+1},...,X_n$ be elements of $\\Gamma(\\mathbb{P}^n_k,\\mathcal{O}(1))$ such that $X_0,...,X_n$ form a basis. Let $g:Y\\to G$ be the morphism determined by these homogeneous coordinates. It suffices to check that the map $S\\otimes_k\\mathcal{O}_G\\to\\mathcal{Q}'$ is an isomorphism over the generic point of $G$. 
By \\ref{dominant} (i), for this we may base change to the fraction field $k(Y)$ of $Y$.\n\nLet $R=\\Gamma(\\mathbb{P}^n_k,\\mathcal{O}(1))\/S$. Then we have a commutative diagram with exact rows\n\\[ \\xymatrix{\n0 \\ar[r] & S\\otimes_kk(Y) \\ar[r] \\ar[d] & \\oplus_{i=0}^n k(Y)X_i \\ar[r] \\ar[d] & R\\otimes_kk(Y) \\ar[r] \\ar[d] & 0 \\\\\n0 \\ar[r] & \\mathcal{Q}'\\otimes_{\\mathcal{O}_G}k(Y) \\ar[r] & \\mathcal{Q}\\otimes_{\\mathcal{O}_G}k(Y) \\ar[r] & (\\mathcal{Q}\/\\mathcal{Q}')\\otimes_{\\mathcal{O}_G}k(Y) \\ar[r] & 0 \\\\\n} \\]\nand, by definition of the morphism $g$, the kernel $\\kappa$ of the middle vertical map is generated by the elements $\\sum_{i=0}^{n}U_i^{(j)}X_i$ for $j=1,...,r$. If $0\\neq s\\in\\kappa\\cap (S\\otimes k(Y))$, then we may write\n\\[ s=\\sum_{j=1}^ra_j\\sum_{i=0}^{n}U_i^{(j)}X_i \\]\nand\n\\[ s=\\sum_{i=0}^mb_iX_i \\]\nfor some $a_j,b_i\\in k(Y)$. Equating we find\n\\[ \\sum_{j=1}^ra_jU_i^{(j)}=0 \\]\nfor $i=m+1,...,n$. This implies that the rank of the matrix $U:=(U_i^{(j)})_{m+1\\leq i\\leq n,1\\leq j\\leq r}$ is strictly less than $r$. Since $r\\leq n-m$ by assumption, all $r\\times r$-minors of $U$ vanish. But the subset of $Y$ where all $r\\times r$-minors of $U$ vanish is a proper closed subset, so this cannot happen. Therefore, $\\kappa\\cap (S\\otimes k(Y))=0$.\n\\end{proof}\n\nWe now fix a choice of homogeneous coordinates and let $\\mathbb{A}^n_T\\subset\\mathbb{P}^n_T$ be a standard open with induced coordinates $T_1,...,T_n$. Let $X'=X\\times_{\\mathbb{P}^n_T}\\mathbb{A}^n_T$ and define\n\\[ Z:=\\underline{\\Spec}_{X'}\\left(\\dfrac{\\mathcal{O}_{X'}[U_i^{(j)}]_{0\\leq i\\leq n, 1\\leq j\\leq r}}{\\left(U_0^{(j)}+\\sum_{i=1}^nU_i^{(j)}f_i\\right)_{j=1,...,r}}\\right)\n\\]\nwhere $f_i\\in\\Gamma(X',\\mathcal{O}_{X'})$ is the image of $T_i$ for $i=1,...,n$. Let $i:Z\\hookrightarrow X'\\times_T\\mathbb{A}^{(n+1)r}_T$ be the closed immersion defining $Z$ and $q:Z\\to\\mathbb{A}^{(n+1)r}_T$ be the second projection. ($Z$ is the scheme associated to $X'$ in \\ref{multiple}, except that we assumed there that $T$ was the spectrum of a field.)\n\n\\begin{proposition}\\label{reduction}\nWith our fixed choice of coordinates.\n\\begin{enumerate}[(i)]\n\\item There is a cartesian square\n\\[\\xymatrix{\nZ\\times_{\\mathbb{A}^{(n+1)r}_T}Y \\ar[d]^{q} \\ar[r] & \\mathcal{H}_{X'} \\ar[d]^{p} \\\\\nY \\ar[r]^{g} & G\n} \\]\n\\item The closed immersion $i\\times_{\\mathbb{A}^{(n+1)r}_T}Y:Z\\times_{\\mathbb{A}^{(n+1)r}_T}Y\\hookrightarrow X'\\times_TY$ deduced from $i$ is equal, via the identification $Z\\times_{\\mathbb{A}^{(n+1)r}_T}Y=\\mathcal{H}_{X'}\\times_GY$ of (i), to the closed immersion $\\mathcal{H}_{X'}\\times_GY\\hookrightarrow X'\\times_TY$ deduced from the closed immersion $\\mathcal{H}_{X'}\\hookrightarrow X'\\times_kG$ defining $\\mathcal{H}_{X'}$.\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nBy definition, $Z_Y:=Z\\times_{\\mathbb{A}^{(n+1)r}_T}Y$ is the closed subscheme of $X'\\times_TY$ defined by the ideal generated by $U_0^{(j)}+\\sum_{i=1}^{n}U_i^{(j)}f_i$ for $j=1,...,r$. So for any $T$-scheme $S$, every element of $Z_Y(S)$ determines points $x\\in X'(S)$ and $(u_0^{(1)},...,u_n^{(1)},...,u_0^{(r)},...,u_n^{(r)})\\in Y(S)$ such that the image of $x$ in $\\mathbb{A}^n_T$ lies on the (non-degenerate) hyperplanes given by $u_0^{(j)}+\\sum_{i=1}^{n}u_i^{(j)}T_i=0$ for $1\\leq j\\leq r$, and conversely. This is obviously the same thing as $\\left(\\mathcal{H}_{X'}\\times_{G}Y\\right)(S)$. 
Since this correspondence is clearly functorial in $S$, this proves (i).\n\nFor (ii), an element of $P\\in Z_Y(S)$ determines points $x\\in X'(S)$ and $(u_0^{(1)},...,u_n^{(1)},...,u_0^{(r)},...,u_n^{(r)})\\in Y(S)$, and then $i(P)=(x,(u_0^{(1)},...,u_n^{(1)},...,u_0^{(r)},...,u_n^{(r)}))$. This is clearly equal to the image of $P$ under the composition $Z_Y=\\mathcal{H}_{X'}\\times_GY\\to X'\\times_TY$.\n\\end{proof}\n\nThis allows us to show the following Bertini theorem for log smoothness by reducing to the affine case proven before. In the following we assume $T=\\Spec(k)$ with $k$ a field.\n\n\\begin{corollary}\\label{bertini6}\nAssume $T$ has the structure of a log point. Let $f:X\\to\\mathbb{P}^n_k$ be unramified with $X$ a sharp log smooth $k$-scheme. Endow $\\mathcal{H}_X$ with the inverse image log structure of $X$ via the second projection $\\pr_2:\\Grass_1(\\mathcal{Q})\\times_{\\mathbb{P}^n_T}X\\to X$, and $G$ with the inverse image log structure from the structure morphism to $T$. There is a dense open $U\\subset G$ such that $p|_U:\\mathcal{H}_X|_{U}\\to U$ is log smooth.\n\\end{corollary}\n\\begin{proof}\nFor each standard open $\\mathbb{A}^n_k\\subset\\mathbb{P}^n_k$, let $X'=X\\times_{\\mathbb{P}^n_k}\\mathbb{A}^n_k$, so that we have a commutative diagram as in \\ref{reduction}. Endow $\\mathbb{A}^{(n+1)r}_T$ with the inverse image log structure of $T$ and $Z$ with the inverse image log structure from $X$ so that we are in the situation of \\ref{bertini1}. It is clear that with these log structures the diagram of \\ref{reduction} (i) is a cartesian square of log schemes. Hence by \\ref{bertini1} and \\ref{dominant}, the generic fibre of $\\mathcal{H}_{X'}\\to G$ becomes log smooth after the field extension $k(G)\\to k(Y)$. Since the log structures on $k(Y)$ and $k(G)$ are both induced by $Q\\to k$, it now follows easily from \\ref{jacobi} that the generic fibre of of $\\mathcal{H}_{X'}\\to G$ is itself log smooth. So by \\ref{generic} there is a dense open subset $U'\\subset G$ such that the restriction of $\\mathcal{H}_{X'}\\to G$ to $U'$ is log smooth. Hence we can take $U$ to be intersection of the (finitely many) $U'$ for each standard open $\\mathbb{A}^n_k\\subset\\mathbb{P}^n_k$.\n\\end{proof}\n\nWe also have the quasi-projective version of \\ref{bertini2}.\n\n\\begin{corollary}\\label{bertini7}\nLet $f:X\\to\\mathbb{P}^n_k$ be unramified and let $m:\\mathcal{X}\\to X$ be a morphism of finite type with $\\dim\\mathcal{X}\\leq d$ such that there is a surjective map of $\\mathcal{O}_{\\mathcal{X}}$-modules\n\\[ \\pi:m^*\\Omega^1_{X\/k}\\twoheadrightarrow\\mathcal{E} \\]\nwhere $\\mathcal{E}$ is vector bundle of rank $d$ on $\\mathcal{X}$. If $I$ denotes the ideal sheaf of $\\mathcal{H}_X$ in $X\\times_kG$, and\n\\[ \\phi:I\/I^2\\otimes_{\\mathcal{O}_X}\\mathcal{O}_{\\mathcal{X}}\\to\\mathcal{E}\\otimes_{\\mathcal{O}_{X}}\\mathcal{O}_{\\mathcal{H}_X} \\]\nthe composition of the natural map $\\left(I\/I^2\\to\\Omega^1_{X\/k}\\otimes_{\\mathcal{O}_X}\\mathcal{O}_{\\mathcal{H}_X}\\right)\\otimes_{\\mathcal{O}_X}\\mathcal{O}_{\\mathcal{X}}$ with the map $\\pi\\otimes_{\\mathcal{O}_X}\\mathcal{O}_{\\mathcal{H}_X}$, then there is a dense open $U\\subset G$ such that $\\cok(\\phi)|_U$ is a vector bundle of rank $d-r$ on $\\mathcal{H}_{\\mathcal{X}}\\times_GU$.\n\\end{corollary}\n\\begin{proof}\nSince $G$ is irreducible it suffices to show that the restriction of $\\cok(\\phi)$ to $\\mathcal{H}_{\\mathcal{X}}\\times_G\\Spec(k(G))$ is a vector bundle of rank $d-r$. 
We may check this locally and therefore replace $X$ by $X'$. By \\ref{reduction} we may reduce to the situation of \\ref{multiple} and apply \\ref{bertini2}.\n\\end{proof}\n\n\\section{Hyperplanes through a point over a discrete valuation ring}\nLet $V$ be a complete discrete valuation ring with uniformizer $\\pi$, algebraically closed residue field $k=V\/\\pi V$, fraction field $K=V[1\/\\pi]$, and set $T:=\\Spec(V)$. We will often make the following abuse of notation: since a point $L\\in G_{\\mathbb{P}^n_T,r}(K)=G_{\\mathbb{P}^n_T,r}(T)$ can be identified with a linear subvariety of $\\mathbb{P}^n_T$, for a subscheme $X\\subset \\mathbb{P}^n_T$ we write $X\\cap L$ for the scheme-theoretic intersection of $X$ with the linear subvariety defined by $L$.\n\n\\subsection{Specialization of points}\\label{special}\nLet $X$ be a separated $T$-scheme of finite type and $x\\in X_K$ be a point. We define the \\emph{specialization} $\\spe_X(x)$ of $x$ to be $\\overline{\\left\\{x\\right\\}}\\cap X_k$, where $\\overline{\\left\\{x\\right\\}}$ is closure of $x$ in $X$. We simply write $\\spe(x)$ for $\\spe_X(x)$ when no ambiguity can occur.\n\nWe gather here some facts on specialization.\n\n\\begin{lemma}\\label{closed}\nIf $x\\in X_K$ is a closed point, then either $\\spe(x)=\\emptyset$ or $\\spe(x)$ is a closed point. Conversely, if $\\spe(x)$ is a closed point, then $x$ is closed.\n\\end{lemma}\n\\begin{proof}\nSince $X$ is a separated $T$-scheme of finite type, by a theorem of Nagata there is a proper $T$-scheme $\\bar{X}$ and an open immersion $X\\hookrightarrow\\bar{X}$. Let $x\\in X_K$ be closed. Then there is a finite extension $K\\subset L$ such that $x\\in X(L)\\subset\\bar{X}(L)$. Then $x$ gives a point in $s\\in\\bar{X}(V_L)$, where $V_L$ is the normalization of $V$ in $L$. Since both $\\Spec(V_L)$ and $\\bar{X}$ are proper over $T$, $s(\\Spec(V_L))$ is closed in $\\bar{X}$. It follows that the closure of $x$ in $\\bar{X}$ is equal to $s(\\Spec(V_L))$. So $\\spe(x)$ is isomorphic to a subscheme of $\\Spec(V_L\\otimes_Vk)$, i.e. is a closed point or is empty.\n\nFor the converse, let $U=\\Spec(A)\\subset X$ be an affine open neighbourhood of $\\spe(x)\\in X$. If $x\\notin U$, then $x\\in X-U$, so $\\spe(x)\\in\\overline{\\left\\{x\\right\\}}\\subset X-U$, which is absurd. Hence $x\\in U$, and $\\overline{\\left\\{x\\right\\}}\\cap U$ corresponds to a quotient $A\\to B$ with $B$ reduced. Then $\\Spec(B)$ is irreducible (otherwise $x$ lies on an irreducible component which is a proper closed subset of $\\overline{\\left\\{x\\right\\}}$), so $\\Spec(B)$ is integral. Moreover, the reduction of $B\\otimes_{V}k$ is $k$. Since $B\\otimes_VK\\neq 0$, by \\cite[IV, 14.3.10]{ega} we have $\\dim(B\\otimes_{V}K)=0$. By Noether normalization $B\\otimes_{V}K$ is a finite $K$-algebra, so since $B\\otimes_VK$ is a domain it is a finite field extension of $K$.\n\\end{proof}\n\n\\begin{lemma}\\label{gen}\nLet $x\\in X_k$ be a closed point. The set $\\spe^{-1}(x)$ is equal to the set of closed points of $\\Spec(\\mathcal{O}_{X,x}\\otimes_VK)$, viewed as a subset of $X_K$.\n\\end{lemma}\n\\begin{proof}\nLet $U$ be an open neighbourhood of $x\\in X_k$. If $y\\in X_K$ is a closed point such that $\\spe(y)=x$ and $y\\notin U$, then $x\\in\\overline{\\left\\{y\\right\\}}\\not\\subset U$, a contradiction. Hence $U$ contains $\\spe^{-1}(x)$. 
Since $\\Spec(\\mathcal{O}_{X,x}\\otimes_VK)=\\cap_{U\\ni x}U_K$, where the intersection is taken over all open neighbourhoods of $x$, it follows that $\\Spec(\\mathcal{O}_{X,x}\\otimes_VK)$ contains $\\spe^{-1}(x)$.\n\nFor the converse, pick a closed point of $y\\in\\Spec(\\mathcal{O}_{X,x}\\otimes_VK)$. It gives a quotient $\\mathcal{O}_{X,x}\\otimes_VK\\to L$ where $L=K(y)$ is the residue field at $y$. We first claim that $L$ is a finite extension of $K$. Let $C$ be the image of $\\mathcal{O}_{X,x}$ in $L$. It is a local ring with residue field $k(x)=k$ satisfying $C[1\/\\pi]=L$. So by a theorem of Artin-Tate (\\cite[0, 16.3.3]{ega}) $C$ is a local domain of dimension at most 1. Since $V$ is universally catenary, by \\cite[IV, 5.6.4]{ega} we have $\\dim V+\\td(L\/K)=\\dim C+\\td(k(x)\/k)\\leq 1$, hence $\\td(L\/K)=0$ as claimed.\n\nNow for every small enough affine open $\\Spec(A)=U\\subset X$ containing $x$, $y$ induces a quotient $A\\otimes_VK\\to L$. We claim that the image $V'$ of $A$ in $L$ has special fibre over $V$ consisting of a single point. Note that $\\pi$ is not a unit in $V'$ (otherwise $\\pi$ would be a unit in $C$). Now, since $V'$ is an integral domain and $V$ is henselian, it suffices to show that $V'$ is a finite $V$-module, for then we know that $\\Spec(V'\\otimes_Vk)$ is connected and finite, hence a single point. To see this, first note that since $V'$ is a one-dimensional domain ($\\dim(V'_{\\mathfrak{p}})\\leq\\dim(V)+\\td(L\/K)=1$ for any prime ideal $\\mathfrak{p}\\subset V'$), $V'\\otimes_{V}k$ is a zero-dimensional $k$-algebra of finite type, so by Noether normalization it is finite. Lifting a $k$-basis $e_1,...,e_n$ of $V'\\otimes_{V}k$ we see that $V'\\subset \\sum_{i=1}^nVe_i+\\pi V'$, hence $V'\\subset\\sum_{i=1}^nVe_i+\\pi^mV'$ for all $m\\geq 1$. Since $V'$ is $\\pi$-adically separated (by Krull's Intersection Theorem) and $V$ complete, this implies that $V'\\subset\\widehat{V'}=\\sum_{i=1}^nVe_i$, whence $V'=\\sum_{i=1}^nVe_i$. This proves the claim. Note that the specialization $\\spe(y)$ of $y$ in $U$ is given by the quotient $A\\to V'\\otimes_Vk$, hence $\\spe(y)$ is a closed point in $U_k$. Since this holds for every small enough neighbourhood $U$ of $x$, it follows that $\\spe(y)=x$, as required.\n\\end{proof}\n\n\\begin{corollary}\\label{sp}\nLet $x\\in X_k$ be closed. An open subset $U_0\\subset X_K$ is of the form $U_0=U_{K}$ for some open neighbourhood $U$ of $x$ if and only if $\\spe^{-1}_X(x)\\subset U_0$.\n\\end{corollary}\n\\begin{proof}\nWe first show the if part. Let $Y_0$ be the complement of $U_0$ in $X_{K}$ and let $Y$ be its closure in $X$. If $x\\in Y$, then by the last lemma any closed point of $y\\in\\Spec(\\mathcal{O}_{Y,x}\\otimes_{V}K)$ is a closed point of $Y_0$ satisfying $\\spe_X(y)=x$. Since $\\spe^{-1}_X(x)\\subset U_0$ by assumption, this is absurd. So $U:=X-Y$ contains $x$. For the converse, it suffices to note that if $x\\in U$, then by the last lemma $\\spe^{-1}_X(x)\\subset U$.\n\\end{proof}\n\n\\subsection{Hyperplanes through a point}\\label{hyperplanespoint}\nLet $P\\in\\mathbb{P}^n(k)$ and let $I_P\\subset\\mathcal{O}_{\\mathbb{P}^n_T}$ be its ideal sheaf. For a sheaf $\\mathcal{F}$ on $\\mathbb{P}^n_T$ we write $\\Gamma(\\mathcal{F}):=\\Gamma(\\mathbb{P}^n_T,\\mathcal{F})$ to simplify. 
Then for any integer $d$ we have an exact sequence\n\\[ 0\\to I_P(d)\\to\\mathcal{O}_{\\mathbb{P}^n_T}(d)\\to k(P)\\to 0 \\]\nso taking global sections we get a left-exact sequence\n\\begin{equation}\\label{exseq:globalideal}\n0\\to \\Gamma(I_P(d))\\to\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(d))\\to k(P)\\to 0.\n\\end{equation}\n\n\\begin{lemma}\\label{hyperplanelocus}\nIf $d\\geq 0$, then the sequence \\ref{exseq:globalideal} is exact.\n\\end{lemma}\n\\begin{proof}\nIt suffices to show $\\Gamma(I_P(d))\\neq\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(d))$. For $d=0$ this is obvious, so assume $d\\geq 1$. If $\\Gamma(I_P(d))=\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(d))$ is bijective, then every hypersurface of degree $d$ vanishes at $P$, equivalently every hyperplane in $\\mathbb{P}(\\Gamma(\\mathcal{O}(d)))$ vanishes on the image of $P$. But in a suitable choice of coordinates we have $P=(1:0:\\cdots:0)\\in\\mathbb{P}(\\Gamma(\\mathcal{O}(d)))\\otimes_Vk$ and the hyperplane given by the first coordinate does not vanish at $P$. So $\\Gamma(I_P(d))\\neq\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(d))$.\n\\end{proof}\n\nSince $H^1(\\mathbb{P}^n_T,\\mathcal{O}_{\\mathbb{P}^n_T}(d))=0$ for $d\\geq 0$ we immediately find\n\n\\begin{corollary}\n$H^1(\\mathbb{P}^n_T,I_P(d))=0$ for $d\\geq 0$.\n\\end{corollary}\n\nSince $I_P(d)$ is $\\pi$-torsion free, taking cohomology in the exact sequence\n\\[ 0\\to I_P(d)\\overset{\\cdot\\pi}{\\to} I_P(d)\\to I_P(d)\\otimes_Vk\\to 0 \\]\nwe obtain\n\n\\begin{corollary}\\label{corhyperplanelocus}\n$\\Gamma(I_P(d))\\otimes_Vk=\\Gamma(I_P(d)\\otimes_Vk)$ for $d\\geq 0$.\n\\end{corollary}\n\nFor $d\\geq 0$ we can tensor the exact sequence \\ref{exseq:globalideal} with $k$ to obtain exact sequences\n\\begin{equation}\\label{exseq:globalideal1} 0\\to k(P)\\to\\Gamma(I_P(d))\\otimes_Vk\\to\\Gamma(I_P(d)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k})\\to 0\n\\end{equation}\n\\begin{equation}\\label{exseq:globalideal2} 0\\to \\Gamma(I_P(d)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k})\\to \\Gamma(\\mathcal{O}_{\\mathbb{P}^n_k}(d))\\to k(P)\\to 0.\n\\end{equation}\n\n\\begin{proposition}\\label{global}\n$I_P(d)$ and $I_P(d)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k}$ are generated by global sections for $d\\geq 1$.\n\\end{proposition}\n\\begin{proof}\nWe first show that $I_P(d)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k}$ is generated by global sections for $d\\geq 1$. By \\cite[Lecture 14, Prop.]{mumford} it suffices to check that $I_P\\cdot\\mathcal{O}_{\\mathbb{P}^n_k}$ is $d$-regular, i.e. $H^i(\\mathbb{P}^n_k,I_P\\cdot\\mathcal{O}_{\\mathbb{P}^n_k}(d-i))=0$ for $i\\geq 1$. To see this, note that since $d\\geq 1$, by \\ref{exseq:globalideal2} for $i\\geq 1$ we have $H^i(\\mathbb{P}^n_k,I_P\\cdot\\mathcal{O}_{\\mathbb{P}^n_k}(d-i))=H^i(\\mathbb{P}^n_k,\\mathcal{O}_{\\mathbb{P}^n_k}(d-i))$, and by Serre's computation of the cohomology of projective space we know that this is zero for $0}[l] \\ar[r]^{\\iota^{-1}} & \\spe^{-1}_G(\\mathcal{H}_P) \\ar@{^(->}[r] \\ar[d]^{\\spe_G} & G_K \\ar[d]^{\\spe_G} \\\\\nG'_k & G'_0\\ar@{_(->}[l] \\ar[r]^{\\rho} & \\mathcal{H}_P\\ar@{^(->}[r] & G_k\n} \\]\n\\end{lemma}\n\\begin{proof}\n(i) : Consider the sequence \\ref{exseq:globalideal2}. Any quotient of $\\Gamma(I_P(2)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k})$ yields a quotient of $\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_k}(2))$ which has $\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_k}(2))\/\\Gamma(I_P(2)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k})\\cong k$ as quotient, and conversely. 
This is exactly statement (i).\n\n(ii) : This follows from the isomorphism $\\Gamma(I_P(2))\\otimes_VK=\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(2))\\otimes_VK$.\n\n(iii) : Let $g\\in \\Gamma(I_P(2))\\otimes_Vk$ be a generator of the image of the map $k(P)\\to\\Gamma(I_P(2))\\otimes_Vk$ of sequence \\ref{exseq:globalideal1}, and let $\\mathcal{Q}'$ be the tautological quotient on $G'_k$. Consider the commutative diagram with exact rows\n\\[ \\xymatrix{\n0 \\ar[r] & \\mathcal{O}_{G'_k}g \\ar[r] \\ar[d] & \\Gamma(I_P(2))\\otimes_V\\mathcal{O}_{G'_k} \\ar[d] \\\\\n0 \\ar[r] & \\mathcal{L} \\ar[r] & \\mathcal{Q}'\n} \\]\nwhere $\\mathcal{L}\\subset \\mathcal{Q}'$ is the image of $\\mathcal{O}_{G'_k}g$. Note that $\\mathcal{L}\\neq 0$ since the image of $g$ is non-zero in a generic quotient of $\\Gamma(I_P(2))\\otimes_Vk$. Since $G'_k$ is integral and $\\mathcal{L}$ is a torsion-free quotient of $\\mathcal{O}_{G'_k}g$, it follows easily that $\\mathcal{L}=\\mathcal{O}_{G'_k}g$. Thus, we get a commutative diagram with exact rows\n\\[ \\xymatrix{\n0 \\ar[r] & \\mathcal{O}_{G'_k}g \\ar[r] \\ar@{=}[d] & \\Gamma(I_P(2))\\otimes_V\\mathcal{O}_{G'_k} \\ar[r] \\ar[d] & \\Gamma(I_P(2)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k})\\otimes_k\\mathcal{O}_{G'_k} \\ar[r] \\ar[d] & 0 \\\\\n0 \\ar[r] & \\mathcal{O}_{G'_k}g \\ar[r] & \\mathcal{Q}' \\ar[r] & \\mathcal{Q}'' \\ar[r] & 0\n} \\]\nand there is a dense open $G'_0\\subset G'_k$ over which the quotient $\\mathcal{Q}''$ is locally free and therefore defines a map $\\rho:G'_0\\to G_{\\mathbb{P}(\\Gamma(I_P(2)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k})),r}=\\mathcal{H}_P$. Any choice of splitting of the exact sequence \\ref{exseq:globalideal1} provides a section of this map, so this map is surjective and this proves (iii).\n\n(iv) : Let $K_L$ denote the residue field of $L$ and $V_L$ the normalization of $V$ in $K_L$. Suppose $L$ is given by a subspace $S\\subset\\Gamma(I_P(2))\\otimes_VK_L$. We claim that the condition $\\spe_{G'}(L)\\in G'_0$ is equivalent to $S\\cap(\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(2))\\otimes_VV_L)=S\\cap(\\Gamma(I_P(2))\\otimes_VV_L)$. To see this, setting $Q:=\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(2))\\otimes_VV_L\/(S\\cap \\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(2))\\otimes_VV_L)$ and $Q':=\\Gamma(I_P(2))\\otimes_VV_L\/(S\\cap\\Gamma(I_P(2))\\otimes_VV_L)$ we have a commutative diagram with exact rows\n\\[ \\xymatrix{\n0 \\ar[r] & \\Gamma(I_P(2))\\otimes_VV_L \\ar[r] \\ar[d] & \\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(2))_VV_L \\ar[r] \\ar[d] & k(P)\\otimes_VV_L \\ar[d] \\ar[r] & 0 \\\\\n0 \\ar[r] & Q' \\ar[r] & Q \\ar[r] & k(P)\\otimes_VV_L\/M \\ar[r] & 0\n} \\]\nwhere $M=S\\cap(\\Gamma(\\mathcal{O}_{\\mathbb{P}^n_T}(2))\\otimes_VV_L)\/S\\cap(\\Gamma(I_P(2))\\otimes_VV_L)$. Taking tensor product $\\otimes_Vk$ we get a commutative diagram with exact rows\n\\[ \\xymatrix{\n0 \\ar[r] & k(P)\\otimes_VV_L \\ar[r] \\ar[d] & \\Gamma(I_P(2))\\otimes_Vk\\otimes_VV_L \\ar[d] \\\\\n0 \\ar[r] & k(P)\\otimes_VV_L\/M \\ar[r] & Q'\\otimes_Vk\n} \\]\nIn particular, the map $k(P)\\otimes_VV_L\\to Q\\otimes_Vk$ is injective if and only if $M=0$. By definition of $G'_0$ (see the proof of (iii)), this map is injective if and only if the point $l\\in G'(k\\otimes_VV_L)$ determined by $Q'\\otimes_Vk$ lies $G'_0$. Since $\\spe_{G'}(L)$ is equal to the support of $l$, we see that $M=0$ if and only if $\\spe_{G'}(L)\\in G'_0$ as claimed. 
In particular, since we clearly have $\\spe_{G}(\\iota^{-1}(L))\\in\\mathcal{H}_P$ if and only if $M\\neq k(P)\\otimes_VV_L$, $M=0$ implies $\\spe_{G}(\\iota^{-1}(L))\\in\\mathcal{H}_P$ i.e. (iv).\n\n(v) : With notation as in the proof of (iv), note that in the case $K_L=K$ we have $M=0$ if and only if $M\\neq k(P)$, hence $\\spe_{G'}(L)\\in G_0'$ if and only if $\\spe_{G}(\\iota^{-1}(L))\\in\\mathcal{H}_P$ as required.\n\\end{proof}\n\n\\begin{lemma}\\label{exceptionaldivisorintersection}\nWith notation as in \\ref{closedimmersion} and \\ref{zariskiopen}. Assume further that $X$ is regular of dimension $d+1$ at $P$. If $d\\geq r$, then there is a dense open $U\\subset\\mathcal{H}_P$ such that for each $L\\in G(T)$ with $\\spe_G(L)\\in U$, $X\\cap L$ is regular at $P$ of dimension $d+1-r$.\n\\end{lemma}\n\\begin{proof}\nLet $\\mathfrak{m}\\subset\\mathcal{O}_{X}$ be the ideal sheaf of $P$. The closed immersion $E\\hookrightarrow\\mathbb{P}(\\Gamma(I_P(2)))$ of \\ref{closedimmersion} is by definition that arising from the line bundle quotient\n\\[ \\Gamma(I_P(2))\\otimes_V\\mathcal{O}_{E}\\to(\\mathfrak{m}(2)\\cdot\\mathcal{O}_{\\tilde{X}})\\otimes_{\\mathcal{O}_{\\tilde{X}}}\\mathcal{O}_E. \\]\nSince $E$ is a linear subvariety of $\\mathbb{P}(\\Gamma(I_P(2)))\\otimes_Vk$ under this embedding, it follows that the induced map\n\\[ \\sigma:\\Gamma(I_P(2))\\to\\Gamma(E,\\mathfrak{m}(2)\\cdot\\mathcal{O}_{\\tilde{X}}) \\]\nis surjective and the closed immersion $E\\hookrightarrow\\mathbb{P}(\\Gamma(I_P(2)))$ is equal to the morphism $\\Proj(\\Sym_V\\sigma)$.\n\nLet $S'=\\ker(\\sigma\\otimes_Vk)$ and $S:=\\im(S\\subset\\Gamma(I_P(2))\\otimes_Vk\\to\\Gamma(I_P(2)\\cdot\\mathcal{O}_{\\mathbb{P}^n_k}))$. Let $N=\\dim\\Gamma(I_P(2))\\otimes_Vk$. Note that $\\dim S'=N-(d+1) 0$ denote the throughput from the origin $\\mathrm{o}$ to the destination $\\mathrm{d}$, and $\\nu = m(\\delta^{(\\mathrm{o})}-\\delta^{(\\mathrm{d})}) \\in \\mathds{R}^\\mathrm{N}$. Let $\\mathcal{P}=\\{1,\\cdots,\\mathrm{P}\\}$ denote the set of paths from $\\mathrm{o}$ to $\\mathrm{d}$. \nAn admissible path flow is a vector $z \\in \\mathds{R}^{\\mathrm{P}}_+$ satisfying the mass constraint\n\\begin{equation}\n\t\\label{constraints}\n\\mathbf{1}^T z=m. \n\\end{equation}\nLet $A \\in \\mathds{R}^{\\mathrm{E} \\times \\mathrm{P}}$ denote the link-path incidence matrix, with entries $A_{ep}=1$ if link $e$ belongs to the path $p$ or $0$ otherwise. The path flow induces a unique link flow $f \\in \\mathds{R}^\\mathrm{E}$ via\n\\begin{equation}\n\tf=Az.\n\t\\label{incidence_path}\n\\end{equation}\nEvery link $e$ is endowed with a non-negative and strictly increasing delay function $\\tau_e : \\mathds{R}_+ \\to \\mathds{R}_+$. We assume that the delay functions are in the form $\\tau_e(f_e) = \\tau_e(0) + a_e(f_e)$, where $\\tau_e(0)$ is the travel time of the link when there is no flow on it, and $a_e(f_e)$ describes congestion effects, with $a_e(0)=0$.\nThe cost of path $p$ under flow distribution $f$ is the sum of the delay functions of the links belonging to $p$, i.e.,\n\\begin{equation}\n\tc_p(f)=\\sum_{e \\in \\mathcal{E}}A_{ep}\\tau_e(f_e).\n\t\\label{cost_path}\n\\end{equation}\n\\begin{definition}[Routing game]\n\tA \\emph{routing game} is a triple $(\\mathcal{G}, \\tau, \\nu)$.\n\\end{definition}\n\nA Wardrop equilibrium is a flow distribution such that no one has incentive in changing path. 
More precisely, we have the following definition.\n\\begin{definition}[Wardrop equilibrium]\n\tA path flow $z^*$, with associated link flow $f^*=Az^*$, is a Wardrop equilibrium if for every path $p$\n\t\\begin{equation*}\n\t\tz^*_p>0 \\implies c_p(f^*)\\le c_q(f^*), \\quad \\forall q \\in \\mathcal{P}.\n\t\\end{equation*}\n\\end{definition}\\medskip\n\nLet $B \\in \\mathds{R}^{\\mathrm{N} \\times \\mathrm{E}}$ denote the node-link incidence matrix, with entries $B_{ne}=1$ if $n=\\xi(e)$, $B_{ne}=-1$ if $n=\\theta(e)$, or $B_{ne}=0$ otherwise.\nIt is proved in \\cite{beckmann1956studies} that a link flow $f^*$ is a Wardrop equilibrium of a routing game if and only if\n\\begin{equation}\n\t\\begin{aligned}\n\t\tf^* = \\ & \\underset{f \\in \\mathds{R}_+^\\mathrm{E}, Bf=\\nu}{\\arg\\min}\n\t\t& & \\sum_{e \\in \\mathcal{E}} \\int_0^{f_e} \\tau_e(s) ds,\n\t\\end{aligned}\n\t\\label{convex_prob}\n\\end{equation}\nwhere $Bf=\\nu$ is the projection of \\eqref{constraints} on the link set.\nSince the delay functions are assumed strictly increasing, the objective function in \\eqref{convex_prob} is strictly convex and the Wardrop equilibrium $f^*$ is unique. \n\\begin{definition}[Social cost]\n\tThe social cost of a routing game is the total travel time at the equilibrium, i.e.,\n\t\\begin{equation*}\n\t\tC^{(0)}=\\sum_{e \\in \\mathcal{E}} f_e^*\\tau_e(f_e^*).\n\t\\end{equation*}\n\\end{definition}\n\\medskip\nThe social cost can be interpreted as a measure of performance by a planner that aims at minimizing the overall congestion on the transportation network.\nWe now provide an equivalent characterization of the social cost of a routing game. To this end, let $\\lambda^*$ and $\\gamma^*$ denote the Lagrangian multipliers associated to $f^*\\ge \\mathbf{0}$ and $Bf=\\nu$, respectively.\nThe KKT conditions of \\eqref{convex_prob} read:\n\\begin{align}\n\t\\label{kkt2}\n\t\\begin{cases}\n\t\t\\tau_e(f_e^*)+\\gamma_{\\theta(e)}^*-\\gamma^*_{\\xi(e)} - \\lambda^*_{e}=0 & \\forall e \\in \\mathcal{E},\\\\\n\t\t\\sum_{e \\in \\mathcal{E}: \\theta(e)=i} f_e - \\sum_{e \\in \\mathcal{E}: \\xi(e)=i} f_e + \\nu_i=0 & \\forall i \\in \\mathcal{N},\\\\\n\t\t\\lambda_{e}^*f_{e}^*=0 & \\forall e \\in \\mathcal{E},\\\\\n\t\t\\lambda_{e}^*\\ge 0 & \\forall e \\in \\mathcal{E},\\\\\n\t\tf_{e}^*\\ge 0 & \\forall e \\in \\mathcal{E}.\n\t\\end{cases}\n\\end{align}\nThe third condition, known as complementary slackness, implies that if $\\lambda_e^*>0$, then $f_e^*=0$, i.e., link $e$ is not used at the equilibrium. We let $\\mathcal{E}_+$ denote the set of the links $e$ such that $\\lambda_e^*>0$. The next lemma shows that the social cost may be characterized in terms of the Lagrangian multiplier $\\gamma^*$.\n\\begin{lemma}\n\t\\label{cost}\n\tLet $(\\mathcal{G},\\tau,\\nu)$ denote a routing game. Then, \n\t\\begin{equation*}\n\t\t\\begin{gathered}\n\t\t\tC^{(0)}=m(\\gamma^*_\\mathrm{o}-\\gamma_\\mathrm{d}^*).\n\t\t\\end{gathered}\n\t\\end{equation*}\n\\end{lemma}\\medskip\n\\begin{proof}\n\tSee Appendix~\\ref{app:proofs}.\n\\end{proof}\n\n\n\n\n\nWe consider a NDP where the planner can improve the delay functions of the network with the goal of minimizing a combination of the social cost after the intervention and the cost of the intervention itself. 
Specifically, let $u \\in \\mathds{R}_+^{\\mathrm{E}}$ denote the intervention vector, with corresponding delay functions\n$$\n\\tau_e^{(u_e)}(f_e) = \\tau_e(0) + \\frac{a_e(f_e)}{1+u_e}.\n$$\nThis type of intervention may correspond for instance to enlarging some roads of the network. We let $h_e : [0,+\\infty) \\to [0,+\\infty)$ denote the cost associated to the intervention on link $e$. The goal of the planner is to minimize a combination of the social cost and the intervention cost, where $\\alpha \\ge 0$ is the trade-off parameter. More precisely, by letting $f^*(u)$ denote the Wardrop equilibrium corresponding to intervention $u$, the NDP reads as follows.\n\\begin{problem}\n\t\\label{prob:general}\n\tLet $(\\mathcal{G},\\tau,\\nu)$ be a routing game, and $\\alpha\\ge0$ be the trade-off parameter. The goal is to select $u^*$ such that\n\\begin{equation}\n\t\\label{eq:ndp}\n\t\tu^* \\in \\underset{u \\in \\mathds{R}^\\mathrm{E}_+}{\\arg\\min} \\\n\t\t\\sum_{e \\in \\mathcal{E}} f^*_e(u) \\tau_e^{(u_e)}(f^*_e(u)) + \\alpha h(u),\n\\end{equation}\nwhere $h(u)=\\sum_e h_e(u_e)$, and\n\\begin{equation}\n\t\\begin{aligned}\n\tf^*(u) = \\ & \\underset{f \\in \\mathds{R}_+^\\mathrm{E}, Bf=\\nu}{\\arg\\min}\n\t& & \\sum_{e \\in \\mathcal{E}} \\int_0^{f_e} \\tau_e^{(u_e)}(s) ds.\n\t\\end{aligned}\n\\label{eq:f(k)}\n\\end{equation}\n\\end{problem}\n\\medskip\n\\begin{remark}\nWe stress the fact that Problem~\\ref{prob:general} is bi-level, in the sense that the planner optimizes the intervention $u$ according to a cost function that depends on the Wardrop equilibrium $f^*(u)$, which in turn is the solution of the optimization problem \\eqref{eq:f(k)}, whose objective function depends on the intervention $u$ itself.\n\\end{remark}\n\n\\begin{remark}\nProblem~\\ref{prob:general} is not equivalent to the toll design problem. The key difference between the two problems is that tolls modify the Wardrop equilibrium, but the performance of tolls is evaluated with respect to the original delay functions $\\tau_e$. On the contrary, in Problem~\\ref{prob:general} the intervention is evaluated with respect to the new delay functions $\\tau_e^{(u_e)}$.\n\\end{remark}\n\nProblem~\\ref{prob:general} is in general non-convex, and hard to solve because of its bi-level nature. For these reasons, in the next section we shall study a simplified problem where the delay functions are affine and the planner may intervene on one link only. In this setting we are able to rephrase the problem as a single-level optimization problem, and provide an electrical network interpretation of the problem.\n\n\n\n\\section{Single-link interventions in affine networks}\n\\label{sec3}\nIn this section we provide an electrical network formulation of the NDP under some restrictive assumptions. In particular, we provide a closed formula for the social cost variation in terms of electrical quantities computed on a related resistor network. To this end, we restrict our analysis to the space of feasible interventions $\\mathcal{U}$, defined as\n$$\n\\mathcal{U} := \\{u = u_e \\delta^{(e)} \\ \\text{for a link} \\ e \\in \\mathcal{E}, u_e \\ge 0 \\}.\n$$\nIn other words, $\\mathcal{U}$ represents the space of interventions on a single link of the network. 
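Even restricted to $\\mathcal{U}$, Problem~\\ref{prob:general} keeps its bi-level structure: for a candidate intervention $u$ one first solves the lower-level program \\eqref{eq:f(k)} and then evaluates the upper-level objective \\eqref{eq:ndp}. The following Python fragment is a minimal sketch of this two-step evaluation (it is not part of the original formulation: the function names are ours, and purely for illustration we take congestion terms $a_e(f_e)=a_ef_e$ and intervention costs $h_e(u_e)=u_e$).
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def lower_level_flow(B, nu, tau0, a, u):
    # Beckmann-type program: minimize sum_e [tau_e(0) f_e
    #   + a_e f_e^2 / (2 (1 + u_e))] over f >= 0 with B f = nu
    E = B.shape[1]
    obj = lambda f: np.sum(tau0 * f + a * f**2 / (2.0 * (1.0 + u)))
    cons = {'type': 'eq', 'fun': lambda f: B @ f - nu}
    res = minimize(obj, np.ones(E), bounds=[(0, None)] * E,
                   constraints=cons)
    return res.x

def upper_level_cost(B, nu, tau0, a, u, alpha):
    # social cost after the intervention plus intervention cost
    f = lower_level_flow(B, nu, tau0, a, u)
    delays = tau0 + a * f / (1.0 + u)
    return np.sum(f * delays) + alpha * np.sum(u)
\\end{verbatim}
Scanning the strength $u_e$ link by link on top of this evaluation is the brute-force baseline against which the electrical reformulation developed in this section can be compared.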
We also assume that the delay functions are affine, i.e., $\\tau_e(f_e) = a_ef_e + b_e$ for every $e$, and denote by $(\\mathcal{G},a,b,\\nu)$ routing games with affine delay functions.\nFor an intervention $u$, let $(\\mathcal{G},a^{(u)},b,\\nu)$ denote the corresponding affine routing game, $C^{(u)}$ denote the corresponding social cost, and $\\Delta C^{(u)} = C^{(0)} - C^{(u)}$ denote the social cost gain. Our problem can be expressed as follows.\n\\begin{problem}\n\tLet $(\\mathcal{G}, a, b, \\nu)$ be an affine routing game and $\\alpha \\ge0$ be the trade-off parameter. Find\n\t\\begin{equation*}\n\t\t\tu^* \\in \\ \\underset{u \\in \\mathcal{U}}{\\arg\\max}\n\t\t\t \\ (\\Delta C^{(u)} - \\alpha h(u)).\n\t\n\t\\end{equation*}\n\t\\label{prob1}\n\\end{problem}\n\nThe next example shows that the problem cannot be decoupled by first selecting the optimal link $e^*$ and then the optimal strength of the intervention $u^*_e$.\n\\begin{example}\n\t\\label{ex:ex}\n\tConsider the transportation network in Figure~\\ref{fig:ex}, with linear delay functions\n\t$\\tau_e(f_e) = a_ef_e$.\n\tBy some computation, one can prove that\n\t\\begin{equation*}\n\t\t\\begin{aligned}\n\t\t\t\\Delta C^{(u_1\\delta^{(1)})} & = m \\frac{a_1a_2^2 u_1}{(a_1+a_2)((u_1+1) a_1 + a_2)},\\\\\n\t\t\t\\Delta C^{(u_2 \\delta^{(2)})} & = m \\frac{a_1^2a_2 u_2}{(a_1+a_2)(a_1 + (u_2+1) a_2)},\\\\\n\t\t\t\\Delta C^{(u_3 \\delta^{(3)})} & = m \\frac{u_3}{u_3+1}.\n\t\t\\end{aligned}\n\t\\end{equation*}\n\tIn Figure~\\ref{fig:ex} the social cost variation corresponding to intervention on every link $e$ is illustrated as functions of $u_e$. Observe that the link that maximizes the social cost gain depends on $u_e$. Thus,\n\tthe problem cannot be decoupled by first selecting the optimal link $e^*$ and then the optimal $u_e^*$.\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}\n\t\t\t\\node[draw, circle] (1) at (0,0.5) {o};\n\t\t\t\\node[draw, circle] (2) at (0,-1) {n};\n\t\t\t\\node[draw, circle] (3) at (0,-2.5) {d};\n\t\t\t\\node (4) at (0,-3) {};\n\t\t\t\n\t\t\n\t\t\t\\path [->, >=latex] (1) edge [bend right=30]\n\t\t\tnode [left] {$e_1$} (2);\n\t\t\t\\path [->, >=latex] (1) edge [bend left=30]\n\t\t\tnode [right] {$e_2$} (2);\n\t\t\t\\path [->, >=latex] (2) edge [bend left=0]\n\t\t\tnode [right] {$e_3$} (3);\n\t\t\\end{tikzpicture}\n\t\t\\includegraphics[width = 6cm]{optimal_link_kappa.png}\n\t\t\\caption{\\emph{Left}: the graph of Example~\\ref{ex:ex}. 
\\emph{Right}: the social cost variation corresponding to single link interventions, with assignment $a_1 = 3, a_2 = 2, a_3 = 1, m = 3$.\n\t\t\t\\label{fig:ex}}\n\t\\end{figure}\n\\end{example}\n\nOur theoretical results rely on the following technical assumption, stating that the support of the Wardrop equilibrium is not modified with an intervention.\n\\begin{assumption}\n\tLet $\\mathcal{E}_+(u)$ be the set of links $e$ such that for the routing game $(\\mathcal{G}, a^{(u)}, b, \\nu)$ the Lagrangian multiplier $\\lambda_e^*(u)>0$.\n\tWe assume that $\\mathcal{E}_+(u)=\\mathcal{E}_+$ for every $u$ in $\\mathcal{U}$.\n\t\\label{assumption}\n\\end{assumption}\n\nAssumption~\\ref{assumption} is not new in the literature \\cite{steinberg1983prevalence,dafermos1984some}.\nWe will get back to the assumption in Section~\\ref{sec:assumption}.\nWith a slight abuse of notation, from now on let $\\mathcal{E}$ denote $\\mathcal{E} \\setminus \\mathcal{E}_+$.\nWe now define a mapping from the transportation network $\\mathcal{G}$ to an associated resistor network $\\mathcal{G}_R$.\n\\begin{definition}[Associated resistor network]\n\tGiven the transportation network $\\mathcal{G}=(\\mathcal{N},\\mathcal{E})$, the \\emph{associated resistor network} $\\mathcal{G}_R=(\\mathcal{N},\\mathcal{L},W)$ is constructed as follows:\n\t\t\\begin{itemize}\n\t\t\t\\item the node set $\\mathcal{N}$ is the same.\n\t\t\t\\item $W \\in \\mathds{R}^{\\mathrm{N} \\times \\mathrm{N}}$ is the conductance matrix, with elements\n\t\t\t\\begin{equation} \n\t\t\t\tW_{ij}= \\begin{cases}\n\t\t\t\t\t\\sum_{\\substack{e \\in \\mathcal{E}: \\\\\n\t\t\t\t\t\t\t\\xi(e)=i, \\theta(e)=j, \\ \\text{or} \\\\\n\t\t\t\t\t\t\t\\xi(e)=j, \\theta(e)=i}} \\frac{1}{a_e} & \\text{if}\\ i \\neq j \\\\\n\t\t\t\t\t0 & \\text{if}\\ i=j. \n\t\t\t\t\\end{cases}\n\t\t\t\t\\label{W}\n\t\t\t\\end{equation}\n\t\t\tNote that $W$ is symmetric, thus $\\mathcal{G}_R$ is undirected. The element $W_{ij}$ has to be interpreted as the conductance between nodes $i$ and $j$.\n\t\t\t\\item Multiple links connecting the same pair of nodes are not allowed, hence every link $l$ in $\\mathcal{L}$ can be identified by a unordered pair of nodes $\\{i,j\\}$, and the set $\\mathcal{L}$ is uniquely determined by $W$. Let $\\mathrm{L}$ denote the cardinality of $\\mathcal{L}$. The mapping $M: \\mathcal{E} \\to \\mathcal{L}$ associates to every link $e$ of the transportation network the corresponding link $l = M(e) = \\{\\xi(e),\\theta(e)\\}$ of the resistor network. Note by \\eqref{W} that $M(e)$ belongs to $\\mathcal{L}$ for every $e$ in $\\mathcal{E}$. \n\t\t\\end{itemize}\n\\end{definition}\n\nNote that the coefficients $a_e$ correspond to resistances in the resistor networks. We let $w = W\\mathbf{1}$ denote the degree distribution of the resistor network, and $w^* = \\max_{i \\in \\mathcal{N}}w_i$ denote the maximal degree. 
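The construction of the conductance matrix in \\eqref{W} is immediate to implement. The following sketch (an illustration only; the function and variable names are ours) builds $W$, the degree distribution $w=W\\mathbf{1}$ and the maximal degree $w^*$ for the three-link network of Example~\\ref{ex:ex}.
\\begin{verbatim}
import numpy as np

def conductance_matrix(n_nodes, links, a):
    # links: pairs (xi(e), theta(e)); a: coefficients a_e
    W = np.zeros((n_nodes, n_nodes))
    for (i, j), a_e in zip(links, a):
        W[i, j] += 1.0 / a_e   # parallel links add their conductances
        W[j, i] += 1.0 / a_e   # the resistor network is undirected
    return W

# nodes o=0, n=1, d=2 and links e1, e2, e3 with a = (3, 2, 1)
W = conductance_matrix(3, [(0, 1), (0, 1), (1, 2)], [3.0, 2.0, 1.0])
w = W.sum(axis=1)        # degree distribution w = W 1
w_star = w.max()         # maximal degree w*
\\end{verbatim}
Note that the two parallel links $e_1$ and $e_2$ are merged into a single resistor link of conductance $1/a_1+1/a_2$, as prescribed by \\eqref{W}.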
Before establishing our first main result, we define two relevant quantities.\n\\begin{definition}\n\t\\label{def:voltage}\n\tLet $v \\in \\mathds{R}^\\mathrm{N}$ be the \\emph{voltage} vector on $\\mathcal G_R$ when a net electrical current $m$ is injected from $\\mathrm{o}$ to $\\mathrm{d}$, i.e., $v$ is the unique solution of\n\\begin{equation}\n\t\\label{eq:voltage}\n\t\\sum_{k \\in \\mathcal{N}} W_{hk}(v_h- v_k)=m(\\delta^{(\\mathrm{o})}-\\delta^{(\\mathrm{d})})\\qquad \\forall h\\in\\mathcal N.\n\\end{equation}\nFor a link $e$ in $\\mathcal{E}$, let $y_e$ denote the \\emph{electrical current} flowing from $\\xi(e)$ to $\\theta(e)$ on link $M(e)$ of $\\mathcal G_R$, and let $\\Delta v_e = v_{\\xi(e)}-v_{\\theta(e)}$. By Ohm's law, $\\Delta v_e = a_e y_e$.\n\\end{definition}\n\\begin{definition}\n\t\\label{def:effective}\n\tLet $\\overline v \\in \\mathds{R}^\\mathrm{N}$ be the voltage vector on $\\mathcal G_R$ when a unitary current is injected from $i$ to $j$, i.e., \n\t\\begin{equation}\n\t\t\\label{eq:potential}\n\t\t\\sum_{k \\in \\mathcal{N}} W_{hk}(\\overline v_h- \\overline v_k)=\\delta^{(i)}-\\delta^{(j)}\\qquad \\forall h\\in\\mathcal N.\n\t\\end{equation}\n The \\emph{effective resistance} $r_{l}$ of link $l = \\{i,j\\}$ in $\\mathcal L$ is the effective resistance between $i$ and $j$, i.e., $r_{l}=\\overline{v}_i-\\overline{v}_j$. Given a link $e$ in $\\mathcal E$, we denote by $r_e$ the effective resistance of link $M(e)$ of the associated resistor network.\n\\end{definition}\n\nThe next theorem establishes a relation between the social cost gain with a single-link intervention and the associated resistor network. \n\\begin{theorem}\n\tLet $(\\mathcal{G}, a, b, \\nu)$ be an affine routing game, and let Assumption~\\ref{assumption} hold. Then,\n\t\\begin{equation}\n\t\t \\Delta C^{(u_e \\delta^{(e)})}=a_e f_e^*\\frac{y_e}{\\frac{1}{u_e}+\\frac{r_e}{a_e}}.\n\t\t \t\t\\label{el_form}\n\t\\end{equation}\n\n\t\\label{thm}\n\\end{theorem}\n\\begin{proof}\nSee Appendix~\\ref{app:proofs}.\n\\end{proof}\n\nThe ratio $r_{e}\/a_e$ belongs to $(0,1]$ and is also known as \\emph{spanning tree centrality}, which measures the fraction of spanning trees including link $M(e)$ among all spanning trees of the undirected network $\\mathcal G_R$ \\cite{hayashi2016efficient}. The spanning tree centrality of a link is maximized when removing the link disconnects the network. \nTheorem \\ref{thm} states that the social cost variation due to intervention on link $e$ is: \n\t\\begin{itemize}\n\t\t\\item proportional to $a_e f_e^*$, which measures the delay at the equilibrium due to congestion on link $e$;\n\t\t\\item decreasing in the spanning tree centrality. Intuitively speaking, the benefits of intervention on link $e$ is larger when the intervention modifies the equilibrium flows so that agents can move from paths not including $e$ to paths including $e$, namely when $f_e^*$ increases after the intervention. This phenomenon does not occur if $e$ is a bridge, i.e., if $r_{e}\/a_e=1$, and occurs largely when many paths from $\\xi(e)$ to $\\theta(e)$ exist, i.e., when $r_{e}\/a_e$ is small;\n\t\t\\item proportional to the current $y_e$. The role of this term is more clear in the special case of linear delay functions. In this case $y_e = f_e^*$ for all links $e$ in $\\mathcal{E} \\setminus \\mathcal E_+ $, hence $a_e f_e^* y_e^* = a_e (f_e^*)^2$, which is the total travel time on link $e$ before the intervention. 
\n\t\\end{itemize}\nThe idea behind the proof is that with affine delay functions the KKT conditions of the Wardrop equilibrium are linear, and under Assumption~\\ref{assumption} single-link interventions are equivalent to rank-1 perturbations of the system. Thus, by Lemma~\\ref{cost} we can compute the cost variation by looking at the Lagrangian multiplier $\\gamma_\\mathrm{o}^*$, and then express such a variation in terms of electrical quantities.\nIn order to solve Problem~\\ref{prob1} by the electrical formulation, we need to compute \\eqref{el_form} for every link $e$ in $\\mathcal E$. The Wardrop equilibrium $f^*$ is assumed to be observable and therefore given. The voltage $v$ (and thus $y$) can be derived by solving the linear system \\eqref{eq:voltage} and has to be computed only once.\nOn the contrary, the computation of $r_{e}$ must be repeated for every link, hence it requires solving $\\mathrm{L}$ sparse linear systems. To reduce the computational effort, in Section~\\ref{approximation} we shall propose a method to \\textit{approximate} the effective resistance of a link that, under a suitable assumption on the sparseness of the network, does not scale with the network size, allowing for a more efficient solution to Problem~\\ref{prob1}. The next result shows how to compute the derivative of the social cost variation for small interventions.\n\\begin{corollary}\nLet $(\\mathcal{G}, a, b, \\nu)$ be a routing game, and assume that for every $i$ in $\\mathcal E$ it holds either $f_i^*>0$ or $\\lambda_i^*>0$. Then,\n$$\n\\frac{\\partial \\Delta C(u)}{\\partial u_e}\\Big|_{u = \\mathbf{0}} = \n\\begin{cases}\na_e f_e^* y_e \\quad & \\text{if} \\ \\lambda_e^* = 0, \\\\\n0 & \\text{if} \\ \\lambda_e^* > 0.\n\\end{cases}\n$$\n\\end{corollary}\n\\medskip\n\\begin{proof}\nThe fact that for every link $i$ it holds either $f_i^*>0$ or $\\lambda_i^*>0$ implies that for infinitesimal interventions the support of $f^*$ is not modified. If $\\lambda_e^*=0$, then $f_e^*>0$ and we can differentiate the social cost variation in \\eqref{el_form} with respect to $u_e$. The case $\\lambda_e^*>0$ follows from continuity arguments and from the complementary slackness condition, which implies that $f_e^*(u)=0$ in a neighborhood of $u = \\mathbf{0}$.\n\\end{proof}\n\\begin{remark}\nObserve that the derivative of the social cost does not depend on the effective resistance of the link.\n\\end{remark}\n\n\\subsection{On the validity of Assumption~\\ref{assumption}}\n\\label{sec:assumption}\nIn this section we discuss Assumption~\\ref{assumption}. In particular, we show that the assumption is without loss of generality on series-parallel networks, if the throughput is sufficiently large. 
We first recall the definition of directed series-parallel networks, and then present the result in Proposition~\\ref{sp_prp}.\n\\begin{definition\n\tA directed network $\\mathcal{G}$ is series-parallel if and only if\n\t(i) it is composed of two nodes only ($\\mathrm{o}$ and $\\mathrm{d}$), connected by single link from $\\mathrm{o}$ to $\\mathrm{d}$, or\n\t(ii) it is the result of connecting two directed series-parallel networks $\\mathcal{G}_1$ and $\\mathcal{G}_2$ in parallel, by merging $\\mathrm{o}_1$ with $\\mathrm{o}_2$ and $\\mathrm{d}_1$ with $\\mathrm{d}_2$, or\n\t(iii) it is the result of connecting two directed series-parallel networks $\\mathcal{G}_1$ and $\\mathcal{G}_2$ in series, by merging $\\mathrm{d}_1$ with $\\mathrm{o}_2$.\n\\end{definition}\n\n\\begin{proposition}\n\t\\label{sp_prp}\n\tLet $(\\mathcal{G}, a, b, \\nu)$ be a routing game. If $\\mathcal{G}$ is series-parallel, there exists $\\overline{m}$ such that for every $m \\ge \\overline{m}$, $\\mathcal{E}_+=\\emptyset$. Furthermore, if $b=\\mathbf{0}$, $\\mathcal{E}_+=\\emptyset$ for every $m >0$.\n\\end{proposition}\n\\begin{proof}\n\tSee Appendix~\\ref{app:proofs}.\n\\end{proof}\n\n\\begin{remark}\n\tProposition~\\ref{sp_prp} implies that Assumption~\\ref{assumption} is without loss of generality on series-parallel networks if $m \\ge \\overline{m}$.\n\\end{remark}\n\nThe next example shows that, if the throughput is not sufficiently large, Assumption~\\ref{assumption} may be violated.\n\\begin{example}\n\tConsider the series-parallel network in Figure~\\ref{wheatstone}.\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale = 0.7]\n\t\t\t\\node[draw, circle] (1) at (0,0) {o};\n\t\t\t\\node[draw, circle] (2) at (4,0) {d};\n\t\t\n\t\t\t\\path [->, >=latex] (1) edge [bend right=30]\n\t\t\tnode [below] {$e_1$} (2);\n\t\t\t\\path [->, >=latex] (1) edge [bend left=30]\n\t\t\tnode [above] {$e_2$} (2);\n\t\t\\end{tikzpicture}\n\t\t\\caption{A directed series-parallel network. If the throughput is not sufficiently large, Assumption~\\ref{assumption} is not guaranteed to hold.}\n\t\t\\label{wheatstone}\n\t\\end{figure}\n\tLet $m=1$, and consider affine delay functions $\\tau_1(f_1)=f_1+1, \\tau_2(f_2)=f_2+3\/2$. One can verify that\n\t\\begin{equation*}\n\t\tf_1^*=3\/4, \\quad f_2^*=1\/4, \\quad \\lambda_1^*=\\lambda_2^*=0.\n\t\\end{equation*}\n\tModifying $a_1$ from $1$ to $1\/3$ (i.e., with $u=2\\delta^{(1)}$), we get:\n\t\\begin{equation*}\n\t\tf_1^*(u)=1, \\quad f_2^*(u)=0, \\quad \\lambda_1^*(u)=0, \\quad \\lambda_2^*(u)=1\/6,\n\t\\end{equation*}\n\tviolating Assumption~\\ref{assumption}. Proposition~\\ref{sp_prp} proves that this does not occur if $m$ is sufficiently large.\n\n\\end{example}\n\n\n\n\\section{An approximate solution to Problem 1}\n\\label{approximation}\nAs shown in the previous section, Problem~\\ref{prob1} may be rephrased in terms of electrical quantities over a related resistor network. Solving the NDP problem in this formulation requires to solve $\\mathrm{L}$ linear systems whose dimension scales linearly with $\\mathrm{N}$. Since the voltage $v$ may be computed in quasi-linear time by solving the sparse linear system \\eqref{eq:voltage} (see \\cite{cohen2014solving} for more details), the computational bottleneck is given by the computation of the effective resistance of every link of the resistor network. 
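Before turning to the approximation, it is worth fixing the exact baseline. One standard way of obtaining all the effective resistances at once is through the Moore--Penrose pseudoinverse of the weighted Laplacian, as in the sketch below (illustrative only; this dense global computation, or equivalently the $\\mathrm{L}$ sparse linear systems \\eqref{eq:potential}, is precisely the cost that the local method of this section avoids).
\\begin{verbatim}
import numpy as np

def effective_resistances(W, pairs):
    # W: conductance matrix; pairs: unordered node pairs {i, j}
    Lap = np.diag(W.sum(axis=1)) - W       # weighted Laplacian
    Lp = np.linalg.pinv(Lap)               # one global (dense) solve
    return [Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j] for i, j in pairs]
\\end{verbatim}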
\nThe main idea of our method is that, although the effective resistance of a link depends on the entire network, it can be approximate by looking at a local portion of the network only.\nWe then formulate an algorithm to solve Problem~\\ref{prob1} by exploiting our approximation method.\n\\subsection{Approximating the effective resistance}\n\\label{method}\nWe introduce the following operations on resistor networks.\n\\begin{definition}[Cutting at distance $d$]\n\t\\label{def:cutting}\n\tA resistor network $\\mathcal{G}_R$ is cut at distance $d$ from link $l = \\{i,j\\}$ in $\\mathcal{L}$ if every node at distance greater than $d$ from link $l$ (i.e., from both $i$ and $j$) is removed, and every link with at least one endpoint in the set of the removed nodes is removed. Let $\\mathcal{G}^{U_{d}}_{l}$ and $r_{l}^{U_d}$ denote such a network and the effective resistance of link $l$ on it, respectively.\n\\end{definition}\n\n\\begin{definition}[Shorting at distance $d$]\n\tA resistor network $\\mathcal{G}_R$ is shorted at distance $d$ from $l$ in $\\mathcal{L}$ if all the nodes at distance greater than $d$ from link $l$ are shorted together, i.e., an infinite conductance is added between each pair of such nodes. Let $\\mathcal{G}^{L_{d}}_{l}$ and $r_{l}^{L_d}$ denote such a network and the effective resistance of link $l$ on it, respectively.\n\\end{definition}\n\nWe refer to Figure~\\ref{quad} for an example of these techniques applied to a regular grid.\nWe next prove that $r^{U_d}_{l}$ and $r^{L_d}_{l}$ are respectively an upper and a lower bound for the effective resistance $r_{l}$ for every link $l$. To this end, let us introduce Rayleigh's monotonicity laws.\n\\begin{figure}\n\t\\centering\n\t\\begin{tikzpicture}[scale=0.7, transform shape]\n\t\t\\node[draw,fill=green] (1) at (0,0) {};\n\t\t\\node[draw,fill=green] (2) at (1,0) {};\n\t\n\t\t\\node[draw,fill=yellow] (3) at (-1,0) {};\n\t\t\\node[draw,fill=yellow] (4) at (0,1) {};\n\t\t\\node[draw,fill=yellow] (5) at (1,1) {};\n\t\t\\node[draw,fill=yellow] (6) at (2,0) {};\n\t\t\\node[draw,fill=yellow] (7) at (1,-1) {};\n\t\t\\node[draw,fill=yellow] (8) at (0,-1) {};\n\t\n\t\t\\node[draw,fill=orange] (9) at (-2,0) {};\n\t\t\\node[draw,fill=orange] (10) at (-1,1) {};\n\t\t\\node[draw,fill=orange] (11) at (0,2) {};\n\t\t\\node[draw,fill=orange] (12) at (1,2) {};\n\t\t\\node[draw,fill=orange] (13) at (2,1) {};\n\t\t\\node[draw,fill=orange] (14) at (3,0) {};\n\t\t\\node[draw,fill=orange] (15) at (2,-1) {};\n\t\t\\node[draw,fill=orange] (16) at (1,-2) {};\n\t\t\\node[draw,fill=orange] (17) at (0,-2) {};\n\t\t\\node[draw,fill=orange] (18) at (-1,-1) {};\n\t\n\t\t\\node[draw,fill=red] (19) at (-3,0) {};\n\t\t\\node[draw,fill=red] (20) at (-2,1) {};\n\t\t\\node[draw,fill=red] (21) at (-1,2) {};\n\t\t\\node[draw,fill=red] (22) at (0,3) {};\n\t\t\\node[draw,fill=red] (23) at (1,3) {};\n\t\t\\node[draw,fill=red] (24) at (2,2) {};\n\t\t\\node[draw,fill=red] (25) at (3,1) {};\n\t\t\\node[draw,fill=red] (26) at (4,0) {};\n\t\t\\node[draw,fill=red] (27) at (3,-1) {};\n\t\t\\node[draw,fill=red] (28) at (2,-2) {};\n\t\t\\node[draw,fill=red] (29) at (1,-3) {};\n\t\t\\node[draw,fill=red] (30) at (0,-3) {};\n\t\t\\node[draw,fill=red] (31) at (-1,-2) {};\n\t\t\\node[draw,fill=red] (32) at (-2,-1) {};\n\t\n\t\t\\node[draw,fill=none] (33) at (-3,1) {};\n\t\t\\node[draw,fill=none] (34) at (-2,2) {};\n\t\t\\node[draw,fill=none] (35) at (-1,3) {};\n\t\t\\node[draw,fill=none] (36) at (2,3) {};\n\t\t\\node[draw,fill=none] (37) at (3,2) 
{};\n\t\t\\node[draw,fill=none] (38) at (4,1) {};\n\t\t\\node[draw,fill=none] (39) at (4,-1) {};\n\t\t\\node[draw,fill=none] (40) at (3,-2) {};\n\t\t\\node[draw,fill=none] (41) at (2,-3) {};\n\t\t\\node[draw,fill=none] (42) at (-1,-3) {};\n\t\t\\node[draw,fill=none] (43) at (-2,-2) {};\n\t\t\\node[draw,fill=none] (44) at (-3,-1) {};\n\t\t\\node[draw,fill=none] (45) at (-3,2) {};\n\t\t\\node[draw,fill=none] (46) at (-2,3) {};\n\t\t\\node[draw,fill=none] (47) at (3,3) {};\n\t\t\\node[draw,fill=none] (48) at (4,2) {};\n\t\t\\node[draw,fill=none] (49) at (4,-2) {};\n\t\t\\node[draw,fill=none] (50) at (3,-3) {};\n\t\t\\node[draw,fill=none] (51) at (-2,-3) {};\n\t\t\\node[draw,fill=none] (52) at (-3,-2) {};\n\t\t\\node[draw,fill=none] (53) at (-3,3) {};\n\t\t\\node[draw,fill=none] (54) at (4,3) {};\n\t\t\\node[draw,fill=none] (55) at (4,-3) {};\n\t\t\\node[draw,fill=none] (56) at (-3,-3) {};\n\t\n\t\t\\path (1) edge node [above] {$l$} (2);\n\t\t\\path (1) edge (3);\n\t\t\\path (1) edge (4);\n\t\t\\path (1) edge (8);\n\t\t\\path (2) edge (5);\n\t\t\\path (2) edge (6);\n\t\t\\path (2) edge (7);\n\t\t\\path (4) edge (5);\n\t\t\\path (7) edge (8);\n\t\n\t\t\\path (3) edge (9);\n\t\t\\path (3) edge (10);\n\t\t\\path (10) edge (4);\n\t\t\\path (4) edge (11);\n\t\t\\path (5) edge (12);\n\t\t\\path (11) edge (12);\n\t\t\\path (3) edge (9);\n\t\t\\path (3) edge (10);\n\t\t\\path (10) edge (4);\n\t\t\\path (5) edge (13);\n\t\t\\path (6) edge (13);\n\t\t\\path (6) edge (14);\n\t\t\\path (6) edge (15);\n\t\t\\path (15) edge (7);\n\t\t\\path (16) edge (7);\n\t\t\\path (16) edge (17);\n\t\t\\path (8) edge (17);\n\t\t\\path (8) edge (18);\n\t\t\\path (3) edge (18);\n\t\n\t\t\\path (19) edge (9);\n\t\t\\path (9) edge (20);\n\t\t\\path (10) edge (20);\n\t\t\\path (10) edge (21);\n\t\t\\path (11) edge (21);\n\t\t\\path (11) edge (22);\n\t\t\\path (22) edge (23);\n\t\t\\path (12) edge (23);\n\t\t\\path (12) edge (24);\n\t\t\\path (24) edge (13);\n\t\t\\path (25) edge (13);\n\t\t\\path (25) edge (14);\n\t\t\\path (26) edge (14);\n\t\t\\path (27) edge (14);\n\t\t\\path (15) edge (28);\n\t\t\\path (15) edge (27);\n\t\t\\path (16) edge (28);\n\t\t\\path (16) edge (29);\n\t\t\\path (29) edge (30);\n\t\t\\path (17) edge (30);\n\t\t\\path (17) edge (31);\n\t\t\\path (31) edge (18);\n\t\t\\path (32) edge (9);\n\t\t\\path (32) edge (18);\n\t\n\t\t\\path (53) edge (46);\n\t\t\\path (46) edge (35);\n\t\t\\path (35) edge (22);\n\t\t\\path (23) edge (36);\n\t\t\\path (36) edge (47);\n\t\t\\path (47) edge (54);\n\t\t\\path (45) edge (34);\n\t\t\\path (34) edge (21);\n\t\t\\path (24) edge (37);\n\t\t\\path (37) edge (48);\n\t\t\\path (33) edge (20);\n\t\t\\path (25) edge (38);\n\t\t\\path (44) edge (32);\n\t\t\\path (27) edge (39);\n\t\t\\path (52) edge (43);\n\t\t\\path (43) edge (31);\n\t\t\\path (28) edge (40);\n\t\t\\path (40) edge (49);\n\t\t\\path (56) edge (51);\n\t\t\\path (51) edge (42);\n\t\t\\path (42) edge (30);\n\t\t\\path (29) edge (41);\n\t\t\\path (41) edge (50);\n\t\t\\path (50) edge (55);\n\t\n\t\t\\path (53) edge (45);\n\t\t\\path (46) edge (34);\n\t\t\\path (35) edge (21);\n\t\t\\path (24) edge (36);\n\t\t\\path (37) edge (47);\n\t\t\\path (48) edge (54);\n\t\t\\path (45) edge (33);\n\t\t\\path (34) edge (20);\n\t\t\\path (25) edge (37);\n\t\t\\path (38) edge (48);\n\t\t\\path (33) edge (19);\n\t\t\\path (26) edge (38);\n\t\t\\path (44) edge (19);\n\t\t\\path (26) edge (39);\n\t\t\\path (52) edge (44);\n\t\t\\path (43) edge (32);\n\t\t\\path (27) edge (40);\n\t\t\\path (39) edge (49);\n\t\t\\path (56) edge 
(52);\n\t\t\\path (51) edge (43);\n\t\t\\path (42) edge (31);\n\t\t\\path (28) edge (41);\n\t\t\\path (40) edge (50);\n\t\t\\path (49) edge (55);\n\t\\end{tikzpicture}\n\t\\vspace{0.3cm} \\\\\n\t\\centering{$\\mathcal{G}_{l}^{U_1}$ \\hspace{2.6cm} $\\mathcal{G}_{l}^{L_1}$} \\\\\n\t\\vspace{0.2cm}\n\t\t\\centering\n\t\t\\begin{tikzpicture}[scale=0.85, transform shape]\n\t\t\t\\node[draw,fill=green] (1) at (0,0) {};\n\t\t\t\\node[draw,fill=green] (2) at (1,0) {};\n\t\t\n\t\t\t\\node[draw,fill=yellow] (3) at (-1,0) {};\n\t\t\t\\node[draw,fill=yellow] (4) at (0,1) {};\n\t\t\t\\node[draw,fill=yellow] (5) at (1,1) {};\n\t\t\t\\node[draw,fill=yellow] (6) at (2,0) {};\n\t\t\t\\node[draw,fill=yellow] (7) at (1,-1) {};\n\t\t\t\\node[draw,fill=yellow] (8) at (0,-1) {};\n\t\t\n\t\t\t\\path (1) edge node [above] {$l$} (2);\n\t\t\t\\path (1) edge (3);\n\t\t\t\\path (1) edge (4);\n\t\t\t\\path (1) edge (8);\n\t\t\t\\path (2) edge (5);\n\t\t\t\\path (2) edge (6);\n\t\t\t\\path (2) edge (7);\n\t\t\t\\path (4) edge (5);\n\t\t\t\\path (7) edge (8);\n\t\t\\end{tikzpicture}\n\t\\quad\n\t\t\\begin{tikzpicture}[scale=0.65, transform shape]\n\t\t\t\\node[draw,fill=green] (1) at (0,0) {};\n\t\t\t\\node[draw,fill=green] (2) at (1,0) {};\n\t\t\n\t\t\t\\node[draw,fill=yellow] (3) at (-1,0) {};\n\t\t\t\\node[draw,fill=yellow] (4) at (0,1) {};\n\t\t\t\\node[draw,fill=yellow] (5) at (1,1) {};\n\t\t\t\\node[draw,fill=yellow] (6) at (2,0) {};\n\t\t\t\\node[draw,fill=yellow] (7) at (1,-1) {};\n\t\t\t\\node[draw,fill=yellow] (8) at (0,-1) {};\n\t\t\t\\node[draw,fill=orange] (9) at \n\t\t\t(0.5,2) {s};\n\t\t\n\t\t\t\\path (1) edge node [above] {$l$} (2);\n\t\t\t\\path (1) edge (3);\n\t\t\t\\path (1) edge (4);\n\t\t\t\\path (1) edge (8);\n\t\t\t\\path (2) edge (5);\n\t\t\t\\path (2) edge (6);\n\t\t\t\\path (2) edge (7);\n\t\t\t\\path (4) edge (5);\n\t\t\t\\path (7) edge (8);\n\t\t\n\t\t\t\\path (4) edge [bend right=0]\n\t\t\tnode [below] {} (9);\n\t\t\t\\path (5) edge [bend right=0]\n\t\t\tnode [below] {} (9);\n\t\t\t\\path (3) edge [bend right=-20]\n\t\t\tnode [below] {} (9);\n\t\t\t\\path (6) edge [bend right=20]\n\t\t\tnode [below] {} (9);\n\t\t\n\t\t\t\\coordinate (ghost) at (3, 0);\n\t\t\t\\coordinate (ghost2) at (-2,0);\n\t\t\n\t\t\t\\draw[-] (7) to[out=0, in=-90] (ghost) to[out=90, in=0] (9);\n\t\t\t\\draw[-] (8) to[out=180, in=-90] (ghost2)\n\t\t\tto[out=90, in=180] (9);\n\t\t\\end{tikzpicture}\n\t\\caption{Square grid. \\emph{Above}: the yellow, orange and red nodes are at distance $1$, $2$ and $3$, respectively from the green nodes.\n\t\\emph{Bottom-left}: the grid cut at distance $1$ from link $l$.\n\t\t\\emph{Bottom-right}: the grid shorted at distance $1$ from link $l$. \n\t\tNote that in the bottom right network the links connecting yellow nodes with node $s$ do not have unitary weights.}\n\t\\label{quad}\n\\end{figure}\n\\begin{lemma}[Rayleigh's monotonicity laws \\cite{levin2017markov}]\n\tIf the resistances of one or more links are increased, the effective\n\tresistance between two arbitrary nodes cannot decrease.\n\tIf the resistances of one or more links are decreased, the effective resistance cannot increase.\n\t\\label{ray}\n\\end{lemma}\n\\begin{proposition}\n\t\t\\label{prp_ray}\n\tLet $\\mathcal{G}_R$ be a resistor network. 
For every link $l=\\{i,j\\}$ in $\\mathcal{L}$, \n\t\\begin{equation*}\n\t\t\\label{bounds}\n\t\tr^{U_{d_1}}_{l}\\ge r^{U_{d_2}}_{l} \\ge r_{l} \\ge r^{L_{d_2}}_{l} \\ge r^{L_{d_1}}_{l}, \\quad \\forall d_2 \\ge d_1 \\ge 1.\n\t\\end{equation*}\n\tMoreover,\n\t\\begin{equation}\n\t\t1\/w^* \\le r^{L_d}_{l}\\le r^{U_d}_{l} \\le 1\/W_{ij}, \\quad \\forall d \\ge 1.\n\t\t\\label{remark}\n\t\\end{equation}\n\\end{proposition}\\medskip\n\\begin{proof}\nCutting a network at distance $d$ is equivalent to setting to infinity the resistance of all the links with at least one endpoint at distance greater than $d$. Shorting a network at distance $d$ is equivalent to setting to zero the resistance between any pair of nodes at distance greater than $d$. Then, by Rayleigh's monotonicity laws, it follows $r^{U_d}_{l}\\ge r_{l} \\ge r^{L_d}_{l}$. Similar arguments may be used to show that, if $d_10$ such that the random walk hits the set $\\mathcal{S}$), respectively.\n\t\\item $\\mathcal{N}_d$ denote the set of the nodes that are at distance $d$ from link $l = \\{i,j\\}$, i.e., at distance $d$ from $i$ (or $j$) and at distance greater or equal than $d$ from $j$ (or $i$). Index $l$ is omitted for simplicity of notation.\n\t\\item $p_k(X)$, $p_k^{U_d}(X)$ and $p_k^{L_d}(X)$, denote the probability that event $X$ occurs, conditioned on the fact that the random walk starts in $k$ at time $0$ and evolves over the resistor networks $\\mathcal{G}_R$, $\\mathcal{G}_{l}^{U_d}$ and $\\mathcal{G}_{l}^{L_d}$, respectively.\n\\end{itemize} \nThe next result provides a characterization of the bound gap in terms of random walks over $\\mathcal{G}_R$, $\\mathcal{G}_{l}^{U_d}$ and $\\mathcal{G}_{l}^{L_d}$.\n\\begin{proposition}\n\t\\label{gap}\n\tLet $\\mathcal{G}_R=(\\mathcal{N},\\mathcal{L},W)$ be a resistor network. For every link $l = \\{i,j\\}$ in $\\mathcal{L}$,\n\t\\begin{equation}\n\t\t\\begin{aligned}\n\t\tr^{U_d}_{l}-r^{L_d}_{l}\n\t\t& \\le \\frac{w_{i}}{(W_{ij})^2} \\underbrace{p_i(T_{\\mathcal{N}_d} < T_j)}_{\\text{Term 1}} \\ \\cdot \\\\\n\t\t& \\cdot \\underset{g \\in \\mathcal{N}_d}{ \\max} \\underbrace{\\big(p^{U_d}_g(T_i < T_j)-p^{L_d}_g(T_i < T_j)\\big)}_{\\text{Term 2}},\n\t\t\\end{aligned}\n\t\t\\label{equat_gap}\n\t\\end{equation}\nwhere the quantities in \\eqref{equat_gap} are computed with respect to the continuous-time Markov chain with transition rates $W$. \n\\end{proposition}\n\\begin{proof}\nSee Appendix~\\ref{app:proofs}.\n\\end{proof}\n\nIn the next sections we shall use this result to analyze the asymptotic behaviour of the bound gap for an arbitrary link $l$ in $\\mathcal{L}$ as $d \\to +\\infty$, for networks whose node set is infinite and countable. In particular, we show in Section~\\ref{rec_sec} that this error vanishes asymptotically for the class of recurrent networks. The core idea to prove this result is to show that Term 1 vanishes. To generalize our analysis beyond recurrent networks, in Section~\\ref{beyond} we study both Term 1 and 2 and provide examples showing that all combinations in Table~\\ref{table_bounds} are possible. In particular, it is possible that the bound gap vanishes asymptotically for non-recurrent networks (for which Term 1 $\\nrightarrow 0$, see \\cite[Section 21.2]{levin2017markov}) if Term 2 $\\rightarrow 0$.\n\\begin{table}\n\t\\caption{All the four cases are possible, as shown in Section~\\ref{beyond}. Term 1 $\\rightarrow 0$ under the assumption that the network is recurrent, as proved in Section~\\ref{rec_sec}. 
\\label{table_bounds}}\n\t\\centering\n\t\t\\begin{tabular}{lll}\n\t\t\t& Term 2 $\\rightarrow 0$ & Term 2 $\\nrightarrow 0$ \\\\\n\t\t\t\\hline\n\t\t\tTerm 1 $\\rightarrow 0$\n\t\t\t& 2d grid\n\t\t\t& Ring \\\\\n\t\t\tTerm 1 $\\nrightarrow 0$\n\t\t\t& 3d grid\n\t\t\t& Double tree\\\\\n\t\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\\subsection{Recurrent networks} \n\\label{rec_sec}\nWe start by introducing the class of recurrent networks.\n\\begin{definition}[Recurrent random walk]\n\tA random walk is recurrent if, for every starting point, it visits its starting node infinitely often with probability one \\cite[Section 21.1]{levin2017markov}.\n\\end{definition}\n\n\\begin{definition}[Recurrent network]\n\tAn infinite resistor network $\\mathcal{G}_R=(\\mathcal{N},\\mathcal{L}, W)$ is recurrent if the random walk on the network is recurrent.\n\\end{definition}\n\nThe next theorem states that the bound gap vanishes asymptotically on recurrent networks if the degree of every node is finite. Note that the boundedness of the degree of all the nodes is guaranteed under Assumption~\\ref{sparse}.\n\\begin{theorem}\n\tLet $\\mathcal{G}_R=(\\mathcal{N},\\mathcal{L},W)$ be an infinite recurrent resistor network, and let $w^*<+\\infty$. Then, for every $l$ in $\\mathcal{L}$,\n\t\\begin{equation*}\n\t\t\\lim_{d \\rightarrow +\\infty} (r^{U_d}_{l}-r^{L_d}_{l})=0,\n\t\\end{equation*}\n\t\\label{recurrent}\n\\end{theorem}\n\\begin{proof}\nIt is proved in \\cite[Proposition 21.3]{levin2017markov} that a network is recurrent if and only if \n\\begin{equation}\n\t\\lim_{d \\rightarrow +\\infty}p_i(T_{\\mathcal{N}_d}0$ (recall that $i$ and $j$ are adjacent nodes), it follows\n\\begin{equation*}\n\t\\lim_{d \\rightarrow +\\infty} r^{U_d}_{l}-r^{L_d}_{l}\\le \\frac{w^*}{(W_{ij})^2} \\lim_{d \\rightarrow +\\infty} p_i(T_{\\mathcal{N}_d},shorten >=1pt,auto, node distance = 0.5cm, semithick]\n\t\t(1) edge node {$l_1$} (2) \n\t\t(2) edge node {$l_2$} (3)\n\t\t(3) edge node {$l_3$} (4)\n\t\t(4) edge node {$l_4$} (5)\n\t\t(1) edge node {$l_5$} (6)\n\t\t(6) edge node {$l_6$} (7)\n\t\t(7) edge node {$l_7$} (8)\n\t\t(8) edge node {$l_8$} (9)\n\t\t(9) edge node {$l_{9}$} (13)\n\t\t(2) edge node {$l_{10}$} (7)\n\t\t(3) edge node {$l_{11}$} (8)\n\t\t(3) edge node {$l_{12}$} (9)\n\t\t(4) edge node {$l_{13}$} (9)\n\t\t(5) edge node {$l_{14}$} (14)\n\t\t(6) edge node {$l_{15}$} (10)\n\t\t(10) edge node {$l_{16}$} (11)\n\t\t(10) edge[left] node {$l_{17}$} (15)\n\t\t(7) edge node {$l_{18}$} (10)\n\t\t(8) edge node {$l_{19}$} (11)\n\t\t(9) edge node {$l_{20}$} (12)\n\t\t(11) edge node {$l_{21}$} (12)\n\t\t(12) edge node {$l_{22}$} (13)\n\t\t(13) edge node {$l_{23}$} (14)\n\t\t(11) edge node {$l_{24}$} (15)\n\t\t(13) edge[left] node {$l_{25}$} (17)\n\t\t(14) edge node {$l_{26}$} (17)\n\t\t(15) edge[below] node {$l_{27}$} (16)\n\t\t(16) edge[below] node {$l_{28}$} (17);\n\t\\end{tikzpicture}\n\t\\caption{\\emph{Top}: the highway network in Los Angeles. \\emph{Bottom}: a graph representation of the network, where node $1$ (Santa Monica) and $17$ (Santa Ana) are respectively the origin and the destination. \\label{fig:mapLA}}\n\\end{figure} \n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=7cm]{quarta2-5links.png}\n\t\\caption{\\emph{Top}: Social cost variation for interventions in the form $u = 3 \\delta^{(e)}$ for a routing game on the graph of Figure~\\ref{fig:mapLA} with delay functions in the form $\\tau_e(f_e) = a_e (f_e)^4 + b_e$. 
The cost variation is computed by solving convex optimization (\\emph{exact}) and by adapting Theorem~\\ref{thm} to the case of non-linear delay functions (\\emph{approximated}), as explained in Section~\\ref{sec:relax_ass}. The plot illustrates the social cost variation for the five links that maximize the cost variation.\\label{fig:non_linear}}\n\\end{figure}\n\n\\section{Conclusion}\nIn this work we study a network design problem where a single link can be improved. Under the assumption that the support of the Wardrop equilibrium is not modified with an intervention, we reformulate the problem in terms of electrical quantities computed on a related resistor network, in particular in terms of the effective resistance of a link. We then provide a method to approximate such an effective resistance by performing only local computation, which may be of separate interest.\nBased on the electrical formulation and our approximation method for the effective resistance, we propose an efficient algorithm to solve the network design problem. We then show by numerical examples that our method can be adapted to routing games with non-linear delay functions, and achieves good performance even if the support of the equilibrium is modified by the intervention.\n\nAn interesting direction for the future is a deeper analysis of the tightness of the bounds on the effective resistance for finite distance $d$. Future research lines also include extending the analysis to the case of multiple interventions. Indeed, the general problem is not submodular, thus guarantees on the performance of a greedy algorithm are not given. A possible direction is to exploit the closed formula for the social cost derivative to implement gradient descent algorithms. Other directions include extending the theoretical framework to the case of multiple origin-destination pairs and heterogeneous preferences \\cite{Cianfanelli.ea:19,Cianfanelli.ea:22}.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCooling mesoscopic and microscopic resonators down to their minimum-energy state is fundamental to observe the classical-quantum transition and to exploit the quantum advantage in nanoscience~\\cite{Nanoscience,OptoReview}. The ground-state preparation is also a crucial and implicit step in quantum information processes, including but not limited to the continuous-variable quantum computations~\\cite{ContinuousVariables,YouSuperconductingCircuits,BosonSampling,\nPhononArithmetic}, the ultrahigh precision measurements~\\cite{ForceMeasureByOscillator1,ForceMeasureByOscillator2}, and the quantum interface constructions~\\cite{CoolingMagnon}. Various cooling strategies are designed to attain an effective temperature as low as possible in trapped atom and ion systems~\\cite{SidebandCooling,SelfCooling,BeatingSidebandCooling}. In atomic laser cooling, the popular strategies consist of laser Doppler cooling~\\cite{CoolingMagnon,LaserCooling2013,LaserCooling1995}, resolved-sideband cooling, and electromagnetically induced transparency (EIT) cooling~\\cite{EITCoolingTheory,EITCoolingExperiment}.\n\nBeyond the paradigms extracting the system energy through the designed dissipation channels based on the blue-shifted (anti-Stokes) sidebands, nondeterministic methods of measurement-based cooling have been proposed in theory~\\cite{Purification,OneModeCooling} and demonstrated in experiment~\\cite{MeasurementCoolingExp}. 
Rather than providing a unidirectional decay channel for the target system, the nonunitary evolution induced by repeated measurements of the ground state of the detector system leads to the postselection of the ground state of the target system (typically modelled as a resonator) and the reduction of its high-energy distribution in the ensemble. In another word, the resonator is gradually steered by the outcomes of the conditional measurement (CM) to its ground state via dynamically filtering out its vibrational modes. Ranging from cooling the nonlinear mechanical resonators~\\cite{CoolingNonlinearOscillator}, cooling by one shot measurement~\\cite{OneShotMeasurement}, expanding cooling range by an external driving~\\cite{ExternalLevelCooling}, to accelerating cooling rate by optimizing the measurement intervals~\\cite{TwoModeCooling}, an unexplored weakness of all these CM strategies is their small successful probability inherited from the projective operation (postselection). An inevitable amount of cost rises with more samples in ensemble. The unconditional measurement (UM) strategy, in sharp contrast to CM, performs a nonselective and impulsive measurement in all the bases of the target bare Hamiltonian at the end of each round of the joint free evolution of the target and the detector systems~\\cite{CoolingQubit,FockState}. It is more likely to realize a unit-success-probability cooling but suffers from a much slower cooling-rate than CM, indicating more number of measurements towards the ground-state cooling. To compromise the cooling rate and the successful probability, the interpolating-configuration of the conditional and unconditional measurements becomes an optimization problem.\n\nThe integration of a small-scale quantum circuit with a classical optimizer (e.g., the neural network) provides a design paradigm by specifying a sequence of parametrized quantum operations that are well suited to implement robust and high-fidelity algorithms. Many reinforcement learning (RL) algorithms constructed by the neural network, that demonstrated remarkable capabilities in the board and video games~\\cite{GameGoNN,GameGoWithoutHuman,GameChess,HumanLevelControl}, have substantiated widely and timely interest in studying quantum physics~\\cite{MLAndPhysics}, such as quantum error correction~\\cite{ContinuousErrorCorrection,ErrorCorrectionFeedback}, quantum simulation~\\cite{DigitalQuantumSimulation,SimulationHybridNetwork}, and quantum state preparation~\\cite{PhaseTransition,DifferentPhaseControl,StatePreparationRL}, to name a few. The proximal policy optimization (PPO) algorithm, as a typical RL algorithm with a significant sample complexity, scalability, and robustness for hyperparameters, has proven to be a fruitful tool in quantum optimization control~\\cite{ModelFreeControlPPO,FeedbackControlPPO,ManyBodyStatePreparationPPO}.\n\nIn this work, we propose a measurement-based cooling architecture as a hybrid sequence of UM and CM strategies. It involves a double optimization: on each local step along the sequence, either UM or CM can be considerably improved by using the timely-optimized measurement intervals; and on the global efficiency of the sequence, its arrangement can be separably optimized through reinforcement learning. 
Particularly, in a typical measurement-based cooling model, i.e., the Jaynes-Cummings (JC) model, where a mechanical resonator (the target system) is coupled to a qubit (the detector system), the conditional and unconditional measurements are alternatively performed to cool down the resonator to its ground state. In parallel to the optimized measurement-interval obtained for CM~\\cite{TwoModeCooling}, we here analytically derive the optimized interval of UM. Then the free-evolution intervals between any neighboring measurements, either UM or CM, can be optimized for cooling. The global sequence of measurements or the implementing order of UM and CM can be further optimized with reinforcement learning. The optimizer is fed with the cooperative cooling performance, a function of the average population of the resonator, the successful probability of the detector in the measured subspace, and the fidelity of the resonator in the ground state, as we defined to rank the comprehensive efficiencies of various sequences of measurements. Eventually we find an optimal sequence holding an overwhelming advantage over all the others.\n\nThe rest of this work is structured as follows. We briefly revisit the general framework for the cooling protocols based on the conditional and unconditional measurements in Secs.~\\ref{CM} and \\ref{UM}, respectively. In Sec.~\\ref{UM}, an analytical expression of the optimized measurement-interval is obtained for the unconditional measurement. In Sec.~\\ref{Optimization}, we introduce the interpolation diagram for the cooling architecture based on these two measurements, define the cooperative cooling performance to comprehensively quantify various strategies, and present the optimized result through reinforcement learning. The PPO algorithm and the optimal-control procedure are provided in Appendixes~\\ref{PPOSec} and \\ref{OptSequence}, respectively. The whole work is discussed and summarized in Sec.~\\ref{Conclusion}.\n\n\\section{Conditional and unconditional measurements}\\label{CMandUM}\n\n\\subsection{Conditional Measurement}\\label{CM}\n\nThe cooling-by-measurement framework is typically established on the JC model, whose Hamiltonian in the rotating frame with respect to $H_0=\\omega_a(|e\\rangle\\langle e|+a^\\dagger a)$ reads\n\\begin{equation}\\label{Ham}\nH=\\Delta|e\\rangle\\langle e|+g(a^\\dagger\\sigma_-+a\\sigma_+).\n\\end{equation}\nHere $\\Delta\\equiv\\omega_e-\\omega_a$ is the detuning between the atomic level-spacing $\\omega_e$ and the to-be-cooled resonator frequency $\\omega_a$ and $|\\Delta|\\ll\\omega_e, \\omega_a$. $g$ is the coupling strength between the detector (qubit) and the target resonator. Pauli matrices $\\sigma_-$ and $\\sigma_+$ denote the transition operators of the qubit; and $a$ ($a^\\dagger$) represents annihilation (creation) operator of the resonator. The cooling process is described by a sequence of piecewise joint evolutions of the resonator and the detector, that are interrupted by instantaneous projective measurements on a particular subspace of the detector.\n\nThe conditional measurement-based cooling is characterised by fixing the subspace as the ground state $|g\\rangle$. Initially, the resonator is in a thermal-equilibrium state $\\rho_a^{\\rm th}$ with a finite temperature $T$, while the detector qubit starts from the ground state. 
Then the overall initial state has the form of\n\\begin{equation}\\label{InitialState}\n\\rho_{\\rm tot}(0)=|g\\rangle\\langle g|\\otimes\\rho_a^{\\rm th}.\n\\end{equation}\nTo cool down the resonator, a conditional or selective measurement $M_g=|g\\rangle\\langle g|$ is implemented on the detector after a free-evolution with interval $\\tau$, when the overall state becomes $\\rho_{\\rm tot}(\\tau)=\\exp(-iH\\tau)\\rho_{\\rm tot}(0)\\exp(iH\\tau)$. And then the conditional measurement yields a nondeterministic result:\n\\begin{equation}\n\\rho_a(\\tau)=\\frac{\\langle g|\\rho_{\\rm tot}(\\tau)|g\\rangle}{{\\rm Tr}\\left[\\langle g|\\rho_{\\rm tot}(\\tau)|g\\rangle\\right]}.\n\\end{equation}\nIn regard of the time-dependence of the interval $\\tau$, the conditional cooling protocols can be categorized into the equal-time-spacing and unequal-time-spacing strategies~\\cite{OneModeCooling,TwoModeCooling}. The unequal-time-spacing strategy has demonstrated a dramatic advantage on the cooling performance by setting the measurement interval as the inverse of the thermal Rabi frequency $\\tau_{\\rm opt}^c(t)=1\/\\Omega_{\\rm th}(t)$, where $\\Omega_{\\rm th}(t)\\equiv g\\sqrt{\\bar{n}(t)}=g\\sqrt{\\sum_nnp_n(t)}$ with $p_n(t)$ denoting the current population of the resonator on the Fock state $|n\\rangle$. To attain an optimal cooling performance, our cooling architecture in this work employs the unequal-time-spacing strategy. After $N$ rounds of free-evolution and instantaneous-measurement described by an ordered time sequence $\\{\\tau_1(t_1), \\tau_2(t_2), \\cdots, \\tau_N(t_N)\\}$ with $t_{i>1}=\\sum_{j=1}^{j=i-1}\\tau_j$ and $\\tau_1\\equiv1\/[g\\sqrt{{\\rm Tr}(\\hat{n}\\rho_a^{\\rm th})}]$, the resonator state becomes\n\\begin{equation}\\label{rhoace}\n\\rho_a\\left(t=\\sum_{i=1}^N\\tau_i\\right)=\\frac{\\sum_{n}\\prod_{i=1}^N|\\alpha_n(\\tau_i)|^2p_n|n\\rangle\\langle n|}{P_g(N)},\n\\end{equation}\nwhere $p_n=\\exp(-n\\hbar\\omega_a\/k_BT)\/Z$ with $Z\\equiv1\/[1-\\exp(-\\hbar\\omega_a\/k_BT)]$ is the initial population,\n\\begin{equation}\nP_g(N)=\\sum_n\\prod_{i=1}^N|\\alpha_n(\\tau_i)|^2p_n\n\\end{equation}\nis the survival or successful probability of CM, and\n\\begin{equation}\\label{CMCoolingCoeff}\n\\left|\\alpha_n(\\tau_i)\\right|^2=\\frac{\\Omega_n^2-g^2n\\sin^2(\\Omega_n\\tau_i)}{\\Omega_n^2}\n\\end{equation}\nis the cooling coefficient with $\\Omega_n=\\sqrt{g^2n+\\Delta^2\/4}$ denoting the $n$-photon Rabi frequency. The cooling coefficient in Eq.~(\\ref{rhoace}) determines the average population\n\\begin{equation}\n\\bar{n}(t)={\\rm Tr}\\left[\\hat{n}\\rho_a(t)\\right], \\quad \\hat{n}\\equiv a^{\\dagger}a,\n\\end{equation}\nby reshaping the population distributions over all the Fock states. Note in Eq.~(\\ref{CMCoolingCoeff}), the $0$th cooling coefficient is unit, $|\\alpha_0(\\tau_i)|^2=1$, meaning that the ground-state population is always under protection during the cooling process. The populations on the high-occupied Fock states are gradually reduced by $|\\alpha_n(\\tau_i)|^N<1$ with increasing $N$ unless $\\sin(\\Omega_n\\tau_i)=0$ or $\\Omega_n\\tau_i=j\\pi$ with integer $j$.\n\n\\subsection{Unconditional Measurement}\\label{UM}\n\nExpanding the measurement subspace to the whole space of the detector system, we can transform a conditional-measurement strategy into its unconditional-measurement counterpart. 
After a period of joint unitary evolution under the Hamiltonian~(\\ref{Ham}), the overall state can be written as\n\\begin{equation}\n\\rho_{\\rm tot}(\\tau)=\\bigoplus_np_n\\begin{pmatrix}|\\alpha_n(\\tau)|^2 & \\chi_n(\\tau)\\\\\n\\chi^*_n(\\tau)& |\\beta_n(\\tau)|^2\\end{pmatrix},\n\\end{equation}\nwhere\n\\begin{equation*}\n\\begin{aligned}\n&\\chi_n(\\tau)\\equiv\\frac{-g\\sqrt{n}[\\Delta\\sin^2(\\Omega_n\\tau)-i\\Omega_n\\sin(2\\Omega_n\\tau)]}{2\\Omega_n^2},\\\\\n&|\\beta_n(\\tau)|^2\\equiv\\frac{g^2n\\sin^2(\\Omega_n\\tau)}{\\Omega_n^2}.\n\\end{aligned}\n\\end{equation*}\nThe UM can be implemented by tracing out the degrees of freedom of the detector, ${\\rm Tr}_d[\\rho_{\\rm tot}(\\tau)]$. The state of the resonator then reads\n\\begin{equation}\\label{umrhoa}\n\\rho_a(\\tau)=\\sum_{n\\geq0}\\left[|\\alpha_n(\\tau)|^2p_n+|\\beta_{n+1}(\\tau)|^2p_{n+1}\\right]|n\\rangle\\langle n|.\n\\end{equation}\nThus, after a nonselective measurement, i.e., a measurement without recording the output, a population transfer occurs in the target resonator as\n\\begin{equation}\\label{pn}\np_n\\rightarrow|\\alpha_n(\\tau)|^2p_n+|\\beta_{n+1}(\\tau)|^2p_{n+1}.\n\\end{equation}\nIn contrast to the CM strategy, which is characterized by the single cooling coefficient $\\alpha_n$ in Eq.~(\\ref{CMCoolingCoeff}), the UM strategy also depends on the additional cooling coefficient $\\beta_n$. According to Eq.~(\\ref{pn}), the initial population of the ground state $p_0$ becomes $|\\alpha_0(\\tau)|^2p_0+|\\beta_1(\\tau)|^2p_1=p_0+|\\beta_1(\\tau)|^2p_1$. This indicates that part of the population of the first excited state of the resonator is transferred to the ground state, whose original population is left untouched. Under repeated rounds of nonselective measurements, it is intuitive to expect that the populations of the higher excited states of the resonator keep moving to lower states and eventually to the ground state. In practice, however, the cooling efficiency is limited, since the population of certain highly excited states can remain fixed or even be enhanced when $|\\alpha_n(\\tau)|^2=1$, i.e., when $\\Omega_n\\tau=j\\pi$ with integer $j$ (it is enhanced whenever, in addition, $|\\beta_{n+1}(\\tau)|^2>0$). This problem can be addressed by employing the unequal-time-spacing strategy: a time-varying $\\tau$ ensures that the populations of all the excited states are gradually reduced.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{OptimalInterval.eps}\n\\caption{The average populations of the resonator after a single unconditional measurement as a function of the measurement-interval $\\tau$ under various initial temperatures. (a) $T=0.01$ K, (b) $T=0.1$ K, (c) $T=1.0$ K and (d) $T=10$ K. The vertical black-dashed lines indicate the analytical results for the optimized intervals given by Eq.~(\\ref{Optimaltau}). The parameters for the blue-solid curves are set as $g=0.04\\omega_a$ and $\\Delta=0.01\\omega_a$. }\\label{OptimalInterval}\n\\end{figure}\n\nAnalogous to the CM case~\\cite{TwoModeCooling}, the cooling efficiency of the UM strategy depends strongly on the choice of $\\tau$ between neighboring measurements. This can be observed in Fig.~\\ref{OptimalInterval}, which shows the average population of the resonator $\\bar{n}$ after a single measurement on the detector. With initial temperatures spanning four orders of magnitude, the $\\tau$-dependence of $\\bar{n}$ exhibits similar patterns. 
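These diagonal population updates are straightforward to reproduce numerically. The following sketch is a minimal illustration (not the code used to produce the figures): it assumes a truncated Fock space, the resonant case $\\Delta=0$, $g$ and $\\tau$ expressed in units of $\\omega_a$ and $1\/\\omega_a$, and an illustrative initial occupation $\\bar{n}_{\\rm th}=8.85$; it implements the CM update behind Eq.~(\\ref{rhoace}) and the UM update of Eq.~(\\ref{pn}), and scans $\\bar{n}$ after a single UM as a function of $\\tau$, i.e., the quantity plotted in Fig.~\\ref{OptimalInterval}.
\\begin{verbatim}
import numpy as np

# Minimal sketch of the diagonal population updates (illustration only):
# truncated Fock space, resonant case Delta = 0, g and tau in units of
# omega_a and 1/omega_a, illustrative thermal occupation nbar_th = 8.85.

n_max, g, nbar_th = 200, 0.04, 8.85
n = np.arange(n_max)
Omega = g * np.sqrt(n)                        # n-photon Rabi frequencies

p_th = (nbar_th / (1.0 + nbar_th)) ** n       # thermal populations p_n
p_th /= p_th.sum()

def alpha2(tau):                              # |alpha_n(tau)|^2, Eq. (CMCoolingCoeff)
    return np.cos(Omega * tau) ** 2

def beta2(tau):                               # |beta_n(tau)|^2 (resonant case)
    return np.sin(Omega * tau) ** 2

def cm_step(p, tau):                          # conditional measurement
    w = alpha2(tau) * p
    return w / w.sum(), w.sum()               # renormalized p_n and P_g factor

def um_step(p, tau):                          # unconditional measurement, Eq. (pn)
    q = alpha2(tau) * p
    q[:-1] += beta2(tau)[1:] * p[1:]
    return q

# Average population after a single UM as a function of tau
taus = np.linspace(1.0, 60.0, 600)            # in units of 1/omega_a
nbar = np.array([np.dot(n, um_step(p_th, t)) for t in taus])
print(taus[nbar.argmin()], nbar.min())        # location and value of the minimum
\\end{verbatim}
The location of the minimum found by such a scan is what the analytical estimate derived next is meant to capture.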
It is found that the average population declines gradually to a minimal point (the magnitude of the relative reduction decreases with increasing temperature) at an optimized measurement-interval $\\tau_{\\rm opt}^u$, then rebounds quickly and ends up with a stochastic fluctuation around a value slightly lower than its initial population $\\bar{n}_{\\rm th}\\equiv{\\rm Tr}(\\hat{n}\\rho_a^{\\rm th})$.\n\nTo make full use of the cooling strategy, it is desired to analytically find the optimized interval $\\tau_{\\rm opt}^u$ as a functional of the current state and the model parameters. By virtue of Eq.~(\\ref{umrhoa}) and under the resonant situation, the average population after a single unconditional measurement reads\n\\begin{equation}\\label{nbar}\n\\begin{aligned}\n\\bar{n}&=\\sum_{n\\geq0}n\\left(p_n\\cos^2\\Omega_n\\tau+p_{n+1}\\sin^2\\Omega_{n+1}\\tau\\right)\\\\\n&=\\eta+\\frac{1}{2Z}\\sum_{n\\geq0}ne^{-nx}(\\cos2\\Omega_n\\tau-e^{-x}\\cos2\\Omega_{n+1}\\tau),\n\\end{aligned}\n\\end{equation}\nwhere $\\eta\\equiv(\\bar{n}_{\\rm th}+2\\bar{n}^2_{\\rm th})\/(2+2\\bar{n}_{\\rm th})$ and $x\\equiv\\hbar\\omega_a\/k_BT$. Since the weight function $ne^{-nx}$ in Eq.~(\\ref{nbar}) is dominant around $n_d\\equiv k_BT\/\\hbar\\omega_a$, the variables $\\Omega_n$ and $\\Omega_{n+1}$ could thus be expanded around $n=n_d$. To the first order of $n-n_d$, we have\n\\begin{equation*}\n\\begin{aligned}\n&\\cos2\\Omega_n\\tau-e^{-x}\\cos2\\Omega_{n+1}\\tau \\\\\n&\\approx\\cos2\\Omega_{d}\\tau-e^{-x}\\cos2\\Omega_{d+1}\\tau+(n-n_d) \\\\\n&\\times\\left(-\\frac{\\Omega_d\\tau\\sin2\\Omega_d\\tau}{n_d}+e^{-x}\\frac{\\Omega_{d+1}\\tau\\sin2\\Omega_{d+1}\\tau}{n_d+1}\\right),\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation}\\label{Omegad}\n\\Omega_d\\equiv g\\sqrt{n_d}, \\quad \\Omega_{d+1}\\equiv g\\sqrt{n_d+1}.\n\\end{equation}\ndefine the dominant Rabi frequencies. Under the approximations that $e^{-x}=\\bar{n}_{\\rm th}\/(\\bar{n}_{\\rm th}+1)\\approx1$ and $\\Omega_{d+1}\/(n_d+1)\\approx\\Omega_d\/n_d$ appropriate for a moderate temperature, the average population in Eq.~(\\ref{nbar}) can be expressed by\n\\begin{equation}\\label{approxn1}\n\\bar{n}\\approx\\eta+\\sin\\Omega_-\\tau\\left(\\bar{n}_{\\rm th}\\sin\\Omega_+\\tau+\\eta'\\Omega_d\\tau\\cos\\Omega_+\\tau\\right),\n\\end{equation}\nwhere $\\Omega_{\\pm}\\equiv\\Omega_{d+1}\\pm\\Omega_d$ and $\\eta'\\equiv\\bar{n}_{\\rm th}(1+2\\bar{n}_{\\rm th}-n_d)\/n_d$. Note we have applied the formulas about the geometric series $\\sum_{n=0}^\\infty ne^{-nx}=e^x\/(e^x-1)^2$ and $\\sum_{n=0}^\\infty n^2e^{-nx}=e^x(1+e^x)\/(e^x-1)^3$. $\\bar{n}$ in Eq.~(\\ref{approxn1}) depends predominantly on the fast-frequency terms characterized by $\\Omega_+$ in a moderate time step $\\tau$. In the regime of $T\\sim 0.1-10$ K, the term weighted by $\\eta'\\Omega_d\\tau$ overwhelms that weighted by $\\bar{n}_{\\rm th}$. And this advantage expands with a larger $\\tau_{\\rm opt}^u$ given the initial or effective temperature of the resonator becomes lower, as evidenced by Fig.~\\ref{OptimalInterval}. We can therefore focus on the last term in Eq.~(\\ref{approxn1}) to determine how to minimize $\\bar{n}$. Consequently, we have\n\\begin{equation}\\label{Optimaltau}\n\\tau_{\\rm opt}^u=\\frac{\\pi}{\\Omega_d+\\Omega_{d+1}}.\n\\end{equation}\nThis result can be extended to the off-resonant situation by modifying the definition of $\\Omega_d$ in Eq.~(\\ref{Omegad}) to be $\\sqrt{g^2n_d+\\Delta^2\/4}$. 
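For orientation, Eq.~(\\ref{Optimaltau}) can be evaluated directly. The snippet below is an illustrative evaluation (not part of the original analysis), using the parameters adopted later for Fig.~\\ref{CoolingPerformance} ($\\omega_a=1.4$ GHz, $g=0.04\\omega_a$, $T=0.1$ K) and interpreting $\\omega_a$ as an angular frequency; it yields $n_d\\approx 9.4$ and $\\tau_{\\rm opt}^u\\approx 9$ ns.
\\begin{verbatim}
import numpy as np

# Illustrative numerical evaluation of Eq. (Optimaltau); omega_a is
# interpreted here as an angular frequency.

hbar, kB = 1.0545718e-34, 1.380649e-23        # SI values
omega_a = 1.4e9                               # resonator frequency (rad/s)
g = 0.04 * omega_a                            # coupling strength
T = 0.1                                       # initial temperature (K)

n_d = kB * T / (hbar * omega_a)               # dominant Fock number, ~9.4
tau_opt = np.pi / (g * (np.sqrt(n_d) + np.sqrt(n_d + 1.0)))
print(n_d, tau_opt)                           # ~9.4 and ~8.9e-9 s
\\end{verbatim}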
The vertical black-dashed lines in Fig.~\\ref{OptimalInterval} denote the optimal measurement-intervals given by Eq.~(\\ref{Optimaltau}). The analytical expression is well suited to estimate the minimum values of the average population over a wide temperature range. As demonstrated by both the analytical and the numerical results, a shorter measurement-interval is required to cool down a higher-temperature resonator. In JC-like models, coupling a high-temperature resonator to a qubit induces a faster transition from the ground state to the excited state of the qubit; a quick measurement interrupts this process, which has a negative effect on cooling.\n\nSimilar to the optimized interval $\\tau_{\\rm opt}^c(t)$ for the conditional measurement strategy~\\cite{TwoModeCooling}, here $\\tau_{\\rm opt}^u$ can also be updated by substituting the time-evolved $\\Omega_d$ and $\\Omega_{d+1}$ into Eq.~(\\ref{Optimaltau}). The dominant Fock-state number $n_d$ determining $\\Omega_d$ in Eq.~(\\ref{Omegad}) can be understood as a function of the effective temperature during the cooling procedure, which relies uniquely on the time-varying $\\bar{n}(t)$ or $p_n(t)$.\n\n\\section{Measurement optimization}\\label{Optimization}\n\nThe thermal resonator can be steadily cooled down by the unconditional measurement strategy equipped with the optimized measurement-interval in Eq.~(\\ref{Optimaltau}), and the cooling is performed with unit success probability in the absence of postselection over the measurement outcome. As demonstrated by the blue-solid line with circle markers and the orange-dotted line in Figs.~\\ref{CoolingPerformance}(a) and \\ref{CoolingPerformance}(c), the complete CM and UM cooling strategies dominate in the cooling rate and in the success probability, respectively. It is therefore desirable to find an optimized sequence of measurements, a hybrid of UM and CM, that balances the cooling efficiency against the experimental cost. In this section, we present an algorithm assisted by reinforcement learning that generates the optimized control sequence indicating when each type of measurement is performed.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{Model.eps}\n\\caption{(a) RL-optimization diagram for cooling by measurement. An agent constructed from a neural network interacts with an environment. The agent chooses an action (CM or UM strategy) according to the current state of the resonator. The environment then takes this action and returns both the post-measurement state and the reward $R$ derived from the cooperative cooling performance $\\mathcal{C}$ in Eq.~(\\ref{cooperative}). (b) Circuit model for our cooling algorithm based on the optimized UM and CM measurements. Starting from the thermal state, the resonator (the upper line) is gradually cooled down to its ground state by implementing the measurements on the detector (the lower line), which starts from the ground state. The measurement sequence can be obtained by reinforcement learning. }\\label{Model}\n\\end{figure}\n\nThe RL-optimization diagram is shown in Fig.~\\ref{Model}(a), consisting of the ``agent'' part, based on neural networks, and the ``environment'' part, which performs the cooling-by-measurement actions on the quantum system. In reinforcement learning, the agent is characterized by a set of trainable parameters, which are learned and updated using the data collected through its interaction with the environment. 
In our architecture, the agent would choose an action, i.e., the conditional or the unconditional measurement, on the resonator, given its current state. Then the environment takes this action and returns the updated resonator-state $\\rho_a$ and a ``reward'' $R$ after the measurement. The reward is generated by a function to estimate whether the action is good or bad, that would be used to update the parameters of the agent. During one ``episode'', the agent would interact with the environment for $N$ times, i.e., the number of measurements during the global sequence, which has been fixed from the beginning. A total reward is eventually counted. And the agent is trained to maximize the total reward through artificial episodes until the total reward converges. Then the agent could provide a realistic control sequence of the measurement strategies with their own (optimized) measurement intervals. The cooling-by-measurement sequence can be demonstrated by a circuit model in Fig.~\\ref{Model}(b). Rounds of free-evolutions and measurements are successively arranged. The evolution time between two neighboring measurements depends on the measurement strategy and the resonator state at the end of the last round. We follow the PPO algorithm in the agent structure, the data collecting methods, and the parameters updating, whose details can be found in Appendix~\\ref{PPOSec}. The interpolation algorithm of UM and CM and the implementation of the measurement sequence are illustrated by a pseudocode in Appendix~\\ref{OptSequence}.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{CoolingPerformance.eps}\n\\caption{(a) Average population, (b) Fidelity of the resonator in its ground state, (c) Successful probability, and (d) Cooperative cooling performance under various sequences of cooling-by-measurement. The blue-solid lines with circle markers labeled by $S_u$ and the orange-dotted lines labeled by $S_c$ indicate the sequences completely consisting of UM and CM strategies, respectively. The green-solid lines, the red-dashed lines, and the purple-dot-dashed lines describe the hybrid sequences shown in (e), (f), and (g), and labeled by $S_1$, $S_2$, and $S_4$, respectively. The brown-solid lines with triangle markers labeled by $S_{\\rm opt}$ is the RL-optimized sequence presented in (h). The parameters are set as $\\omega_a=1.4$ GHz, $T=0.1$ K, $g=0.04\\omega_a$, and $\\Delta=0.01\\omega_a$. }\\label{CoolingPerformance}\n\\end{figure}\n\nThe performance of any cooling-by-measurement strategy can be characterized or evaluated by the cooling ratio $\\bar{n}(t)\/\\bar{n}_{\\rm th}$, the successful probability $P_g$ of the detector in the measured subspace, and the fidelity of the resonator in its ground state $F=\\langle n=0|\\rho_a(t)|n=0\\rangle$~\\cite{OneModeCooling}. To compare various interpolation sequences of UM and CM in cooling performance and to evaluate the figure of merit for the reinforcement learning, we here define a cooperative cooling quantifier as\n\\begin{equation}\\label{cooperative}\n\\mathcal{C}=FP_g\\log_{10}{\\frac{\\bar{n}_{\\rm th}}{\\bar{n}(t)}}.\n\\end{equation}\nNotably, the logarithm function is used to obtain a positive value with almost the same order as $F$ and $P_g$ in magnitude. Then $\\bar{n}(t)\/\\bar{n}_{\\rm th}$, $P_g$, and $F$ could be considered in a balanced manner. In fact, the average population could be reduced by several (normally less than $10$) orders in magnitude under an efficient cooling protocol. 
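To make the episode structure concrete, the following self-contained sketch is a simplified stand-in for the actual agent--environment implementation (the ``policy'' is just a fixed bit string rather than a trained network, and the UM interval uses $\\bar{n}(t)$ as a crude proxy for the dominant Fock number $n_d$); it evaluates a given measurement sequence and returns the total reward of Eq.~(\\ref{cooperative}).
\\begin{verbatim}
import numpy as np

# Sketch of one cooling "episode" for a fixed measurement sequence
# (1 = CM, 0 = UM), using the resonant-case population updates of the
# previous section on a truncated Fock space.  Illustration only.

n_max, g, nbar_th = 200, 0.04, 8.85           # g in units of omega_a
n = np.arange(n_max)
Omega = g * np.sqrt(n)                        # n-photon Rabi frequencies

def episode(sequence):
    p = (nbar_th / (1.0 + nbar_th)) ** n      # thermal populations
    p /= p.sum()
    Pg = 1.0                                  # accumulated success probability
    for action in sequence:
        nbar_now = np.dot(n, p)
        if action == 1:                       # CM, tau = 1/Omega_th
            tau = 1.0 / (g * np.sqrt(nbar_now))
            w = np.cos(Omega * tau) ** 2 * p
            Pg *= w.sum()
            p = w / w.sum()
        else:                                 # UM, tau from Eq. (Optimaltau),
            nd = nbar_now                     # with nbar(t) as proxy for n_d
            tau = np.pi / (g * (np.sqrt(nd) + np.sqrt(nd + 1.0)))
            a, b = np.cos(Omega * tau) ** 2, np.sin(Omega * tau) ** 2
            q = a * p
            q[:-1] += b[1:] * p[1:]
            p = q
    F = p[0]                                  # ground-state fidelity
    return F * Pg * np.log10(nbar_th / np.dot(n, p))   # Eq. (cooperative)

print(episode([1, 0, 1, 0] * 4))              # one 16-step hybrid sequence
\\end{verbatim}
Such an evaluator is all that is needed to rank candidate sequences; the PPO agent described in Appendix~\\ref{PPOSec} automates the search over them.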
For reference, in the EIT cooling~\\cite{EITCoolingExperiment2020}, $\\log_{10}[\\bar{n}_{\\rm th}\/\\bar{n}(t)]\\sim(2, 3)$; and in the resolved sideband cooling~\\cite{SidebandCooling2016}, $\\log_{10}[\\bar{n}_{\\rm th}\/\\bar{n}(t)]\\sim(4, 5)$. Equation~(\\ref{cooperative}) makes it explicit that a lower average population, a larger success probability, and a higher ground-state fidelity all yield a better cooling performance.\n\nWe consider cooling a gigahertz mechanical microresonator~\\cite{MechanicalResonator1,MechanicalResonator2} with various interpolation sequences of UM and CM. Using the resonator frequency $\\omega_a=1.4$ GHz, the coupling strength between the resonator and the detector $g=0.04\\omega_a$, and the initial temperature of the resonator $T=0.1$ K, the average population starts from $\\bar{n}_{\\rm th}=8.85$. The cooling performances under the sequences consisting entirely of UM and of CM are shown by the blue-solid lines with circle markers and the orange-dotted lines in Figs.~\\ref{CoolingPerformance}(a)-(d), labeled by $S_u$ and $S_c$, respectively. Under the conditional measurement strategy with $N=16$, the average population $\\bar{n}$ is reduced by five orders of magnitude [see Fig.~\\ref{CoolingPerformance}(a)] and the ground-state fidelity exceeds $F>0.9999$ [see Fig.~\\ref{CoolingPerformance}(b)], at the cost of a success probability below $10\\%$ [see Fig.~\\ref{CoolingPerformance}(c)]. In sharp contrast, under $N=16$ unconditional measurements, $\\bar{n}$ is merely reduced to $\\bar{n}\\approx 3.36$ with a moderate fidelity $F\\approx0.78$, despite a unit success probability. With respect to all the individual quantifiers, i.e., $\\bar{n}$, $F$, and $P_g$, the results under the hybrid sequences of UM and CM labelled by $S_k$, $k=1,2,4$, lie between the two limits $S_u$ and $S_c$. As illustrated by Figs.~\\ref{CoolingPerformance}(e), (f), and (g), the three sequences start with CM steps (indicated by $1$), switch to UM (indicated by $0$) after $k$ rounds of free evolution and measurement, switch back to CM after a single round, and then repeat this periodic arrangement. Compared with the complete UM sequence, the interpolation with CM promotes the cooling efficiency in $\\bar{n}$. A larger $k$ gives rise to a smaller proportion of unconditional measurements and to a lower probability $P_g$ that the detector remains in its measured subspace.\n\nIn terms of the cooperative cooling performance $\\mathcal{C}$ [see Fig.~\\ref{CoolingPerformance}(d)], it is found that $\\mathcal{C}(S_1)>\\mathcal{C}(S_2)>\\mathcal{C}(S_4)>\\mathcal{C}(S_u)$ and yet $\\mathcal{C}(S_2)\\approx\\mathcal{C}(S_c)$. Interpolation sequences can therefore have a better cooperative cooling performance than the pure conditional-measurement-based protocol. The dependence of $\\mathcal{C}$ on the proportion of CM steps in an arbitrary hybrid sequence need not be monotonic. We are thus motivated to find an optimized sequence with the help of the PPO algorithm. A typical optimized sequence of cooling strategies, labeled by $S_{\\rm opt}$, is described in Fig.~\\ref{CoolingPerformance}(h). 
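As a side note, the episode evaluator sketched in the previous section makes such comparisons easy to reproduce for any fixed sequence; the snippet below is again only an illustration (it assumes the \\texttt{episode()} function defined earlier is in scope, and the $S_k$ bit strings are our reading of the periodic arrangements described above), whereas the numbers in Fig.~\\ref{CoolingPerformance} come from the full analysis.
\\begin{verbatim}
# Assumes episode() from the earlier sketch is defined (illustration only).
sequences = {
    "S_u": [0] * 16,                      # all unconditional
    "S_c": [1] * 16,                      # all conditional
    "S_1": ([1, 0] * 8),                  # k = 1 interpolation
    "S_2": ([1, 1, 0] * 6)[:16],          # k = 2 interpolation
    "S_4": ([1, 1, 1, 1, 0] * 4)[:16],    # k = 4 interpolation
}
for name, seq in sequences.items():
    print(name, episode(seq))             # cooperative performance C
\\end{verbatim}
The RL-optimized sequence $S_{\\rm opt}$ discussed next outperforms all of these fixed arrangements.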
With four orders reduction in the average population (close to the cooling efficiency provided by $S_c$), an almost unit ground-state fidelity $F>0.9999$, and a moderate successful probability $P_g\\approx30\\%$ (much larger than that by $S_c$), the optimized sequence achieves an overwhelming cooperative cooling performance $\\mathcal{C}=2.73$ according to Eq.~(\\ref{cooperative}) over all the other measurement sequences. In other words, we attained the compromise of the cooling rate and the successful probability through the reinforcement learning. Notably, the optimized sequence is not unique, yet the current results of $\\bar{n}$, $F$, $P_g$, and $\\mathcal{C}$ are almost invariant after dozen measurements as long as there is one CM in the first $5$ rounds.\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{DifferentTemperature.eps}\n\\caption{(a) Average populations and (b) Cooperative cooling performance under the RL-optimized cooling algorithm with various initial temperatures. (c), (d), (e), and (f) describe the optimized sequences of UM and CM with $T=0.05$ K, $T=0.1$ K, $T=0.2$ K, and $T=0.3$ K, respectively. The other parameters are the same as those in Fig.~\\ref{CoolingPerformance}. }\\label{DifferentTemperature}\n\\end{figure}\n\nThe RL-optimized algorithm applies to a wide range of the initial temperature of the resonator. Starting from different $\\bar{n}_{\\rm th}$ determined by the temperature, the average populations could be reduced by three to five orders in magnitude under the optimized measurement sequences, as demonstrated in Fig.~\\ref{DifferentTemperature}(a). It is found that under a higher temperature, it is harder to suppress the transitions between the ground state and the excited states of the resonator. Then both the relative magnitude in population reduction [see Fig.~\\ref{DifferentTemperature}(a)] and the cooperative cooling performance [see Fig.~\\ref{DifferentTemperature}(b)] manifest a monotonically decreasing behavior with increasing temperatures.\n\nSimilar to Fig.~\\ref{CoolingPerformance}(h), here we present in Figs.~\\ref{DifferentTemperature}(c), (d), (e), and (f) the optimized sequences fully determined by the PPO algorithm, which still outperform any regular interpolated sequence in cooling quantifier $\\mathcal{C}$. Comparing these four sub-figures corresponding to various temperatures, it is interesting to find that a larger portion of the unconditional measurements is required in the optimized sequence for a higher temperature. It is consistent with the fact that under CM the successful probability $P_g$ to find a detector in its ground state decreases exponentially with increase of the temperature of the target resonator. Then more UMs are used to save a rapidly declining $P_g$ for obtaining a moderate $\\mathcal{C}$. In addition, for $T>0.05$ K, the RL-optimized sequence always starts from a conditional measurement, which is important to have a significant cooling rate for $\\bar{n}$ during the first several rounds of the whole sequence.\n\nIn our cooling algorithm, both UM and CM are performed on the detector following an optimized interval of unitary evolution. The nonunitary evolution induced by the conditional measurements could remarkably reduce the average population of the resonator, at the cost of a low successful probability [e.g., see the first several rounds in Fig~\\ref{CoolingPerformance}(c)]. 
The unconditional measurements reduce the average population at a much slower rate, yet with unit success probability. Thus, in general, we expect to see more UMs than CMs in the first several rounds of an RL-optimized sequence, and more CMs than UMs in the remaining rounds.\n\n\\section{Discussion and conclusion}\\label{Conclusion}\n\nWe emphasize again that the preceding hybrid cooling sequences based on the conditional and unconditional measurements can be optimized both globally and locally. Globally, we use reinforcement learning to find the optimal ordering of UM and CM. Locally, the measurement intervals are selected so as to minimize the average population $\\bar{n}$ after a single measurement. Taking the unconditional case in Eq.~(\\ref{Optimaltau}) as an example, $\\tau_{\\rm opt}^u(t)$ need not be obtained through an instantaneous feedback mechanism in a realistic implementation. The measurement sequence $\\{\\tau_1(t_1), \\tau_2(t_2), \\cdots, \\tau_N(t_N)\\}$ can actually be computed before the cooling measurements are performed: $\\tau_1(t_1)$ depends on the initial temperature as well as on the population distribution $p_n$, and $\\tau_k(t_k)$ with $k\\geq2$ can be calculated from the effective temperature, which is uniquely determined by the dynamics of $p_n(t)$ through Eq.~(\\ref{Omegad}). In other words, we can avoid the feedback errors and imprecision induced by detecting the resonator state during the experiment.\n\nIn summary, we present an optimized cooling architecture based on the sequential arrangement of conditional and unconditional measurements. We analyze and compare the advantages and disadvantages of CM and UM with respect to the cooling rate and the success probability. We obtain analytically, for the first time, an expression for the optimized unconditional measurement-interval, $\\tau_{\\rm opt}^u=\\pi\/(\\Omega_d+\\Omega_{d+1})$, in parallel to that of the conditional measurement~\\cite{TwoModeCooling}. Here the dominant Rabi frequency $\\Omega_d$ depends on the dominant Fock state $n_d=k_BT\/(\\hbar\\omega_a)$ of the resonator distribution and on the coupling strength between the target system and the detector. Balancing the advantages of the two measurement strategies gives rise to an optimized hybrid cooling algorithm assisted by reinforcement learning. This is justified by the cooperative cooling performance, which we define to quantify the overall cooling efficiency of an arbitrary cooling-by-measurement strategy. Our work therefore pushes measurement-based cooling to a previously unattained level of efficiency and feasibility. It also offers an appealing interdisciplinary application of quantum control and artificial intelligence.\n\n\\section*{Acknowledgments}\n\nWe acknowledge financial support from the National Science Foundation of China (Grants No. 11974311 and No. U1801661).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGalaxy ecology is an attempt to understand the environmental factors that affect galaxy formation and evolution (e.g. Balogh et al. 2004, Wetzel et al. 2012). There is strong evidence for a relationship between galaxy properties and the physical conditions surrounding them (e.g. Martinez et al. 2008). At least three important pieces of evidence indicate that the environment affects galaxy evolution in clusters. 
First, the morphology-density relation shows that galaxy morphologies\ndepend on the density (or cluster-centric radius) -- the\nspiral fraction is larger at cluster outskirts than at the center (e.g. Dressler 1980; Whitmore, Gilmore \\& Jones 1993).\nFurthermore, the fraction of star-forming galaxies decreases with the increasing\ndensity of the cluster environment (e.g. Balogh et al. 2004; Kauffmann et al. 2004).\nFinally, there is a correlation between colors and environment -- the fraction of\nred galaxies decreases toward the outer part of clusters \n(e.g. Balogh et al. 2000; De Propis et al. 2004).\nAlthough these relations are connected, they correspond to\nindependent galaxy observables, and we can understand them as distinct\ngalaxy evolution indicators. Behind them, several physical mechanisms\nmay be present: tidal stripping, mergers, cannibalism,\nstrangulation, {\\it ram} pressure stripping, galaxy harassment, etc. (see e.g. Mo, van den Bosh \\& White 2010 for a good\ndescription of all these transformation processes). \n\nIn particular, the color-radius relation may be crucial for exploring the\nenvironmental effects on galaxies. Because color evolution is mainly completed\nat low redshifts (e.g. Goto et al. 2004), it would be interesting to relate the red galaxy\nfraction to the dynamical state of their host clusters.\nTo probe the color-radius relation one needs to consider \nan important aspect, that is, the very nature of the \nred galaxies, that is whether they are real\npassive objects with no (or little) star formation currently\ntaking place (e.g. Strateva et al. 2001; Baldry et al. 2004).\nUsually, it is assumed that all red galaxies are passive, but this \nneglects the possible contribution of dusty galaxies to the red sequence, which can be significant in an intermediate environment and for low-to-intermediate stellar masses (e.g. S\\'anchez-Bl\\'asquez et al. 2009).\n\nAn additional ingredient for galaxy ecology studies\nis the role of low-luminosity galaxies. In the hierarchical scenario,\nthey have formed from small peaks in the initial power spectrum (e.g. White \\& Rees 1978). \nIf this is the case, they are expected to be the building blocks of the whole formation process (e.g. Evstgneeva et al. 2004).\nWe would find them in great numbers, and they would be less clustered than bright galaxies.\nOn the other hand, some of them may also have formed from tidal debris from galaxy\nmergers or from galaxies that underwent {\\it ram} pressure stripping as they\nfell toward the center of clusters (e.g. Conselice et al. 2002) and, therefore, \nsome of them would be expected at cluster centers (e.g. Trentham 1998, Carrasco et al. 2006). \nHence, to determine the spatial\ndistribution and fraction of low-luminosity galaxies in clusters is important for \nunderstanding different aspects of galaxy formation and evolution models.\n\nIn this work, we present a study of the relation between colors and luminosities of cluster galaxies and\nthe dynamical state of the clusters.\nOur aim is to describe the interplay between galaxy properties and galaxy cluster evolution.\nFor this purpose, \nwe used 10,721 member galaxies of 183 clusters extracted from the Sloan Digital\nSky Survey (SDSS) from a list of NoSOCS and CIRS targets. First, we \napplied statistical tests to classify the clusters into two categories, Gaussian and non-Gaussian, according to their velocity distribution, which we took as an indicator of their dynamical state. 
\nThis was made irrespective of the triaxiality of the cluster halos. Even knowing that the orientation of the velocity ellipsoid is correlated with the large-scale structure and the \nanisotropic nature of infall into clusters (e.g. White et al. 2010), we assumed that these\neffects do not severely influence the statistical approach used in this work. \n\nAfter classifying clusters\ninto Gaussian and non-Gaussian according to their velocity distribution, we\nused objective criteria to split up the galaxies according to their luminosities, colors, and photometric mean stellar age. This information was finally used to evaluate how\ngalaxies evolve in their host clusters.\n\nThe paper is organized as follows: in Section 2 we describe our data; in Section\n3 we present the methods employed to assess the dynamical state of the clusters and to separate\nthe galaxies according to \ncolor and luminosity; in Section 4 we analyze our results, in Section 5 we discuss\nour findings, and in Section 6 we present our conclusion.\n\n\\section{Data}\n\nThis work is based on the supplemental version of the Northern Sky \nOptical Cluster Survey (NoSOCS, Lopes 2003; Lopes et al. 2004). This\nsupplemental version of NoSOCS goes deeper ($r=21$ and $z \\sim 0.5$), but covers a \nsmaller region ($\\sim 2,700$ square degrees) than the main NoSOCS catalog \n(Gal et al. 2003, 2009). NoSOCS was created from the digitized version of \nthe Second Palomar Observatory Sky Survey (DPOSS\/POSS-II, Djorgovski et \nal. 2003). The photometric calibration and object classification for DPOSS \nare described in Gal et al. (2004) and Odewahn et al. (2004), respectively.\nIn the first paper of this series (Lopes et al. 2009a) a subsample of\n7,414 systems from the NoSOCS supplemental was extracted from the \nSloan Digital Sky Survey (SDSS), data release 5 (DR5).\n\nThis sample was reduced to 179 low-redshift clusters ($z \\le 0.1$) \nwith enough spectra in SDSS (at least three galaxies within 0.50 h$^{-1}$ Mpc) \nfor spectroscopic redshift determination using the gap technique \n(Katgert et al. 1996; Lopes 2007). This technique separates \ngroups after the identification of gaps in the redshift distribution larger than a \ngiven value. To select members and exclude interlopers, the shifting gapper\ntechnique (Fadda et al. 1996) was applied to all galaxies with spectra available \nwithin a maximum aperture of 2.50 h$^{-1}$ Mpc. This method works through the \napplication of the gap technique in radial bins from the cluster center. The bin size \nis 0.42 h$^{-1}$ Mpc (0.60 Mpc for h = 0.7) or larger to force the selection of at least \n15 galaxies (consistent with Fadda et al. 1996). Galaxies not associated with the\nmain body of the cluster are eliminated. After removing the interlopers, the final sample\ncomprises 127 clusters (out of 179) with at least ten member galaxies within \n2.50 h$^{-1}$ Mpc. \n\nThe line-of-sight velocity dispersion ($\\sigma_P$) for these clusters was estimated \nand then a virial analysis was performed. The latter is analogous to the procedure \ndescribed in Girardi et al. (1998), Popesso et al. (2005, 2007), and Biviano et al. (2006).\nFirst, the projected, virial radius ($R_{PV}$) is derived and a first estimate of\nthe virial mass is obtained (using equation 5 of Girardi et al. 1998). The surface \npressure correction is applied to the mass estimate and a Navarro et al. 
(1997) profile is assumed to obtain estimates of $R_{500}$, $R_{200}$, $M_{500}$ and $M_{200}$.\n\nThis low-redshift sample was complemented with 56 more massive systems from the cluster infall regions in the SDSS (CIRS) sample (Rines \\& Diaferio 2006). CIRS is a collection of $z \\le 0.1$ X-ray-selected clusters overlapping the SDSS DR4 footprint. In this redshift range, our NoSOCS sample therefore comprises only relatively poor systems. The same cluster parameters as listed above were determined for these 56 CIRS clusters. Hence, the combined NoSOCS plus CIRS sample comprises 183 clusters. The NoSOCS clusters have velocity dispersion estimates of $100 < \\sigma_P < 700$ km\/s, while the CIRS systems cover the range $200 < \\sigma_P < 900$ km\/s (only 23\\% of CIRS objects have $\\sigma_P < 400$ km\/s). Because the original CIRS member selection, velocity dispersion, and mass estimates are products of the caustic technique (Rines \\& Diaferio 2006), CIRS was also used to compare these properties and the cluster scaling relations (Lopes et al. 2009ab). For all the objects in our final sample we also estimated the optical luminosity ($L_{opt}$), the richness (N$_{gals}$), and the X-ray luminosity ($L_X$) ($L_X$ is obtained with ROSAT All Sky Survey data; see Lopes et al. 2009ab).\n\nThe centroid of each NoSOCS cluster is a luminosity-weighted estimate, which correlates well with the X-ray peak (see Lopes et al. 2006, and more details in Lopes et al. 2004). Lopes et al. (2009b) showed that the scaling relations based only on NoSOCS or CIRS objects agree, although in different mass ranges, which also indicates that the centroid (optical or X-ray) does not create a bias in the cluster parameters (richness, $L_{opt}$, $\\sigma_P$, $R_{200}$, $M_{200}$), at least for our sample and analysis. Previous works have also shown a good correlation between optical and X-ray centroids (Adami et al. 1998; Dai et al. 2006; Man \\& Ebeling 2012). Note that Man \\& Ebeling (2012) selected disturbed clusters in their search for objects with large offsets between the X-ray emission and the brightest cluster galaxy (BCG). Recently, centroid offsets have also been investigated using the Sunyaev-Zeldovich Effect (SZE, Sehgal et al. 2013) and gravitational lensing (Zitrin et al. 2012). Investigating the offsets between the projected dark matter (DM) center and the BCG, Zitrin et al. (2012) found that some of the offsets are caused by misidentifications of the BCG.\n\nThe redshift limit of the sample ($z = 0.1$) is due to incompleteness in the SDSS spectroscopic survey at higher redshifts, where galaxies fainter than $M^* + 1$ are missing, which biases the dynamical analysis (see the discussion in section 4.3 of Lopes et al. 2009a). We considered the $M^* = -20.94$ value from Popesso et al. 
(2006), converted to our cosmology.\\footnote{All quantities with cosmological dependence are computed in the concordance context, defined by $\\Omega_m$ = 0.3, $\\Omega_\\lambda$ = 0.7, and $H_0 = 100~h~{\\rm km~s^{-1}Mpc^{-1}}$, with $h=0.7$.} Our galaxy sample consists of 10,721 members from these 183 clusters at $z \\le 0.1$.\n\nTo compute the absolute magnitudes of each galaxy (in the $ugriz$ bands) we considered the following formula: $M_x = m_x - DM - k_{corr} - Qz$, where $x$ is one of the five SDSS bands, $DM$ is the distance modulus (computed at the redshift of each galaxy), $k_{corr}$ is the $k-$correction, and $Qz$ ($Q = -1.4$; Yee \\& Lopez-Cruz 1999) is a mild evolutionary correction applied to the magnitudes. The $k-$corrections are obtained directly from the SDSS database for every object in each band. Rest-frame colors are also derived for all objects. More details regarding the sample (value of $M^*$, centroid determination, member selection, virial analysis, $L_X$, $L_{opt}$, and N$_{gals}$ estimates) can be found in Lopes et al. (2009ab).\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=84mm]{f1.pdf}\n\\caption{Gaussian mixture model for the $M_z$ distribution. Dashed lines depict the individual components. The vertical red lines mark the intervals for these modes: $M_z < -22.59$ for the bright population, $M_z > -19.03$ for the faint population, and intermediate values of $M_z$ for the intermediate-brightness population of galaxies. }\n\\end{figure}\n\n\\section{Methods}\n\\subsection{Gaussian and non-Gaussian composite clusters}\n\nOur aim is to find a connection between galaxy evolution and the dynamical state of clusters. We started by defining two classes of galaxy environments according to their velocity distributions: clusters with Gaussian and with non-Gaussian velocity distributions. Hereafter we refer to them as Gaussian and non-Gaussian clusters for simplicity; they represent the relaxed and the less relaxed (or unrelaxed) samples, respectively. Although the theoretical line-of-sight velocity distribution expected for galaxy clusters is not exactly Gaussian (e.g. Merrit 1987), phenomenological evidence has long suggested that normality can be assumed for clusters in dynamical equilibrium (e.g. Yahil \\& Vidal 1977). Hence, distinguishing galaxy clusters according to their velocity distributions may be an objective way to assess their dynamics through the simple use of statistical tests (see e.g. Hou et al. 2009, Ribeiro et al. 2010, 2011).\n\nUnlike our previous works, which were based only on the Anderson-Darling (AD) test (usually appropriate for small samples -- see Hou et al. 2009), we now used three additional normality tests: the Kolmogorov-Smirnov, the Shapiro-Wilk, and the Robust Jarque-Bera test (see Thode 2002 for a comprehensive review of normality tests). We also used the dip test to probe the modality of the velocity distributions: recent works have indicated that multimodal velocity distributions may be very common in galaxy clusters, and that they probably constitute the main cause of non-normality of the velocity distribution in clusters (e.g. Ribeiro et al. 2011, Hou et al. 2012, Einasto et al. 2012ab, Krause et al. 2013). Therefore, the dip method helps us to better establish which clusters have intrinsically Gaussian velocity distributions and which do not. The dip method is based on the cumulative distribution of the variable of interest (Hartigan \\& Hartigan 1985). 
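In practice, these tests can be applied directly to the line-of-sight velocities of the member galaxies of each cluster. The fragment below is a schematic illustration based on standard SciPy routines, not the pipeline actually used in this work: the robust Jarque-Bera variant and the dip statistic are not available in SciPy, so the ordinary Jarque-Bera test is used and \\texttt{dip\\_pvalue} is a hypothetical placeholder for any implementation of Hartigan's dip test. The individual outcomes are then combined according to the criterion described below.
\\begin{verbatim}
import numpy as np
from scipy import stats

def dip_pvalue(v):
    """Hypothetical placeholder for Hartigan's dip test of unimodality
    (not available in SciPy); substitute any existing implementation."""
    raise NotImplementedError

def velocity_test_battery(v):
    """Run the modality and normality tests on one cluster's member velocities.
    The outcomes are combined according to the criterion described in the text."""
    v = np.asarray(v, dtype=float)
    z = (v - v.mean()) / v.std(ddof=1)         # standardized velocities
    ad = stats.anderson(z, dist="norm")
    return {
        "dip_p": dip_pvalue(v),                              # unimodality
        "shapiro_p": stats.shapiro(v).pvalue,                # normality tests
        "ks_p": stats.kstest(z, "norm").pvalue,
        "jarque_bera_p": stats.jarque_bera(v).pvalue,        # ordinary JB variant
        "ad_rejects_5pc": bool(ad.statistic > ad.critical_values[2]),  # 5% level
    }
\\end{verbatim}
The dip statistic itself deserves a brief description.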
The dip statistic is the maximum distance between the cumulative input distribution and the best-fitting unimodal distribution. In some sense, this test is similar to the Kolmogorov-Smirnov test, but the dip test specifically searches for a flat step in the cumulative distribution function, which corresponds to a ``dip'' in the histogram representation. The dip test has the benefit of being insensitive to the assumption of Gaussianity and is therefore a true test of modality (e.g. Pinkney et al. 1996; Muratov \\& Gnedin 2010).\n\nWe defined a cluster as non-Gaussian when the dip test rejected unimodality and at least one of the remaining tests rejected normality, taking member galaxies out to 2$R_{200}$. For ten clusters with fewer than eight members within this limit, no reliable results could be obtained; these clusters were removed from the sample. For the remaining clusters, we found 146 Gaussian (84\\%) and 27 non-Gaussian (16\\%) clusters, with a total of 9,113 galaxies. This proportion is approximately consistent with that found by Ribeiro et al. (2010).\n\nWe then built two stacked clusters, Gaussian and non-Gaussian, containing 6,478 and 2,635 galaxies, respectively. The clustercentric distances of the galaxies in these composite systems are normalized by the $R_{200}$ of their host clusters, and their velocities are measured relative to the cluster median velocities and scaled by the cluster velocity dispersions,\n\n\\begin{equation}\nu_i={{v_i - \\langle v \\rangle_j}\\over \\sigma_j},\n\\end{equation}\n\n\\noindent where $i$ and $j$ are the galaxy and the cluster indices, respectively. The velocity dispersions of the composite clusters, $\\sigma_u$, refer to the dimensionless quantity $u_i$ (see Ribeiro et al. 2010, 2011).\n\nThe virial properties of the non-Gaussian clusters were corrected by iteratively removing the galaxies responsible for the departure from Gaussianity, i.e., the galaxies without which the clusters are Gaussian. On average, 15\\% of the galaxies in non-Gaussian clusters need to be removed in this process. The corrected properties are those the cluster would have if it consisted only of galaxies consistent with a normal velocity distribution (see Perea et al. 1990; Ribeiro et al. 2011). This correction allows one to approximately compare the virial properties of Gaussian and non-Gaussian clusters. After computing the corrected virial properties, we returned the removed galaxies to the non-Gaussian clusters for the subsequent photometric analysis.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=84mm]{f2.pdf}\n\\caption{Variation of the red galaxy fraction for the luminosity classes with normalized clustercentric distance $R\/R_{200}$ for G (filled circles) and NG (open circles) clusters. Error bars are obtained from a bootstrap technique with 1,000 resamplings. The horizontal dashed lines indicate the mean RGF behaviour for the galaxy field. Vertical dotted lines indicate the stabilization radius for each component.}\n\\end{figure}\n\n\\subsection{Color and luminosity classification}\n\nAfter setting up the composite systems, we classified galaxies into blue or red objects. Since the distribution of galaxies in the $g - r$ versus $u - g$ color-color diagram is strongly bimodal, we simply classified galaxies above and below the separator developed by Strateva et al. (2001) as red and blue, respectively. Strateva et al. 
(2001) found that the blue galaxies are dominated by late types (spirals), while the red galaxies are dominated by early types (ellipticals), and that the $u - r > 2.22$ color-selection criterion for early types results in a completeness of 98\\%. \n\nWe also classified galaxies according to their \nluminosities. For this we fitted the absolute magnitude distribution in the $z-$band ($M_z$) \nas a Gaussian mixture with $n$\ncomponents. We used the $z$-band absolute magnitude as the closest approximation to the galaxy mass\namong the five SDSS color bands. It is estimated that the variation in\nthe stellar mass-to-light ratio is only a factor of $\\sim$3 for galaxies in \ndifferent ranges of the $z$-band absolute magnitude (Kauffmann\net al. 2003). In Figure 1, we see the $M_z$ histogram and \nthe Gaussian mixture, fitted\nafter using a Bayesian regularization for normal mixture estimation to find the\nmixture parameters (Fraley \\& Raftery 2007). The best fit is found for a mixture of\nthree normal distributions. In Figure 1, we used red vertical lines to mark the intervals for\nthese populations, corresponding to the first\nquantile of the brightest component and the third quantile of the faintest component. \nThis choice was made to avoid overlaps of the low- and high-luminosity ends\nwith the central and dominant luminosity population.\n\nWe considered a loose definition for each luminosity class, calling \nobjects with $M_z < -22.59$ the bright population (corresponding to 15\\% of all\ngalaxies), those with $M_z > -19.03$ as the faint population (7\\% of all galaxies), \nand objects with intermediate \nvalues of $M_z$ correspond to the intermediate-brightness population of galaxies (78\\% of all galaxies).\nFinally, we find that 78\\% of the bright, 72\\% of the intermediate-brightness, and 56\\% of the faint galaxies \nare in the stacked Gaussian cluster.\n\n\n\\section{Analysis}\n\n\\subsection{The red galaxy fraction}\n\nTo study the connection between galaxy evolution and the dynamical state of \nclusters, we looked for differences in the galaxy subsamples according to the\ncluster dynamical classification, i.e., a Gaussian or non-Gaussian cluster. First, we used the \nred galaxy fraction (RGF) as an indicator of galaxy evolution\nin the stacked clusters. We found that red galaxies correspond to 74\\% of all galaxies.\nIn the stacked Gaussian cluster red galaxies correspond to 76\\% of the total, \nwhile we found a somewhat lower fraction of red galaxies in the stacked non-Gaussian cluster, 70\\%.\n\nThe mean RGF for a sample of field galaxies \nwas estimated for each luminosity population and used for comparison.\nThe field sample, containing 68,375 objects, was constructed as follows. From the whole SDSS DR7 data set,\nwe selected galaxies that are not associated to a group or cluster,\nconsidering the cluster catalog from Gal et al. (2009). To be conservative,\nwe defined a galaxy to belong to the field if it is not found within 4.0 Mpc\nand does not have a redshift offset smaller than 0.06 of any cluster from\nGal et al. (2009). Although this definition may include void galaxies in the sample,\nthis should not be a major problem, given that several studies have shown that \ncolor bimodality is remarkably similar between \nvoid and field galaxies (e.g. Patiri et al. 2006; Hoyle et al. 
2012).\nFinally, we divided the galaxies into the luminosity populations defined in Section 3.2, finding\n4\\%, 92\\%, and 4\\% of field galaxies in the bright, intermediate and faint populations, respectively.\n\nIn Figure 2, we observe\nthe emergence of the color-radius relation in all cases. The \nRGF decreases from the center toward the outskirts, with a slight \nrecovering and\/or stabilization to the outer radii, when\nthe behavior is approaching the field. \n In each case we found a significant relationship between the RGF and the clustercentric distance,\nwith a Pearson correlation coefficient $\\gtrsim 0.70$ at the 95\\% confidence level.\nThis result is expected for Gaussian clusters,\nwhich are supposed to be virialized. But we found the same behavior for non-Gaussian clusters.\nThis indicates that the color-radius (density) relation is already\nset for non-Gaussian clusters, suggesting that a strong galaxy evolution happens\nbefore the dynamical equilibrium is reached. \n Since non-Gaussian clusters (as defined in this work) should be multi-modal,\nour findings suggest that galaxies may have been pre-processed in the individual modes \nthat constitute the clusters (i.e., \nthe subunits we found in their velocity distribution -- see Ribeiro et al. 2011), a scenario consistent \nwith that of Zabludoff \\& Mulchaey (1998), where galaxies were pre-processed in\ngroup environments before accretion into large clusters.\n\nWe also note in Figure 2 important differences with respect to the\npoint at which the RGF crosses the line that depicts the mean fraction of red galaxies in the field.\nOur stacked analysis shows that larger portions of fainter red galaxies are found, on average, in\nsmaller radii.\n This result is probably related to the fact that the fraction of red galaxies heavily depends\non their own stellar mass (e.g. Baldry et al. 2006; Haines et al. 2006).\nThe most massive galaxies have resided within group-sized halos for a longer time\n(e.g. MacGee et al. 2008), which could lead to a less steep trend for the RGF.\n\n\n\n\n\\subsection{The stellar population properties}\n\n To investigate the stellar \npopulation properties of galaxies in Gaussian\/non-Gaussian groups, \nwe used the stellar population synthesis\ncode STARLIGHT (Cid Fernandes et al. 2005; Asari et al. 2007).\nThis code fits the observed spectrum with a linear combination of a \nnumber of template spectra with known properties. We built a\ntemplate sample with single stellar population spectra from Bruzual \\&\nCharlot (2003) models, using a Chabrier (2003) IMF and the Padova 1994\nstellar evolution tracks. Our template sample is composed of 45\nspectra with three different metallicities ($Z=0.004$, $Z=0.02$, or\n$Z=0.05$) and 15 ages ranging from 1\\,Myr to 13\\,Gyr. Prior to the\nfits, we shifted the SDSS galaxy spectra to the restframe and\nlinearly resampled them to 1\\,\\AA~ (the spectra are expressed with\nfixed resolution in logarithmic, not linear, scale). Our results are expressed\nin terms of two parameters proposed by Cid Fernandes et al. (2005). The first\nis the photometric mean stellar age,\n\n\\begin{equation}\n<\\log{t}>=\\sum_j x_j \\log t_j,\n\\end{equation}\n\n\\noindent where $x_j$ is the fractional contribution to the galaxy\ntotal flux of the template $j$, and $t_j$ is the age of that\ncomponent (irrespective of its metallicity). 
The second is a measure of the\nphotometric age dispersion,\n\n\\begin{equation}\n\\sigma (\\log{t})=\\sqrt{\\sum_j x_j (\\log t_j - <\\log t>)^2},\n\\end{equation}\n\n\\noindent which is higher for galaxies with extended star formation\nhistory and zero for a single burst\nof age $t$. Notice that due to the small SDSS fibre size (3\\,arcsec)\nthe spectra measure the central\nregion of each galaxy, so that the inferred stellar population\ndistribution is not representative of the whole galaxy. \nHowever, even if a synthesis result cannot be translated into a\ndirect meaning because of this light sampling bias, we can\nnevertheless identify average differences of the galaxies in the full\nsample.\n\n\n\\begin{table}\n \\caption[]{Mean properties of galaxies in each\ncomponent identified in the distribution of the photometric mean stellar age.}\n$$\n \\begin{array}{p{0.25\\linewidth}ccccc}\n\t\t\\hline\n \\noalign{\\smallskip}\n Component & N & M_z & \\log{t} & \\sigma(\\log{t}) & u-r \\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\n Old & 6094 & -21.43 & 9.90 & 1.07 & 2.69\\\\\n Transition & 774 & -21.08 & 9.32 & 1.52 & 2.42\\\\\n Young & 2561 & -20.70 & 8.27 & 1.58 & 1.86\\\\\n \\noalign{\\smallskip}\n \\hline\n\\end{array}\n$$\n\\end{table}\n\n\nThe distribution of the photometric mean stellar age \ncorresponds to a mixture of three components, identified after using the Bayesian regularization for normal mixture estimation (the same\nprocedure we used in Section 3.2 to study the luminosity distribution; see Fraley \\& Raftery 2007).\nThe components present the following\nmean parameters: $\\log{t}=9.90\\pm 0.13$ (for the old component), \n$\\log{t}=8.27\\pm 0.27$ (for the young component) and\n$\\log{t}=9.32\\pm 0.75$ (for what we call here the transition component).\nSome properties of these components are presented in Table 1. The columns are: (1) the number\n$N$ of galaxies; (2) the mean absolute magnitude in the $z$ band; (3) the\nmean age in Gyr; (3) the mean age dispersion; and (4) the mean $u-r$ color.\nThese properties indicate that the old component is redder, more luminous\nand has the lowest photometric age dispersion, on average,\nshowing that this component hosts a more homogeneous stellar population.\n\nGalaxies in the color-age diagram have a clear bimodal distribution in this plane\n-- see Figure 3 --, as expected for a mixture of blue and red galaxies. The horizontal green dashed line depicts the color separator\ndeveloped by Strateva et al. (2001), the vertical green dashed line depicts the\nmean age of the transition component. We used these lines to roughly classify galaxies into\npassive ($u-r > 2.22$ and $\\log{t} > 9.32$) and star-forming objects \n($u-r \\leq 2.22$ and $\\log{t} \\leq 9.32$).\nThe blue and red 1-$\\sigma$ ellipses encompass the peaks identified in the color-age space\nwith the MCLUST code, \na contributed R package for multivariate normal mixture modeling and model-based clustering.\n\\footnote{R is a language and environment for statistical computing and graphics. 
It is a GNU project that is similar to the S language and environment and was developed at Bell Laboratories -- R Development Core Team (2011).}\nIt provides functions for parameter estimation via the expectation-maximization (EM) algorithm for normal mixture models with a variety of covariance structures (Fraley \\& Raftery 2007).\nThe ellipses indicate the central regions of star-forming (blue) and passive (red) galaxies.\nWe found that passive galaxies correspond to 84\\% of all red objects (5877 out of 7020).\nThe remaining red galaxies are probably composed of dusty, star-forming\ngalaxies (e.g. Popesso et al. 2007), and were classified in the transition component.\nFinally, star-forming objects correspond to 93\\% of all blue objects (2231 out of 2409).\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=84mm]{f3.pdf}\n\\caption{Bimodal distribution of galaxies in the color-age diagram showing the \n1$\\sigma$ ellipse around the star-forming (blue) and passive (red) peaks. The\nhorizontal and vertical green dashed lines indicate the red\/blue and star-forming\/passive\nseparations, respectively.}\n\\end{figure}\n\n\n\n\n\\subsection{Galaxy evolution in the stacked clusters}\n\n\nWe now study some properties of the stellar and luminosity populations in\nthe stacked clusters. \n\n\\begin{figure}\n\\centering\n \\includegraphics[width=84mm]{f4.pdf}\n\\caption{Normalized clustercentric distance of galaxies as a function of their $z$-band\nabsolute magnitude. Red circles and magenta diamonds represent passive galaxies in Gaussian and\nnon-Gaussian clusters, respectively. Blue squares and cyan stars represent star-forming galaxies\nin Gaussian and non-Gaussian clusters, respectively. The vertical dashed lines indicate the separation into\nluminosity populations. \n}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=84mm]{f5.pdf}\n\\caption{Velocity dispersions of galaxies as a function of their $z$-band\nabsolute magnitude. Red circles and magenta diamonds represent passive galaxies in Gaussian and\nnon-Gaussian clusters, respectively. Blue squares and cyan stars represent star-forming galaxies\nin Gaussian and non-Gaussian clusters. The vertical dashed lines indicate the separation into\nluminosity populations.}\n\\end{figure}\n\n\n\\subsubsection{$M_z~\\times~r\/r_{200}$}\n\nFigure 4 shows the normalized clustercentric distance of galaxies \nas a function of their $z$-band absolute magnitude. Passive galaxies in Gaussian clusters always\nhave smaller clustercentric distances than star-forming galaxies in both Gaussian and non-Gaussian clusters.\nFor galaxies in the range $-22.5 0$ the potential\ndevelops an infinite set of degenerate minima. 
We choose one ground state, namely\n\\begin{equation}\n\\langle\\bm\\pi\\rangle=0,~~\\langle\\sigma\\rangle\\neq 0.\\label{eq:gs_aa}\n\\end{equation}\nThe ground state~\\ref{eq:gs_aa}\nbreaks the $O(4)$ symmetry down to $O(3)$ since the vacuum is invariant only under the rotations of the pion fields.\nThis is how chiral symmetry is spontaneously broken in this model.\nBesides the spontaneous breaking, chiral symmetry is broken softly but explicitly by the term\n$h\\sigma$ in the lagrangian density; in fact, the pion mass is $m_\\pi^2 = h\/F_\\pi$.\nAt zero temperature and in the chiral limit $\\langle\\sigma\\rangle=F_\\pi\\approx 93$ MeV \nwhere $F_\\pi$ denotes the pion decay constant in the vacuum.\n\n\nThe quark sector of the QM model is described by the lagrangian density\n\\begin{equation}\n{\\cal L}_\\mathrm{quarks} = \\bar\\psi\\left(\ni\\partial_\\mu\\gamma^\\mu - g(\\sigma +i\\gamma_5 \\bm\\pi\\cdot\\bm\\tau)\n\\right)\\psi,\\label{eq:qlg_aaa}\n\\end{equation}\nwhere $\\bm\\tau$ are Pauli matrices in the flavor space. In the ground state~\\ref{eq:gs_aa} quarks get a dynamical (that is,\na constituent) mass given by\n\\begin{equation}\nM = g\\langle\\sigma\\rangle.\\label{eq:pppAAA}\n\\end{equation}\n We notice that in Eq.~\\ref{eq:qlg_aaa} there is no explicit mass term for the quarks. As a matter of fact,\nin this effective model the explicit breaking of chiral symmetry is achieved by\n $h\\neq 0$ in Eq.~\\ref{eq:ls1_aa}.\n Although in Eq.~\\ref{eq:qlg_aaa} there is no explicit mass term, quarks get a constituent mass because of the spontaneous\nbreaking of the $O(4)$ symmetry in the meson sector: this implies that the quark chiral condensate can be nonzero.\nThe total lagrangian density is given by\n\\begin{equation}\n{\\cal L}_\\mathrm{QM} = {\\cal L}_\\mathrm{quarks} + {\\cal L}_\\mathrm{mesons}.\n\\end{equation}\nIn the following, we will use the notation $\\sigma$ to denote both the field and its expectation value,\nunless from the context it is not clear which of the two we write about.\n\n\n\\subsection{Renormalized thermodynamic potential in the infinite volume limit}\n\n \n\nThe mean field effective potential of the QM model in the infinite volume is given by\n\\begin{equation}\n\\Omega = U + \\Omega_{0,\\infty} + \\Omega_T,\\label{eq:ep1aa}\n\\end{equation}\nwhere\n\\begin{equation}\nU = \\frac{\\lambda}{4}\\left(\\sigma^2 + \\bm\\pi^2 - v^2\\right)^2 - h\\sigma\\label{eq:ls1_aaMMM}\n\\end{equation}\nis the classical potential of the meson fields as it can be read from Eq.~\\ref{eq:ls1_aa}, and\n\\begin{equation}\n\\Omega_{0,\\infty} = -2N_c N_f\\int\\frac{d^3p}{(2\\pi)^3} E_p \\label{eq:ls1_aaMMMa}\n\\end{equation}\nis the one-loop quark contribution, with\n\\begin{equation}\nE_p = \\sqrt{p^2 +M^2},~~~M=g\\sigma.\n\\end{equation}\nFinally, $\\Omega_T$ corresponds to the finite temperature quarks contribution that we specify later.\nFor regularization and renormalization we can limit ourselves to consider the zero temperature limit of $\\Omega$,\n\\begin{equation}\n\\Omega_0 = U + \\Omega_{0,\\infty}.\\label{eq:ep1}\n\\end{equation}\nEquation~\\ref{eq:ep1} represents the effective potential for the $\\sigma$ field computed at one-loop\nand after renormalization it corresponds to the renormalized condensation energy, namely the difference between\nthe energy of the state with $\\langle\\sigma\\rangle\\neq 0$ and $\\langle\\sigma\\rangle= 0$ at $T=\\mu=0$.\n\nIt is instructive to present the renormalization of $\\Omega_0$ in the infinite volume system firstly.\nIn order 
to do this, we have to regularize the momentum integral in Eq.~\\ref{eq:ls1_aaMMMa}.\nWe have found that for problems with quantized momenta, in which summations replace integrals, \nrenormalization based on the Pauli-Villars (PV) method is the most convenient one.\nTherefore, also in the infinite volume limit we use PV regularization and renormalization.\nThis is the same method used in \\cite{Xu:2019gia}.\n\nIn the PV scheme we replace Eq.~\\ref{eq:ls1_aaMMMa} with\n\\begin{equation}\n\\Omega_{0,\\infty} = -2N_c N_f\\int\\frac{d^3p}{(2\\pi)^3} \n\\sum_{j=0}^3c_j \\left(E_p^2 + j \\xi^2\\right)^{1\/2}, \\label{eq:ls1_aaMMMaPV}\n\\end{equation}\nwhere $\\{c_j\\}$ is a set of PV coefficients and $\\xi$ is the renormalization scale. \nThe integration is understood to be cut off at the scale $p=\\Lambda$.\nThe coefficient $c_0=1$ by convention;\nthe additional three coefficients are needed to remove the quartic, quadratic and log-type divergences of $\\Omega_{0,\\infty}$\nthat appear in the limit $\\Lambda\\rightarrow\\infty$.\nThe PV coefficients\nwill be chosen so that the aforementioned divergences cancel and the final expression does not depend on $\\Lambda$.\nPerforming the integration, it is an easy exercise to see that the choice $c_1=-3$, $c_2=3$ and $c_3=-1$ is\nenough to cancel all the divergences, in agreement with \\cite{Xu:2019gia}. The resulting finite expression\nis\n\\begin{eqnarray}\n\\Omega_{0,\\infty} &=& \\frac{3N_c N_f}{16\\pi^2}\n\\xi^4\\log\\frac{(M^2 + \\xi^2)(M^2 + 3\\xi^2)^3}{(M^2 +2 \\xi^2)^4}\\nonumber\\\\\n&+&\\frac{6N_c N_f}{16\\pi^2}\n\\xi^2 M^2\\log\\frac{(M^2 + \\xi^2)(M^2 + 3\\xi^2)}{(M^2 +2 \\xi^2)^2}\\nonumber\\\\\n&+&\\frac{N_c N_f}{16\\pi^2}\nM^4\\log\\frac{(M^2 + \\xi^2)^3(M^2 + 3\\xi^2)}{M^2(M^2 +2 \\xi^2)^3}.\\nonumber\\\\\n&&\\label{eq:desperate12}\n\\end{eqnarray}\n\nAlthough Eq.~\\ref{eq:desperate12} is finite, it can potentially shift the location of the minimum of the classical potential\nas well as the mass of the $\\sigma-$meson in the vacuum, $m_\\sigma$.\nWhile this would not be a problem, since it would merely require a change of the parameters $\\lambda$ and $v$,\nit is easier to work assuming that the quark loop does not shift these quantities.\nTo this end, we add two counterterms,\n\\begin{equation}\n\\Omega_\\mathrm{c.t.} = \\frac{\\delta v}{2}M^2 + \\frac{\\delta\\lambda}{4}M^4,\\label{eq:ctUUU}\n\\end{equation}\nand we impose the renormalization conditions\n\\begin{eqnarray}\n&&\\left.\\frac{\\partial (\\Omega_{0,\\infty} + \\Omega_\\mathrm{c.t.})}{\\partial M}\\right|_{M=gF_\\pi}=0,\\label{eq:sh4aF}\\\\\n&&\\left.\\frac{\\partial^2 (\\Omega_{0,\\infty} + \\Omega_\\mathrm{c.t.})}{\\partial M^2}\\right|_{M=gF_\\pi}=0.\\label{eq:sh5aF}\n\\end{eqnarray}\nThe first condition imposes that the quark loop does not change the\nlocation of the minimum of the classical potential, $\\sigma=F_\\pi$,\nwhile the second states that the loop does not shift $m_\\sigma$.\nThe coefficients of the two counterterms can be computed easily,\n\\begin{eqnarray}\n\\delta v &=& -\\frac{3N_c N_f}{4\\pi^2}\\xi^2\\log\\frac{(g^2F_\\pi^2 + \\xi^2)(g^2F_\\pi^2 + 3\\xi^2)}{(g^2F_\\pi^2 + 2\\xi^2)^2},\n\\label{eq:ctdv1}\\\\\n\\delta\\lambda &=&-\\frac{N_c N_f}{4\\pi^2}\n\\log\\frac{(g^2F_\\pi^2 + \\xi^2)^3(g^2F_\\pi^2 + 3\\xi^2)}{g^2F_\\pi^2(g^2F_\\pi^2 + 2\\xi^2)^3}.\\label{eq:ctdl1}\n\\end{eqnarray}\nThe renormalized quark loop is thus given by\n\\begin{equation}\n\\Omega_{0,\\infty}^\\mathrm{ren} = \\Omega_{0,\\infty} + 
\\Omega_\\mathrm{c.t.}.\n\\label{eq:rty}\n\\end{equation}\n\n\\subsection{Renormalized thermodynamic potential in the finite volume case}\nIn a finite volume $V=L^3$ the $i-$component of momentum is quantized according to\n\\begin{equation}\np_i = \\frac{2\\pi}{L}n_i,~~~n_i=0,\\pm1,\\pm2,\\dots; \\label{eq:mom_quant}\n\\end{equation}\nthis leads to the obvious replacements\n\\begin{eqnarray}\n\\int \\frac{d^3p}{(2\\pi)^3} &\\rightarrow &\\frac{1}{V}\\sum_{n_x,n_y,n_z,}, \\label{eq:mom_repl}\\\\\nE_p &\\rightarrow &E_n = \\left(\nM^2 + \\frac{4\\pi^2}{L^2}n\n\\right)^{1\/2},\\label{eq:Ep_repl}\n\\end{eqnarray}\nwith $n=n_x^2 + n_y^2 + n_z^2$.\nInstead of $\\Omega_{0,\\infty}$ we have \n\\begin{eqnarray}\n\\Omega_{0,L}(\\Lambda) &=&-\\frac{2N_c N_f}{L^3}|M|\n\\nonumber\\\\\n&& -\\frac{2N_c N_f}{L^3}\\sum_{n=1}^{a}r_3(n)\\sqrt{\\frac{4\\pi^2}{L^2}n+M^2},\n\\label{eq:alpha_2}\n\\end{eqnarray}\nwhere the subscript $L$ reminds that the potential is computed assuming quantization in a box with volume $L^3$;\nthe first addendum on the right hand side of Eq.~\\ref{eq:alpha_2} is the zero mode contribution.\nWe have put $a=\\Lambda^2 L^2\/4\\pi^2$ \nwhere $\\Lambda$ is an UV cutoff that will disappear after \nPV renormalization; \n$r_3(n)$ denotes the sum-of-three-squares function, \nthat counts how many ways it is possible to form $n$ as the sum of the squares of three integers: \nit corresponds to the degeneracy of the level with a given $n$.\nSimilarly, instead of Eq.~\\ref{eq:ep1} we have\n\\begin{equation}\n\\Omega_0(\\Lambda) = U + \\Omega_{0,L}(\\Lambda).\\label{eq:ep1_L}\n\\end{equation}\nThis is the bare potential that is divergent and needs renormalization:\nto make this evident we have made the dependence of $\\Lambda$ explicit. \n\n \n \n \nThe renormalization of the thermodynamic potential is performed\nfollowing the PV method delineated in the infinite volume case.\nFirstly, we introduce a set of PV coefficients and \nreplace Eq.~\\ref{eq:alpha_2} with\n\\begin{eqnarray}\n\\Omega_{0,L} &=& -\\frac{2N_c N_f}{L^3}|M|\\nonumber\\\\\n&&-\\frac{2N_c N_f}{L^3}\\sum_{n=1}^{a}r_3(n)\n\\sum_{j=0}^3 c_j\\sqrt{E_n^2 + j \\xi^2}.\n\\label{eq:alpha_2a}\n\\end{eqnarray}\nUsing the coefficients determined in the infinite volume limit is enough to get a finite expression in the $\\Lambda\\rightarrow\\infty$\nlimit. This can be proved by brute force numerically, but can also be understood analytically as follows.\nFor studying the UV divergence, we put $X_j^2 = M^2 + j\\xi^2$ \nand we extract the $O(X_j^2)$ and $O(X_j^4)$ terms from $\\Omega_{0,L}$,\nthat will bring a quadratic and log-type divergence respectively. 
Thus we can write\n\\begin{eqnarray}\n\\Omega_{0,L} &=& -\\frac{2N_c N_f}{L^3}|M|\\nonumber\\\\\n&&-\\frac{2N_c N_f}{L^3}\\sum_{n=1}^{a}r_3(n)\n\\sum_{j=0}^3 c_j\\left[a_2 X_j^2 + a_4 X_j^4\\right]\\nonumber\\\\\n&&+~\\mathrm{UV~finite~terms},\n\\label{eq:alpha_2a2}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\na_{2} &=&\\frac{L}{4\\pi n^{1\/2}},\\\\\na_{4} &=& -\\frac{L^3}{64\\pi^3 n^{3\/2}}.\n\\end{eqnarray}\nBy a numerical calculation (or by replacing the sums with the continuum estimate $\\sum_{n=1}^{a}r_3(n)\\,f(n)\\approx\\int_1^{\\sqrt{a}}4\\pi r^2 f(r^2)\\,dr$) we verify that in the large $a$ limit\n\\begin{eqnarray}\n\\sum_{n=1}^a \\frac{r_3(n)}{4\\pi n^{1\/2}} &\\approx &\\frac{a}{2},\\\\\n\\sum_{n=1}^a \\frac{r_3(n)}{4\\pi n^{3\/2}} &\\approx & \\frac{\\log a}{2},\n\\end{eqnarray}\nwhich allows us to write\n\\begin{eqnarray}\n\\Omega_{0,L} &=& -\\frac{2N_c N_f}{L^3}|M|\\nonumber\\\\\n&&-\\frac{2N_c N_f}{L^3} \n\\sum_{j=0}^3 c_j\\left[X_j^2\\,\\frac{L a}{2} - X_j^4\\,\\frac{L^3 \\log a}{32\\pi^2}\\right]\\nonumber\\\\\n&&+~\\mathrm{UV~finite~terms}.\n\\label{eq:alpha_2a3}\n\\end{eqnarray}\nThe above equation shows that the $a_2$ and $a_4$ terms produce the quadratic and log-type divergences, respectively. \nUsing the PV coefficients it is easy to prove that \n\\begin{eqnarray}\n\\sum_{j=0}^3 c_j X_j^2 &=& 0,\\\\\n\\sum_{j=0}^3 c_j X_j^4 &=& 0,\n\\end{eqnarray}\nwhile the $O(X_j^6)$ term is nonzero and UV-finite. \nTherefore, the PV regulator cancels the UV divergence of $\\Omega_{0,L}$\nleaving a UV-finite, $\\xi-$dependent term.\n\nThe counterterms that we have fixed in the infinite volume case can be used here as well:\nthey will implement the conditions that in the large volume limit, we recover\n$\\sigma=F_\\pi$ and the $m_\\sigma$ fixed by the classical potential.\nOn the other hand, for a finite $L$ the $\\Omega_{0,L}$ can shift both the location\nof the minimum of the total potential and $m_\\sigma$. Therefore,\nfor finite $L$ we will use\n\\begin{equation}\n\\Omega_{0,L}^\\mathrm{ren} = \\Omega_{0,L} + \\Omega_\\mathrm{c.t.},\n\\label{eq:rtyAA}\n\\end{equation}\nwith $\\Omega_\\mathrm{c.t.}$ specified by Eq.~\\ref{eq:ctUUU} with counterterms\ngiven by Eqs.~\\ref{eq:ctdv1} and~\\ref{eq:ctdl1}. \nTaking into account the classical potential, the renormalized thermodynamic potential\nin the vacuum at finite $L$ is\n\\begin{equation}\n\\Omega^\\mathrm{ren} = U + \\Omega_{0,L}+\\Omega_\\mathrm{c.t.}.\n\\label{eq:renpot_1}\n\\end{equation}\n\nWe close this subsection with a short comment on the choice of $\\xi$. \nIn principle, we could change this arbitrarily at a given $L$\nby requiring that the total derivative of $\\Omega$ with respect to $\\xi$, $d\\Omega\/d\\xi$, is zero. \nThis would amount to solving a \nRenormalization Group-like equation in which the $\\partial\\Omega\/\\partial\\xi$ is balanced by terms\nproportional to $\\partial\\lambda\/\\partial\\xi$ and $\\partial g\/\\partial\\xi$ so that $d\\Omega\/d\\xi=0$. \nSolving this equation is well beyond the purpose of the study we want to do here.\nTherefore, for a given $L$ we have limited ourselves to inspecting the ranges of $\\xi$ that do not change $\\Omega$ \ntoo much. For large $L$ we have found that $\\xi$ can be arbitrarily large. \nOn the other hand, for small $L$ we have found that $\\Omega$ is quite insensitive to the specific value of $\\xi$ \nas long as $\\xi \\lesssim \\gamma F_\\pi $ with $\\gamma=O(1)$. 
Therefore, we fix $\\xi=F_\\pi$ in this work.\n\n\n\n\\subsection{The total thermodynamic potential}\nThe finite temperature thermodynamic potential does not need any particular treatment:\nin infinite volume it is given by the standard relativistic fermion gas contribution, namely\n\\begin{equation}\n\\Omega_T = -2N_c N_f T\\sum_{s=\\pm 1} \\int\\frac{d^3p}{(2\\pi)^3}\\log\\left(1 + e^{-\\beta(E_p-s\\mu)}\\right),\n\\end{equation}\nwhere $\\mu$ corresponds to the chemical potential. \nIn finite volume we replace the above equation with\n\\begin{equation}\n\\Omega_T = -2N_c N_f \\frac{T}{L^3}\\sum_{s=\\pm 1}\\sum_{n}r_3(n)\n\\log\\left(1 + e^{-\\beta(E_n-s\\mu)}\\right).\\label{eq:iow}\n\\end{equation}\nPutting all together, we get the renormalized thermodynamic potential of the QM model in a volume $V=L^3$\nwith periodic boundary conditions, that is\n\\begin{equation}\n\\Omega = U + \\Omega_{0,L}+\\Omega_\\mathrm{c.t.}+ \\Omega_T.\n \\label{eq:tot_om_fs}\n\\end{equation}\nFor each value of $\\mu$ and $T$ we determine $\\sigma$ by looking for the global minimum of $\\Omega$. \n\n\\subsection{Parameters of the classical potential}\nThe \nrenormalization procedure outlined above has the advantage that does not require a shift of the \nparameters of the classical potential both in the infinite volume and in the finite size cases. \nTherefore, these parameters can be computed from $U$ and are not affected by $L$.\nThese can be computed easily from the conditions that $\\partial U\/\\partial\\sigma=0$ and\n$\\partial^2 U\/\\partial\\sigma^2 = m_\\sigma^2$, where the derivatives are understood \ncomputed at $\\sigma=F_\\pi$.\nLimiting ourselves to write concrete expressions in the limit $h\\rightarrow 0$ we find\n\\begin{eqnarray}\nv &=& F_\\pi - \\frac{h}{m_\\sigma^2},\\\\\n\\lambda &=& \\frac{m_\\sigma^2 }{2 F_\\pi^2 } - \\frac{h}{2F_\\pi^3 }.\n\\end{eqnarray} \n \n\n\n\n \n\n\n\n\n\n\\section{Chiral phase transition at small temperature\\label{Sec:llp}}\n\nIn this section we present the results for the \ncondensate and phase structure at low temperature\nand high density, which is the domain in which the most interesting finite size effects appear. \nOur parameters set is $m_\\sigma=700$ MeV, $F_\\pi=93$ MeV,\n$m_\\pi=138$ MeV, $h=F_\\pi m_\\pi^2$.\nFinally, we take $\\xi=F_\\pi$ and we assume $M=g F_\\pi=335$ MeV at $\\mu=T=0$ which gives $g=3.6$.\n\n\n\n\n\n\\subsection{Condensate, $m_\\sigma$ and $m_\\pi$ versus size in the vacuum} \n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{vac_collect.png}\n\\end{center}\n\\caption{\\label{Fig:vacvac}Quark mass \nand $m_\\sigma$ versus $L$ in the vacuum. \nBoth quantities are measured in units of the infinite volume cases.\nRenormalization scale is $\\xi=F_\\pi$.}\n\\end{figure} \n\n\nTo begin with, we present the behavior of the condensate, of the $\\sigma-$meson mass and pions mass\nversus $L$ in the vacuum: the results are summarized in Fig.~\\ref{Fig:vacvac}.\nAll quantities are measured in units\nof the infinite volume cases. We find that both $M$ and $m_\\sigma$ increase with lowering $L$,\nwhile $m_\\pi$ decreases.\nThe effects of a finite $L$ on the physical quantities become noticeable for $L\\lesssim 3$ fm.\nQualitatively the results of the renormalized QM model agree with \nthe NJL model calculations \\cite{Xu:2019gia}. 
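\n\nFor the numerical results presented in this and in the following sections the condensate is always obtained from the global minimization of Eq.~\\ref{eq:tot_om_fs} described above. The minimal Python sketch below illustrates this procedure; it is not the code used to produce our figures, and the ultraviolet cutoff (4 GeV), the $\\sigma-$grid and the brute-force evaluation of $r_3(n)$ are illustrative choices, while the counterterms are those of Eqs.~\\ref{eq:ctdv1} and~\\ref{eq:ctdl1}.\n\\begin{verbatim}\nimport numpy as np\n\nhbarc = 197.327                  # MeV fm, converts the box size from fm to MeV^-1\nNc, Nf, g = 3, 2, 3.6\nFpi, msig, mpi = 93.0, 700.0, 138.0\nh, xi = Fpi*mpi**2, Fpi          # explicit breaking term and renormalization scale\nlam = msig**2\/(2*Fpi**2) - h\/(2*Fpi**3)\nv = Fpi - h\/msig**2\ncPV = [1.0, -3.0, 3.0, -1.0]     # Pauli-Villars coefficients c_0,...,c_3\n\ndef r3(nmax):\n    # degeneracy r_3(n) of the level n = nx^2+ny^2+nz^2, by brute-force counting\n    out = np.zeros(nmax + 1, dtype=int)\n    m = int(np.sqrt(nmax))\n    for nx in range(-m, m + 1):\n        for ny in range(-m, m + 1):\n            for nz in range(-m, m + 1):\n                s = nx*nx + ny*ny + nz*nz\n                if s <= nmax:\n                    out[s] += 1\n    return out\n\ndef Omega(sigma, T, mu, Lfm, Lam=4000.0):\n    L = Lfm\/hbarc\n    a = int((Lam*L)**2\/(4*np.pi**2))      # highest retained excited level\n    deg = r3(a)\n    n = np.arange(1, a + 1)\n    M = g*sigma\n    k2 = 4*np.pi**2*n\/L**2\n    U = 0.25*lam*(sigma**2 - v**2)**2 - h*sigma\n    vac = abs(M) + sum(c*np.sum(deg[1:]*np.sqrt(k2 + M**2 + j*xi**2))\n                       for j, c in enumerate(cPV))\n    O0 = -2*Nc*Nf\/L**3*vac                # PV-regularized vacuum quark loop\n    dv = -3*Nc*Nf\/(4*np.pi**2)*xi**2*np.log(\n        (g**2*Fpi**2 + xi**2)*(g**2*Fpi**2 + 3*xi**2)\/(g**2*Fpi**2 + 2*xi**2)**2)\n    dl = -Nc*Nf\/(4*np.pi**2)*np.log(\n        (g**2*Fpi**2 + xi**2)**3*(g**2*Fpi**2 + 3*xi**2)\n        \/(g**2*Fpi**2*(g**2*Fpi**2 + 2*xi**2)**3))\n    Oct = 0.5*dv*M**2 + 0.25*dl*M**4      # counterterms fixed in infinite volume\n    E = np.sqrt(np.concatenate(([M**2], k2 + M**2)))\n    d = np.concatenate(([1], deg[1:]))    # the zero mode has degeneracy one\n    OT = -2*Nc*Nf*T\/L**3*sum(np.sum(d*np.logaddexp(0.0, -(E - s*mu)\/T))\n                             for s in (1, -1))\n    return U + O0 + Oct + OT\n\n# example: condensate at T = 10 MeV, mu = 400 MeV, L = 3 fm from a grid scan\ngrid = np.linspace(1.0, 120.0, 300)\nvals = [Omega(s, 10.0, 400.0, 3.0) for s in grid]\nprint('sigma at the global minimum (MeV):', grid[int(np.argmin(vals))])\n\\end{verbatim}\nSince the sum over the modes is PV-regularized, the result should be checked for stability against variations of the cutoff entering through $a=\\Lambda^2 L^2\/4\\pi^2$.\n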
\n\n\n\n\n \n\n\n\\subsection{Transitions at small temperature}\n\n \n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{cond_m_L3.png}\\\\\n\\includegraphics[width=0.45\\textwidth]{nb_m_L3.png}\n\\end{center}\n\\caption{\\label{Fig_cc} Condensate (upper panel)\nand $n_B\/\\rho_0$ (lower panel) versus chemical potential, for several values of $T$.\nCalculations correspond to $L=3$ fm.}\n\\end{figure}\n\nIn the upper panel of Fig.~\\ref{Fig_cc} we plot the condensate versus $\\mu$ for several temperatures\nand for $L=3$ fm. The condensate has been computed by a global minimization procedure of $\\Omega$\nfor any $(\\mu,T)$.\nFor $T=0$ $\\sigma$ has a discontinuity for $\\mu\\equiv\\mu_1\\approx 340$ MeV and drops down \nfrom its value in the vacuum \nto a smaller, still substantial value $\\sigma_1\\approx 93$ MeV.\nThis discontinuity agrees with the one found in \\cite{Xu:2019gia} where it has been shown to be driven by the zero mode\n(a direct calculation within the QM model confirms this).\nDespite the discontinuity of $\\sigma$, chiral symmetry is still spontaneously broken \nfor $\\mu>\\mu_1$ since the value of the condensate\nis still large. \nIncreasing $\\mu$ up to a second critical value $\\mu\\equiv\\mu_2\\approx 526$ MeV there is another\njump of $\\sigma$ to a $\\sigma_2\\approx 5$ MeV:\nit is fair to identify this discontinuity with the restoration of chiral symmetry. \nIt is easy to verify that the phase transition happens when the chemical potential is large enough\nto populate the first excited state, $n=1$: in fact,\nusing $\\sigma =\\sigma_2$ the energy of this state is\n$E_1 = \\sqrt{g^2\\sigma^2 + 4\\pi^2\/L^2}\\approx 415$ MeV, therefore the state $n=1$ can be populated\nfor $\\mu \\gtrsim \\mu_2$.\nIncreasing the temperature, the chiral phase transition and the jump of the condensate approach each other;\nmoreover, the discontinuity of $\\sigma$ at $\\mu=\\mu_1$ is smoothed by temperature becoming a crossover.\n\n\nAt low enough $L$ and low $T$ an intermediate phase appears between the chiral symmetry breaking phase\nat low $\\mu$, namely the hadron gas, and the high density phase that is quark matter in which chiral symmetry\nis restored. 
Even though the passage from the chiral symmetry broken phase to the intermediate one is not \na phase transition because symmetries are broken in the same way in the two phases, for the sake of simplicity we \nadopt the term transition to discuss also the change of the condensate for $\\mu=\\mu_1$.\nIn fact, we aim to interpret this transition as a gas-to-liquid phase transition.\n\nIt is interesting to examine the behavior of number density around the two transitions.\nTo this end we define the baryon density, $n_B$, as\n\\begin{equation}\nn_B = \\frac{n_u + n_d}{3},\\label{eq:bnm3} \n\\end{equation}\nwhere $n_{u}$ and $n_d$ denote the densities of $u$ and $d$ quarks respectively; it is straightforward to prove that \n\\begin{equation}\nn_B = -\\frac{1}{3}\\frac{\\partial\\Omega}{\\partial\\mu}.\\label{eq:bnm_inter}\n\\end{equation}\nIn the lower panel of Fig.~\\ref{Fig_cc} we plot $n_B$ versus $\\mu$ for several values of $T$ and $L=3$ fm;\nbaryon density is measured in units of the nuclear saturation density, $\\rho_0= 0.16$ fm$^{-3}$.\nAt small temperature, $n_B$ experiences a first jump from zero to $n_B\\approx 0.95\\rho_0\\equiv n_B^{(1)}$\nfor $\\mu=\\mu_1$, stays constant then experiences another jump to $n_B\\approx 6.46\\rho_0\\equiv n_B^{(2)}$\nfor $\\mu=\\mu_2$.\n \nThe dependence of $n_B$ on $\\mu$ at small temperature can be easily understood.\nAs a matter of fact, at zero (as well as very small but finite) temperature,\nif $\\mu$ is large enough to excite the zero as well as the first mode,\n we have\n\\begin{equation}\nn_B \\approx \\frac{2N_c N_f}{3V}\n\\left[\n\\theta(\\mu-M) + 6\\theta(\\mu-\\sqrt{M^2 + 4\\pi^2\/L^2})\n\\right];\\label{eq:bnm5}\n\\end{equation}\nthe first addendum in the right hand side of the above equation is the contribution of the zero mode, \nwhile the second addendum\ncorresponds to the first excited state \ncounted with its degeneracy $r_3(1)=6$. \nBaryon density is constant for \n$M <\\mu < \\sqrt{M^2 + 4\\pi^2\/L^2}$ where only the zero mode contributes;\nanalogously, density is constant also for larger values of $\\mu$ until the second excited state can be populated.\nThis is qualitatively different from the behavior $n_B \\propto (\\mu-M)^{3\/2}$ of a \nrelativistic ideal massive gas, because for a finite size system there are only discrete modes in the spectrum,\nand if temperature is low enough only few of them can be occupied giving rise to $\\Omega_T\\propto\\mu$.\nOnly when the degeneracy becomes large one can approach the continuum limit and eventually recover the aforementioned\ndependence of the density on the chemical potential.\n\n\n\n\n\nWe suggest a similitude between the jump of the chiral condensate and a liquid-gas phase transition.\nAs a matter of fact, chiral symmetry is not restored at $\\mu_1$,\ntherefore the pattern of symmetry breaking is the same at low and intermediate $\\mu$.\nIn addition to this,\nthe quark number density at the first jump has a net increase, \nand the transition is sharp at low temperature then becomes smooth at higher temperatures.\nThese aspects characterize a liquid-gas phase transition,\nwhich corresponds to a change in density and not to a change in the pattern of spontaneous symmetry breaking.\n\n\n\\subsection{Correlation length of the fluctuations of the order parameter}\n\n\n\nWe can further characterize this liquid-gas-like jump of the condensate at $\\mu_1$ by means of the correlation length\nof the static fluctuations of the condensate, that are carried by the $\\sigma-$meson. 
\nTo do this, we first have to compute the in-medium mass of the $\\sigma-$meson.\nComputing this within the quark-meson model is a well-established procedure, see for example \\cite{Castorina:2020vbh}\nand references therein, and is straightforward when two-loop contributions are neglected and \nthe Hartree approximation is used to compute the effective 2-particle-irreducible potential. Within these approximations we have\n$M_\\sigma^2 = \\partial^2\\Omega\/\\partial\\sigma^2$,\nwhere the second derivative is understood to be evaluated at the global minimum of $\\Omega$.\nA similar equation holds for the $\\pi-$mesons, $M_\\pi^2 = \\partial^2\\Omega\/\\partial\\pi^2$,\nwhere $\\Omega$ can be augmented with the pion field by the obvious replacement $\\sigma^2\\rightarrow\\sigma^2 + \\pi^2$\nin all but the $h\\sigma$ terms and in the zero mode contribution $M\\rightarrow \\sqrt{M^2 + g^2\\pi^2}$. \n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{3fm.png}\n\\end{center}\n\\caption{\\label{Fig:3fm}In-medium $\\pi-$meson and $\\sigma-$meson masses\nversus $\\mu$, for several temperatures and $L=3$ fm. Thin lines correspond to $M_\\pi$ while thick lines\ndenote $M_\\sigma$.\n}\n\\end{figure}\n\nIn Fig.~\\ref{Fig:3fm} we plot $M_\\sigma$ and $M_\\pi$ versus $\\mu$,\nfor several values of $T$. At $\\mu=\\mu_1$, $M_\\sigma$ drops down. However, \n$M_\\pi$ is almost insensitive to the jump of the condensate. This confirms that\nthe first jump at $\\mu_1$ should not be identified with a real phase transition.\nAt $\\mu=\\mu_2$ where the condensate drops down to almost zero,\n$M_\\sigma$ and $M_\\pi$ join and increase, signaling that the $O(4)$ symmetry is restored and these particles\nbecome heavy enough to decouple from the low energy spectrum dominated by the quarks. \nThis confirms that it is $\\mu_2$ that has to be identified with chiral symmetry restoration.\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{correlation2fm.png}\\\\\n\\includegraphics[width=0.45\\textwidth]{correlation3fm.png}\\\\\n\\includegraphics[width=0.45\\textwidth]{correlation6fm.png}\n\\end{center}\n\\caption{\\label{Fig:3fmCOR}Correlation length of the fluctuations of the order parameter versus $\\mu$, for several temperatures and for $L=2$, $3$ and $6$ fm (upper, middle and lower panel respectively).\n}\n\\end{figure}\n\n\nWe can compute the correlation length of the time-independent fluctuations of the condensate, \n$\\lambda$, that are transported by the $\\sigma-$meson. In fact, for time-independent fluctuations the effective action\nof the $\\sigma-$meson is formally equivalent to that of a Ginzburg-Landau theory\nfor a scalar order parameter with positive squared mass, and it is a textbook result that in this case the correlation length\nof fluctuations is nothing but $\\lambda=1\/M_\\sigma$. We show $\\lambda\/L$ versus $\\mu$ in \nFig.~\\ref{Fig:3fmCOR} for three representative values of $L$.\nFor $L=2$ fm, at $T=10$ MeV and $\\mu=\\mu_1$ the correlation length increases and stays constant\nup to $\\mu=\\mu_2$; for larger values of $\\mu$ it decreases and approaches the value it has in the hadron gas phase.\nThe correlation length is frozen in the intermediate phase, due to the fact that only the zero mode is excited. 
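This freezing can be understood analytically: in the $T\\rightarrow 0$ limit the occupied zero mode contributes $-(2N_c N_f\/L^3)\\left(\\mu-g|\\sigma|\\right)$ to $\\Omega$, so inside the window $g\\sigma<\\mu<\\sqrt{g^2\\sigma^2+4\\pi^2\/L^2}$ the $\\sigma-$dependent part of the potential does not change with $\\mu$; as a consequence both the location of the minimum and the curvature $M_\\sigma^2$, hence $\\lambda=1\/M_\\sigma$, stay constant throughout the intermediate phase. 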
\nQualitatively, this happens also for $T=20$ MeV and $T=40$ MeV.\nAlso notice that in this phase $\\lambda\/L\\approx 0.2$ meaning that one correlation volume occupies\nabout the twenty percent of the volume of the system; around the two transitions $\\lambda$\ndevelops two peaks; in particular, for $T=40$ MeV we find that $\\lambda\/L=O(1)$ that implies\nthat the system is close to criticality. This is what we would expect at a critical endpoint.\nFinally, for $T=60$ MeV there is only one peak of $\\lambda$ in agreement with the fact that\nthe transition to the intermediate phase is smoothed by the temperature;\nnevertheless, $\\lambda$ experiences a net increase in comparison with the value at small $\\mu$,\nthen again $\\lambda\/L=O(1)$ at the chiral phase transition.\nThe qualitative picture is the same at $L=3$ fm, see the middle panel of Fig.~\\ref{Fig:3fmCOR},\nwhile for a larger value of $L$ the double peak structure as well as the intermediate phase disappear,\nsee the lower panel of Fig.~\\ref{Fig:3fmCOR}.\n\nThe results summarized in Fig.~\\ref{Fig:3fmCOR} allow to understand better the intermediate phase.\nAs a matter of fact, we learn that beside the characterization of this phase in terms of the baryon density\ndiscussed in the previous subsection,\nwe can distinguish it from the vacuum and the high density normal quark matter also looking at\nthe fluctuations of the order parameter.\nIn particular, the correlations of the order parameters are substantially larger than those in the vacuum \nand in the quark matter phase at high $\\mu$, and do not change by changing $\\mu$\nin this region: correlation volumes are frozen due to the fact that only the zero mode\nis excited. Moreover, for small $L$ \nthe correlation volumes occupy a substantial portion of the total volume of the system.\nThese facts, together with the increase of density and\nthe symmetry pattern that is unchanged at $\\mu=\\mu_1$, \nsuggest the name of {\\it subcritical liquid} for this intermediate phase.\n\n\n \n\n\n\n\\subsection{Catalysis of chiral symmetry breaking at low temperature}\n\n\n \n\n \n \n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{moresize.png}\n\\end{center}\n\\caption{\\label{Fig:moresize} Condensate versus chemical potential, for several values of $L$\nand $T=10$ MeV.}\n\\end{figure} \n\nIn Fig.~\\ref{Fig:moresize} we plot the condensate versus $\\mu$ at $T=10$ MeV, for several values of $L$.\nFinite size effects are noticeable up to $L\\approx 5$ fm although in this case the transition to the subcritical liquid phase\nis minor. For $L=6$ fm no sign of the subcritical liquid is found, and comparing the results of $L=6$ fm and $L=8$ fm\nwe notice that the effect of the finite size is almost gone and a continuum limit is reached.\nThe results collected in Fig.~\\ref{Fig:moresize} show that lowering the size catalyzes the spontaneous chiral symmetry \nbreaking by enlarging the subcritical liquid region. 
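A rough estimate makes this trend quantitative: as long as the condensate in the intermediate phase stays close to $\\sigma_1$, the first excited mode opens at $\\mu\\approx\\sqrt{g^2\\sigma_1^2+4\\pi^2\/L^2}$, which for $\\sigma_1\\approx 93$ MeV gives $\\approx 530$ MeV at $L=3$ fm, consistent with the value of $\\mu_2$ quoted above, and $\\approx 700$ MeV at $L=2$ fm; the upper boundary of the subcritical liquid region is therefore pushed to larger $\\mu$ as $L$ is lowered. 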
\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{nbsl.png}\n\\end{center}\n\\caption{\\label{Fig:nbsl}$n_B^\\mathrm{s.l.}\/\\rho_0$ versus $L$ at $T=0$.\n}\n\\end{figure}\n\n\nFor completeness, we report on\nthe baryon density in the subcritical liquid phase, $n_B^\\mathrm{s.l.}$,\nat $T=0$.\nThis can be estimated quickly because only the zero mode is populated therefore\nwe read its value from Eq.~\\ref{eq:bnm5}, namely\n\\begin{equation}\nn_B^\\mathrm{s.l.} = \\frac{2N_c N_f}{3L^3};\\label{eq:nbsl}\n\\end{equation}\nthis amounts to put $2N_c N_f$ quarks in the volume $L^3$. \nThe results are collected in Fig.~\\ref{Fig:nbsl} in which we show $n_B^\\mathrm{s.l.}\/\\rho_0$ versus $L$,\nwhere $\\rho_0$ is the nuclear saturation density. In particular, $n_B\\approx \\rho_0$ for $L\\approx 3$ fm.\n\n \n \n \n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{pdl2.png}\n\\end{center}\n\\caption{\\label{Fig:pdl2} Critical lines for $L=2$ fm. \nDotted and dot-dashed lines correspond to smooth crossovers and solid lines to first order phase transitions.\nThe green dot denotes the critical endpoint of the chiral phase transition.\nThe indigo dot corresponds to the critical endpoint for the liquid-gas-like transition to the subcritical liquid phase.\n$\\chi$SR and $\\chi$SB denote the regions in which chiral symmetry is restored and broken respectively.\nWe have shown by blue lines the critical lines for $L=10$ fm for comparison.}\n\\end{figure} \n\nThe results discussed in this section can be summarized in the form of a phase diagram in the $\\mu-T$ plane.\nIn Fig.~\\ref{Fig:pdl2} we plot the transition lines for the case $L=2$ fm, and for comparison we also show a portion of the\ncritical lines at $L=10$ fm that correspond to the continuum limit.\nThe regions denoted with $\\chi$SR and $\\chi$SB denote the portions of the phase diagram \nin which chiral symmetry is restored and broken respectively. \nThe dots denote critical endpoints. In the figure we focus on the subcritical region phase\nthat appears as an intermediate phase between $\\chi$SB and $\\chi$SR phases.\nComparing the critical lines for $L=2$ fm and $L=10$ fm the catalysis of symmetry breaking is evident. \nWe also notice that the critical endpoint for chiral symmetry restoration moves towards higher values of $\\mu$\nand lower $T$ with the lowering of $L$.\n\nWe have verified the stability of our results by changing the number of colors:\nin particular, for $N_c=2$ the picture is unchanged. Since QCD with $N_c=2$\nand finite $\\mu$ can be simulated on the lattice, the predictions of this article can be tested by means\nof first principle calculations.\n\n \n\\section{Chiral phase transition at high temperature}\nThe chiral phase transition at finite temperature and low $\\mu$ has been more studied in the literature,\ntherefore we limit ourselves to present a few results and compare them with those of\nother effective models. 
In particular, our results agree with those of \\cite{Xu:2019gia,Xu:2019kzy}\nwhere the NJL model with PV regulators has been used.\n\n\\subsection{Numerical computation of $T_c$ versus $L$}\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{masses_comparison.png}\\\\\n\\includegraphics[width=0.45\\textwidth]{masses_comparison_m300.png}\n\\end{center}\n\\caption{\\label{Fig_mm}Condensate $\\sigma$ versus temperature, for several values of $L$\nand two representative values of $\\mu$.\n}\n\\end{figure}\n\nIn Fig.~\\ref{Fig_mm} we plot the condensate, $\\sigma$, versus temperature, for $\\mu=0$ (upper panel) and $\\mu=300$ MeV\n(lower panel) and several values of $L$. \nWe can define a pseudo-critical temperature, $T_c$, by looking at the location of the maximum variation of\n$d\\sigma\/d\\beta$. At $\\mu=0$ the condensate increases with $1\/L$ and this pattern remains stable in the whole\ntemperature range examined. We conclude that our picture is consistent with the catalysis of chiral symmetry breaking\ninduced by lowering $L$. The catalysis remains also for higher values of $\\mu$, see the lower panel \nof Fig.~\\ref{Fig_mm}, in agreement with the results presented in section~\\ref{Sec:llp}.\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{tctc.png}\n\\end{center}\n\\caption{\\label{Fig:tctc}Critical temperature for chiral symmetry restoration versus size at $\\mu=0$.\nThe line represents a crossover for any value of $L$.\n}\n\\end{figure} \n \nWe show $T_c$ versus $L$ at $\\mu=0$ in Fig.~\\ref{Fig:tctc}.\nThe behavior for other values of $\\mu$ can be easily guessed from the results that we have shown before.\nIn the infinite volume limit $T_c\\approx 180$ MeV. The catalysis of chiral symmetry breaking is\nclear in Fig.~\\ref{Fig:tctc}. For example, for $L=3$ fm the increase of critical temperature is $\\approx 23\\%$,\nwhile it becomes $\\approx 55\\%$ for $L=2$ fm. The qualitative behavior of $T_c$ is in agreement \nwith previous studies within the QM model~\\cite{Palhares:2009tf,Magdy:2019frj}\nas well as the NJL model with periodic boundary conditions \\cite{Xu:2019gia,Xu:2019kzy,Wang:2018ovx}.\n\n\n\n\n\n \n\n\\subsection{Critical temperature from the Ginzburg-Landau potential}\nIn this subsection we clarify the role of the zero mode \non the behavior of $T_c$ versus $L$. We do this in the chiral limit, $h=0$, and at $\\mu=0$, \nto make the discussion more transparent. \nIn these conditions the chiral transition is of the second order and the critical temperature is given by the zero\nof the coefficient $\\alpha_2$ of the Ginzburg-Landau potential,\n\\begin{equation}\n\\alpha_2 = \\left.\\frac{\\partial^2\\Omega}{\\partial\\sigma^2}\\right|_{\\sigma=0},\n\\end{equation}\nwhere $\\Omega$ is given by Eq.~\\ref{eq:tot_om_fs}. \nThe coefficient $\\alpha_2$ corresponds to the curvature of $\\Omega$ at $\\sigma=0$.\nWe feel that this discussion is necessary because in the literature some confusion arises when different models \nare compared to each other.\nWe keep the parameters of the classical potential,\n$v$ and $\\lambda$, unchanged by $L$. This is natural within the renormalization scheme \nthat we have adopted in this article, because the finite size corrections to $\\Omega$\nare finite and do not require any additional renormalization condition. 
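\n\nBefore writing $\\alpha_2$ explicitly, it is useful to isolate the zero mode. At $\\mu=0$ its contribution to $\\Omega$ is $-(2N_c N_f\/L^3)\\left[g|\\sigma|+2T\\log\\left(1+e^{-g|\\sigma|\/T}\\right)\\right]$: the two pieces are separately non-analytic at $\\sigma=0$, but their sum is smooth and its small-$\\sigma$ expansion reads, up to a $\\sigma-$independent constant, $-(2N_c N_f\/L^3)\\,g^2\\sigma^2\/(4T)$, which is the origin of the negative zero mode term in the curvature below.\n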
\n\n\n \nStarting from Eq.~\\ref{eq:tot_om_fs} we get\n\\begin{eqnarray}\n\\alpha_2^\\mathrm{QM} &=& C_2^\\mathrm{QM} \\nonumber\\\\\n&&+\\frac{4 N_c N_f g^2}{2\\pi L^2}\\sum_{n=1}^\\infty r_3(n)\\frac{e^{-\\beta\\varepsilon_n}}{\\sqrt{n}(1+e^{-\\beta\\varepsilon_n})}\n\\nonumber\\\\\n&&\n-\\frac{4 N_c N_f }{ L^3 }\\frac{g^2}{4T},\\label{eq:a2QM}\n\\end{eqnarray} \nwhere we have put $\\varepsilon_n = 2\\pi\\sqrt{n}\/L$,\nand $C_2^\\mathrm{QM}$ denotes the curvature of $\\Omega$ at $T=0$ and $\\sigma=0$,\n\\begin{eqnarray}\nC_2^\\mathrm{QM} &=& -v^2\\lambda + g^2\\delta v \\nonumber\\\\\n&&-\\frac{2 N_c N_f g^2}{L^3}\n\\sum_{n=1}^\\infty r_3(n)\\sum_{j=0}^3\\frac{c_j}{\\sqrt{j\\xi^2 + 4\\pi^2n\/L^2}}.\\nonumber\\\\\n&&\\label{eq:C2qm}\n\\end{eqnarray} \nWe recall that $m_\\sigma^2=2v^2\\lambda$ corresponds to the $\\sigma-$meson mass\nin the vacuum and in the infinite volume limit.\nThe last addendum on the right hand side of \nEq.~\\ref{eq:a2QM} is the zero mode contribution to $\\alpha_2$,\nwhile the summation represents the contribution of the higher modes.\nThe zero mode contribution is negative while the sum over the higher modes is positive:\nwhile thermal fluctuations increase $\\alpha_2$ making the broken phase less stable,\nthe zero mode lowers $\\alpha_2$ causing the broken phase to be more stable.\nA numerical inspection shows that $C_2^\\mathrm{QM}$ is quite insensitive to $L$\nbecause it is dominated by the classical contribution. Moreover $C_2^\\mathrm{QM}<0$.\n\n\n \n\nLowering $L$, the zero mode contribution grows in magnitude, therefore it is necessary to increase\n$T_c$ to get a positive contribution from the higher modes that overcomes both the zero mode and $C_2^\\mathrm{QM}$\nto satisfy $\\alpha_2^\\mathrm{QM}(T_c)=0$. Thus $T_c$ increases as $L$ is lowered.\n \n\nIf we removed the zero mode from Eq.~\\ref{eq:a2QM}, for example by imposing antiperiodic boundary conditions\nor an infrared cutoff,\nwe would be left with\n\\begin{eqnarray}\n\\alpha_{2,\\mathrm{nzm}}^\\mathrm{QM} &=& C_2^\\mathrm{QM} \\nonumber\\\\\n&&+\\frac{4 N_c N_f g^2}{2\\pi L^2}\\sum_{n=1}^\\infty r_3(n)\\frac{e^{-\\beta\\varepsilon_n}}{\\sqrt{n}(1+e^{-\\beta\\varepsilon_n})}.\n\\label{eq:a2QMapbc}\n\\end{eqnarray}\nEven in this case, the requirement $\\alpha_{2,\\mathrm{nzm}}^\\mathrm{QM}(T_c)=0$\nimplies that\nlowering $L$ has to be balanced by the increase of $T_c$,\nbecause $C_2^\\mathrm{QM}<0$ and is almost insensitive to $L$.\nThus in the QM model even without the zero mode, $T_c$ has to increase with $1\/L$,\nin agreement with~\\cite{Palhares:2009tf,Magdy:2019frj}.\nThe arguments above would also apply if we had not implemented\nrenormalization and had regularized the divergent quark loop via an effective cutoff: in this case,\nas long as $-v^2\\lambda <0$, $C_2^\\mathrm{QM}<0$ for any $L$. \n\n \n\nSummarizing, we have shown that within the QM model in the chiral limit and at $\\mu=0$,\n$T_c$ increases with $1\/L$ regardless of whether the zero mode is present in the spectrum or not.\nThis happens because $C_2^\\mathrm{QM}<0$ for any $L$.\nThis behavior of $T_c$ and its explanation have not been stressed enough in the literature.\n\nThe increase of $T_c$ with $1\/L$ when periodic boundary conditions are implemented is in agreement with\nthe results of the NJL model, see for example \\cite{Xu:2019gia,Wang:2018ovx}. 
On the other hand,\nwhen antiperiodic boundary conditions are implemented within the NJL model, $T_c$ is found to decrease with $1\/L$.\nThis is in qualitative disagreement with the QM model and needs to be clarified. \nThe very reason of the different behavior cannot be traced back to the lack of the zero mode only:\nafter all, in the QM model $T_c$ increases with $1\/L$ even when the zero mode is removed from the spectrum.\nIn fact, a difference between the QM and NJL models is the curvature of the potential\nat $\\sigma=0$ and $T=0$, due to the different classical potentials in the two models.\nIn the QM model $C_2^\\mathrm{QM}<0$.\nOn the other hand, in the NJL model there are no mesons at the tree level and the classical potential\nis merely the mean field term $\\sigma^2\/4G$. \nThe divergent quark loop is necessary to make $\\alpha_2$ negative and break chiral symmetry:\n while this is enough to guarantee a negative curvature in the infinite volume limit,\nit is not guaranteed that it remains negative for any $L$. \n\nTo keep the treatment simple, we use the common hard cutoff scheme for the NJL model;\nin fact, the results we find here agree with those obtained within\nPV regularization \\cite{Xu:2019gia}.\nUsing a cutoff $\\Lambda$ the curvature at $T=0$ in NJL is\n\\begin{equation}\nC_2^\\mathrm{NJL} = \\frac{1}{2G} -\\frac{N_c N_f}{\\pi L^2}\\sum_{n=1}^a \\frac{r_3(n)}{\\sqrt{n}},\n\\end{equation}\nwhere $a=\\Lambda^2 L^2\/4\\pi^2$ and $G$ is the NJL coupling. The contribution of the thermal fluctuations to $\\alpha_2$ in NJL\nis formally equivalent to that of QM model and is not repeated here.\nWe notice that $C_2^\\mathrm{NJL}$\nbecomes positive for small enough $L$ because the quark loop shrinks. \nWhen the zero mode is removed from the spectrum, \nthe quark loop in $C_2^\\mathrm{NJL}$ is the only source for a negative $\\alpha_2$:\nsince lowering $L$ shrinks the loop, the contribution of the excited states at $T_c$ has to be lowered to get a vanishing $\\alpha_2$\nwhich implies that $T_c$ decreases when $L$ decreases. \n\n \nThe message of this section is that the zero mode alone is not enough to explain the behavior of $T_c$ versus $L$\nin chiral models: the difference between QM and NJL appears even when the zero mode is absent in the\nspectrum of both models. 
The curvature of the potential is another necessary ingredient for $T_c$ and it is precisely the\ndifferent curvature that leads to different predictions of $T_c$ versus $L$ in the two models.\n\nWe remark that if we had assumed \na dependence of the classical potential of $L$, the behavior of $T_c$ versus $L$ might have been more difficult to predict\nwithin the Ginzburg-Landau coefficient because $T_c(L)$ would have depended also on \nthe additional functions $v=v(L)$ and $\\lambda=\\lambda(L)$.\n \n \n \n\n\n\n\n\n\n\\section{Summary and Conclusions}\nWe have studied the effect of periodic boundary conditions on chiral symmetry breaking and its restoration in QCD.\nAs an effective model of the effective potential for the quark condensate, we have used the quark-meson model \nwhich couples quarks to background meson fields.\nWe have implemented periodic boundary conditions on the effective potential for a cubic box of size $L^3$,\nthen we have performed the renormalization of the divergent vacuum term in the box;\nwe have computed the behavior of the condensate at finite temperature, $T$, and quark chemical potential, $\\mu$.\nFor the implementation of the renormalization conditions at finite $L$ we have \nadopted the Pauli-Villars regulators as in \\cite{Xu:2019gia}, that are enough to cancel the divergent contributions\nin the infinite volume as well as at finite $L$.\n\nThe most interesting effects happen for the chiral phase transition at small temperature and finite chemical potential.\nWe have found that for $L\\lesssim 5$ fm, increasing $\\mu$ up to a critical value, $\\mu_1$, \nresults in a jump of the condensate\nto lower but finite values. This jump is due to the population of the zero mode.\nThe contribution of the zero mode at such moderate size is not very strong, therefore its excitation is not enough\nto restore chiral symmetry. Increasing $\\mu$ to higher values, the first mode is excited eventually\nand chiral symmetry is restored at $\\mu=\\mu_2$.\n\nWe have suggested a similitude between the jump of the condensate at $\\mu=\\mu_1$ and a liquid-gas phase transition.\nIn fact, chiral symmetry is not restored at $\\mu_1$,\ntherefore symmetries are broken in the same way at low and intermediate $\\mu$.\nMoreover, the quark number density at the first jump has a net increase.\nBoth these aspects are common to the liquid-gas phase transition.\nWe have further characterized the jump of the condensate at $\\mu_1$ by means of the correlation length\nof the fluctuations of the condensate, that are carried by the $\\sigma-$meson; in particular,\nwe have identified $\\lambda=1\/M_\\sigma$ with $\\lambda$ the correlation length and $M_\\sigma$ the in-medium\nmass of the $\\sigma-$meson. We have found that at low temperature and $\\mu=\\mu_1$ the correlation length\nincreases then stays constant up to $\\mu=\\mu_2$ where chiral symmetry is restored. 
Increasing temperature\nbrings the system close to criticality and this is confirmed by the increase of $\\lambda$.\n\nWe name the intermediate phase as {\\em subcritical liquid phase} because even though\nthe system is not critical in the whole $(\\mu-T)$ window, the correlation domains in this phase\nare larger than those in the hadron gas and quark matter phases, respectively at small and large $\\mu$,\nas if the system was approaching a critical point; in addition to this, baryon density is finite due to the occupation\nof the zero mode, and symmetries are broken in the same way of the hadron phase, as it would happen in the gas-to-liquid\ntransition. \n\n\nOverall, we have found that lowering $L$ and imposing periodic boundary conditions\ncatalyzes the spontaneous breaking of chiral symmetry. This has been understood as the result of the excitation \nof the zero mode at intermediate values of $\\mu$, that lowers a bit the value of the condensate without restoring chiral symmetry, \nand the need of a large $\\mu$ to excite the first mode\nthat leads to the definitive lowering of the condensate; the smaller $L$ the larger is the value of $\\mu$ needed to excite the first mode,\nthus leading to the catalysis of chiral symmetry breaking.\n\n\nWe have completed the study by computing \nthe critical temperature,\n$T_c(L)$, versus $L$. We have found that $T_c$ decreases with $L$ \nthus supporting the catalysis of chiral symmetry breaking \nfound in previous studies where periodic boundary conditions, or effective infrared\ncutoffs in the QM model, have been implemented \\cite{Xu:2019gia,Palhares:2009tf,Wang:2018ovx,Magdy:2019frj}. \n\nThere are several ways to continue the work presented here. A straightforward extension of the work is the \ninclusion of meson fluctuations on the same line of \\cite{Castorina:2020vbh}.\nIt would be interesting to include the possibility of inhomogeneous condensates\n\\cite{Abuki:2018iqp,Takeda:2018ldi,Abuki:2013pla,\nAbuki:2013vwa,Buballa:2020xaa,Carignano:2019ivp,Buballa:2018hux,Carignano:2017meb,Nickel:2009ke,Nickel:2009wj}.\nThe inclusion of the Polyakov loop to take trace of confinement-deconfinement phase transition via a collective field\nwould also be possible \\cite{Fukushima:2003fw,Ratti:2005jh,Abuki:2008nm}. \nFinally, it is of a certain interest to analyze the thermodynamic geometry\n\\cite{Weinhold:1975get,Weinhold:1975gtii,\nRuppeiner:1979trg,Ruppeiner:1983ntp,Ruppeiner:1983tcf,Ruppeiner:1985tcv,Ruppeiner:1995rgf,Ruppeiner:1998rgc,\n Castorina:2019jzw,Castorina:2020vbh,Zhang:2019neb} \nof the effective models of chiral symmetry breaking with finite size.\nWe plan to report on these topics in the near future.\nWe have also investigated the stability of our results by changing the number of colors:\nin particular, we have verified that for $N_c=2$ the picture is unchanged. Since QCD with $N_c=2$\nand finite $\\mu$ can be simulated on the lattice, the predictions of this article can be tested by means\nof first principle calculations.\n\n \n\n\n \n\n\n\n\n\n\n\\section*{ACKNOWLEDGEMENTS}\nM. R. acknowledges John Petrucci for inspiration.\nThe work of the authors is supported by the National Science Foundation of China (Grants No.11805087 and No. 11875153).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe goal of information theory is to find the fundamental limits imposed on information processing and transmission by the laws of physics. 
One of the early breakthroughs in quantum information theory was the characterisation of the capacity of a classical-quantum (c-q) channel to transmit classical information by Holevo~\\cite{holevo98,holevo73b} and Schumacher--Westmoreland~\\cite{schumacher97}. The \\emph{classical capacity} of a quantum channel is defined as the maximal rate (in bits per channel use) at which we can transmit information such that the decoding error vanishes asymptotically as the length of the code increases. However, for many practical applications there are natural restrictions on the code length imposed, for example, by limitations on how much quantum information can be processed coherently. Therefore it is crucial to go beyond the asymptotic treatment and understand the intricate tradeoff between decoding error {probability}, code rate and code length. \n\nFor this purpose, we will study families of codes that have {\\em both} a rate approaching the capacity and an {error probability} that vanishes asymptotically as the code length $n$ increases. The following tradeoff relation gives a rough illustration of our main result: if the code rate approaches capacity as $\\Theta(n^{-t})$ for some $t \\in (0,1\/2)$, then the decoding error cannot be smaller than $\\exp(- \\Theta(n^{1-2t}))$. In fact, we will show that the constants implicit in the $\\Theta$ notation are determined by a second channel parameter {beyond} the capacity, called the \\emph{channel dispersion}. We will also show that this relation is tight, i.e., there exist families of codes achieving equality {asymptotically}. \n\nOur work thus complements previous work on the boundary cases corresponding to {$t \\in \\{0, 1\/2\\}$}. The error exponent (or reliability function) of c-q channels (see, e.g., Refs.~\\cite{holevo00,hayashi07,dalai13}) corresponds to the case $t = 0$ where the rate is bounded away from capacity and the error probability vanishes exponentially in $n$. This is also called the \\emph{large deviations} regime. Moreover, the second-order asymptotics of c-q channels were evaluated by Tomamichel and Tan~\\cite{tomamicheltan14}. They correspond to the case $t = 1\/2$ where the rate approaches capacity as $\\Theta(n^{-1\/2})$ and the error probability is non-vanishing. This is also called the \\emph{small deviations} regime. \n\nIn the present work, we consider the entire regime in between, which is dubbed the \\emph{moderate deviation} regime.\\footnote{In the technical analysis, we are considering moderate deviations from the mean of a sum of independent {log-likelihood} ratios, thus justifying the name emanating from {statistics~\\cite[Theorem~3.7.1]{dembo98}}.} The different parameter regimes are illustrated in Fig.~\\ref{fig:regimes}.\n\n\\begin{figure}[t!]\n\t\\centering\n\n\t\\includegraphics{figure-compressed.pdf}\n\t\\begin{tabular}{|l|c|c|c|c|c|}\n\t\t\\hline\n\t\t& (I) & (II) & (III) & (IV) & (V) \\\\\n\t\t\\hline\n\t\t\\multirow{2}{*}{regime} & error & \\!moderate deviation\\! & constant error & moderate deviation & strong converse \\\\\n\t\t& exponent & (below capacity) & (second-order) & (above capacity) & exponent \\\\\n\t\t\\hline\n\t\terror prob.\\ & \\footnotesize $\\!\\exp(-\\Theta(n))\\!$ & \\footnotesize $\\exp(-o(n))$ \\& $\\omega(1)$ & \\footnotesize $\\Theta(1)$ & \\footnotesize \\!$1 - \\exp(-o(n))$ \\& \\footnotesize $1 - \\omega(1)$\\! 
& \\footnotesize $1 - \\exp(-\\Theta(n))$ \\\\\n\t\t\\hline\n\t\tcode rate & \\footnotesize $C - \\Theta(1)$ & \\footnotesize $C - o(1)$ \\& $C - \\omega\\big(n^{-\\frac12}\\big)$ & \\footnotesize $C - \\Theta\\big(n^{-\\frac12}\\big)$ & \\footnotesize $C + o(1)$ \\& $C + \\omega\\big(n^{-\\frac12}\\big)$ & \\footnotesize $C + \\Theta(1)$ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{The figure shows the optimal error probability as a function of the rate, for different block lengths. Darker lines correspond to longer block lengths, and the capacity is denoted by $C$. The table shows the asymptotics in each region, as the blocklength $n$ goes to infinity. The functions of $n$ implicit in the $\\Theta$, $o$, and $\\omega$ notation are assumed to be positive-valued.}\n\t\\label{fig:regimes}\n\\end{figure}\n\n\\paragraph*{Main results.} \nBefore we present our main results, let us introduce the notion of a \n\\emph{moderate sequence} of real numbers, $\\{ x_n\\}_n$ for $n \\in \\mathbb{N}$, \nwhose defining properties are that $x_n \\searrow 0$ and $\\sqrt{n}\\, x_n \\to \n+\\infty$ as $n \\to \\infty$.\\footnote{As mentioned above an archetypical \n\tmoderate sequence is $x_n = \\Theta(n^{-t})$ for some $t \\in (0, \\frac12)$. The \n\tboundary cases are not included\\,---\\,in fact $t = 0$ requires a large \n\tdeviation analysis whereas $t = \\frac12$ requires a small deviation analysis.} \nOur two main results concern binary asymmetric quantum hypothesis testing and \nc-q channel coding.\n\n\\begin{enumerate}\n\t\\item The first result, presented in detail in Sect.~\\ref{sec:hypo}, concerns binary quantum hypothesis testing between a pair of quantum states $\\rho$ and~$\\sigma$. We show that for any moderate sequence $x_n$, there exists a sequence of tests $\\{ Q_n \\}_n$ such that the two kinds of errors satisfy\n\t\\begin{align}\n\t\t\\Tr \\rho^{\\otimes n} (1 - Q_n) = e^{-nx_n^2} &~\\textrm{and}~ \n\t\t\\Tr \\sigma^{\\otimes n} Q_n = \\exp\\Big(-n \\Big( D(\\rho\\|\\sigma) - \\sqrt{2 V(\\rho\\|\\sigma) }\\, x_n + o(x_n) \\Big)\\Big) \\,,\n\t\t\\intertext{and another sequence of tests $\\{ Q_n' \\}_n$ such that the errors satisfy\n\t\t}\n\t\t\\Tr \\rho^{\\otimes n} (1-Q_n') = 1-e^{- n x_n^2} &~\\textrm{and}~ \n\t\t\\Tr \\sigma^{\\otimes n} Q_n' = \\exp\\Big(-n \\Big( D(\\rho\\|\\sigma) + \\sqrt{2 V(\\rho\\|\\sigma) }\\, x_n + o(x_n) \\Big)\\Big) \\,,\n\t\\end{align}\n\twhere $D(\\cdot\\|\\cdot)$ and $V(\\cdot\\|\\cdot)$ denote the relative entropy~\\cite{umegaki62} and relative entropy variance~\\cite{tomamichel12,li12}, respectively. (The reader is referred to the next section for formal definitions of all concepts discussed here.)\n\tMost importantly, we show that both of these tradeoffs are in fact optimal.\n\t\n\t\\item The main result, covered in Sect.~\\ref{sec:channels}, concerns coding over a memoryless classical-quantum channel $\\mathcal{W}$. Let us denote by $M^*(\\mathcal{W};n,\\varepsilon)$ the maximum $M \\in \\mathbb{N}$ such that there exists a code transmitting one out of $M$ messages over $n$ uses of the channel $\\mathcal{W}$ such that the average probability of error does not exceed $\\varepsilon$. 
For any sequence of tolerated error probabilities $\\{\\varepsilon_n\\}_n$ vanishing sub-exponentially with $\\varepsilon_n = e^{-nx_n^2}$, we find that\n\t\\begin{align}\n\t\t\\label{eq:moderatesecondorder1}\n\t\t\\frac{1}{n} \\log M^*(\\mathcal{W};n, \\varepsilon_n) &= C(\\mathcal{W}) - \\sqrt{2 V_{\\min}(\\mathcal{W})}\\, x_n + o(x_n) \\,,\\\\\n\t\t\\label{eq:moderatesecondorder2}\n\t\t\\frac{1}{n} \\log M^*(\\mathcal{W};n, 1-\\varepsilon_n) &= C(\\mathcal{W}) + \\sqrt{2 V_{\\max}(\\mathcal{W})}\\, x_n + o(x_n) \\,,\n\t\\end{align}\n\twhere $C(\\cdot)$ denotes the channel capacity and~$V_{\\min}(\\cdot)$ and $V_{\\max}(\\cdot)$ denote the minimal and maximal channel dispersion as defined in~Ref.~\\cite{tomamicheltan14}, respectively.\n\tThis result holds very generally for channels with arbitrary input alphabet and without restriction on the channel dispersion, strengthening also the best known results for classical channels. Moreover, as in~Ref.~\\cite{tomamicheltan14}, this generality allows us to lift the above result to a statement about coding classical information over image-additive quantum channels and general channels as long as the encoders are restricted to prepare separable states.\n\\end{enumerate}\n\nSince quantum hypothesis testing underlies many other quantum information processing tasks such as entanglement-assisted classical communication as well as private and quantum communication, we expect that our techniques will have further applications in quantum information theory.\n\n\\paragraph*{Related work.}\n\nFor classical channels, Alt\\u{u}g and Wagner~\\cite{altug14} first established the best decay rate of the average error probability for a class of discrete memoryless channels (DMCs) when the code rate approaches capacity at a rate slower than $\\Theta(n^{-1\/2})$. Shortly after the conference version of Ref.~\\cite{altug14}, Polyanskiy and Verd\\'u~\\cite{polyanskiy10c} relaxed some of the conditions on the class of DMCs and also established the moderate deviations asymptotics for other important classical channels such as the additive white Gaussian noise channel. The other main contributions to the analysis of hypothesis testing, channel coding, quantum hypothesis testing, and c-q channel coding in the different parameter regimes are summarised in Table~\\ref{tb:relatedwork}. \n\nFrom a technical perspective the moderate deviations regime can be approached via a refined large deviations analysis (as was done in Ref.~\\cite{altug14}) or via a variation of second-order analysis via the information spectrum method (as was proposed in Ref.~\\cite{polyanskiy10c}). In our work, we mostly follow the latter approach, interspersed with ideas from large deviation theory. In particular, we build on bounds from one-shot information theory by Wang and Renner~\\cite{wang10} and use techniques developed for the second-order asymptotics in Ref.~\\cite{tomamicheltan14}. In concurrent work, Cheng and Hsieh~\\cite{cheng17} provide a moderate deviation analysis for c-q channels via a refined error exponent analysis. 
Their result holds for c-q channels with finite input alphabets and their techniques are complementary to ours.\n\n\\begin{table}\n\t\\begin{tabular}{|l|c|c|c|c|}\n\t\t\\hline\n\t\t& asymmetric binary & channel coding & quantum hypothesis & classical-quantum \\\\\n\t\t& hypothesis testing & & testing & channel coding \\\\\n\t\t\\hline\n\t\tlarge deviation ($<$) & \\cite{hoeffding65} & \\cite{gallager68, csiszar11} & \\cite{hayashi07,nagaoka06} & unknown\\footnotemark \\\\\n\t\t\\hline\n\t\tmoderate deviation ($<$) & \\cite{Sas11} & \\cite{altug14, polyanskiy10c} & \\em this work & \\em this work \\\\\n\t\t\\hline\n\t\tsmall deviation & \\cite{strassen62} & \\cite{strassen62,hayashi09,polyanskiy10} & \\cite{li12,tomamichel12} & \\cite{tomamicheltan14} \\\\\n\t\t\\hline\n\t\tmoderate deviation ($>$) & \\em this work & \\em this work & \\em this work & \\em this work \\\\\n\t\t\\hline\n\t\tlarge deviation ($>$) & \\cite{csiszar71,han89} & \\cite{arimoto73, dueck79} & \\cite{mosonyiogawa13,mosonyi14} & \\cite{mosonyi14-2} \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Exposition of related work on finite resource analysis of hypothesis testing and channel coding problems. The rows correspond to different parameter regimes, labelled by the deviation from the critical rate (i.e., the relative entropy for hypothesis testing and the capacity for channel coding problems).}\n\t\\label{tb:relatedwork}\n\\end{table}\n\n\\footnotetext{In contrast to classical channels a tight characterisation of the error exponent of c-q channels remains elusive to date even for high rates. See, e.g., Refs.~\\cite{holevo00,hayashi07,dalai13} for partial progress.}\n\n\n\\section{Preliminaries}\n\n\\subsection{Notation and classical coding over quantum channels}\n\nLet $\\mathcal{H}$ be a finite-dimensional Hilbert space and denote by $\\mathcal{S}:=\\lbrace \\rho\\in\\mathcal{H}\\,|\\,\\Tr\\rho=1,\\rho\\geq 0\\rbrace$ the quantum states on $\\mathcal{H}$. We take $\\exp(\\cdot)$ and $\\log(\\cdot)$ to be in an arbitrary but compatible base (such that they are inverses), and denote the natural logarithm by $\\ln(\\cdot)$. For convenience, we will consider the dimension of this Hilbert space to be a fixed constant, and omit any dependence constants may have on this dimension. For $\\rho, \\sigma \\in \\mathcal{S}$ we write $\\rho \\ll \\sigma$ if the support of $\\rho$ is contained in the support of $\\sigma$. For any closed subset $\\mathcal{S}_{\\circ}\\subseteq \\mathcal{S}$, we will denote by $\\mathcal{P}(\\mathcal{S}_{\\circ})$ the space of probability distributions supported on $\\mathcal{S}_{\\circ}$. 
We equip $\\mathcal{S}$ with the trace metric $\\delta_{\\Tr}(\\rho,\\rho'):=\\frac{1}{2}\\norm{\\rho-\\rho'}_1$ and $\\mathcal{P}(\\mathcal{S})$ with a weak-convergence metric\\footnote{An example of which is the induced L\\'evy--Prokhorov metric (see, e.g., Section 6 and Theorem 6.4 in Ref.~\\cite{partha67}).} $\\delta_\\text{wc}$, such that both are compact metric spaces with\n\\begin{align}\nf:\\mathcal{S}\\to \\mathbb{R}\\text{ continuous} \\qquad\\implies\\qquad \\mathbb{P}\\mapsto \\int\\mathrm{d}\\mathbb{P}(\\rho)\\,f(\\rho)\\text{ continuous}.\n\\end{align}\n\nWe will use the \\emph{cumulative standard normal distribution} function $\\Phi$, which is defined as\n\\begin{align}\n\t\\Phi(a)&:=\\int_{-\\infty}^{a}\\frac{1}{\\sqrt{2\\pi}}\\, e^{-\\frac{x^2}2}\\, \\mathrm{d}x.\n\\end{align}\n\nFollowing Ref.~\\cite{tomamicheltan14}, we consider a general \\emph{classical-quantum channel} $\\mathcal{W}: \\mathcal{X} \\to \\mathcal{S}$ where $\\mathcal{X}$ is any set (without further structure). We define the \\emph{image of the channel} as the set $\\mathop{\\mathrm{im}} \\mathcal{W} \\subset \\mathcal{S}$ of all quantum states $\\rho$ such that $\\rho = \\mathcal{W}(x)$ for some $x \\in \\mathcal{X}$. For convenience we assume that our Hilbert space satisfies\n\\begin{align}\n\t\\mathcal{H} = \\mathop{\\mathrm{Span}}_{\\rho \\in \\mathop{\\mathrm{im}} \\mathcal{W}} \\mathrm{supp}(\\rho)\n\\end{align}\nsuch that $\\sigma>0$ is equivalent to $\\rho\\ll\\sigma$ for all $\\rho\\in \\mathop{\\mathrm{im}} \\mathcal{W}$.\n\nFor $M, n \\in \\mathbb{N}$, an \\emph{$(n,M)$-code} for a classical-quantum \nchannel $\\mathcal{W}$ consists of an encoder and a decoder. The \\emph{encoder} is a \nmap $E: \\{1, 2, \\ldots, M \\} \\to \\mathcal{X}^n$ and the \\emph{decoder} is a positive \noperator-valued measure $\\{ D_m \\}_{m=1}^M$ on $\\mathcal{H}^{\\otimes n}$. Moreover, an \n\\emph{$(n,M,\\varepsilon)$-code} is an \\emph{$(n,M)$-code} that satisfies\n\\begin{align}\n\t\\frac{1}{M} \\sum_{m=1}^M \\Tr \\bigg( \\bigotimes_{i=1}^n \\mathcal{W}\\bigl(E_i(m)\\bigr) \n\tD_m \\bigg) \\geq 1 - \\varepsilon \\,,\n\\end{align}\ni.e.\\ the average probability of error does not exceed $\\varepsilon$.\nThe finite blocklength achievable region for a channel $\\mathcal{W}$ is the set of triples $(n,M,\\varepsilon)$ for which there exists an \\emph{$(n,M,\\varepsilon)$-code} for $\\mathcal{W}$. We are particularly interested in the boundary\n\\begin{align}\n\tM^*(\\mathcal{W};n,\\varepsilon) := \\max \\big\\{ M \\in \\mathbb{N} : \\exists \\textrm{ an $(n,M,\\varepsilon)$-code for } \\mathcal{W} \\big\\} \\,.\n\\end{align}\nSpecifically, we will be concerned with the behaviour of the \\emph{maximum rate}, which is defined as $R^*(\\mathcal{W};n,\\varepsilon) := \\frac{1}{n}\\log M^*(\\mathcal{W};n,\\varepsilon)$.\n\n\\subsection{Channel parameters}\n\nAn important parameter of a channel is the largest rate such that there exists a code of vanishing error probability in the large blocklength limit. This critical rate is known as the \\emph{capacity} of a channel $C(\\mathcal{W})$, which is defined as\n\\begin{align}\n\tC(\\mathcal{W}):=\\inf_{\\epsilon>0}\\liminf\\limits_{n\\to\\infty}R^*(\\mathcal{W};n,\\epsilon).\n\\end{align}\nFor classical-quantum channels there exists a \\emph{strong converse} bound, which states that the capacity describes the asymptotic rate not just for vanishing error probability, but also for non-zero fixed error probabilities~\\cite{winter99,ogawa99}. 
Together with the original channel coding theorem~\\cite{schumacher01,holevo98}, this yields\n\\begin{align}\n\t\\lim\\limits_{n\\to\\infty}R^*(\\mathcal{W};n,\\epsilon)=C(\\mathcal{W})\\quad \\text{for all }\\epsilon\\in (0,1).\n\\end{align}\n\nIn essence, the strong converse tells us that the capacity entirely dictates the asymptotic behaviour of the maximum rate at a fixed error probability. How quickly the rate approaches this asymptotic value for arbitrarily low and high error probabilities is described by the channel \\emph{min-dispersion} $V_{\\min}(\\mathcal{W})$ and \\emph{max-dispersion} $V_{\\max}(\\mathcal{W})$, which are defined respectively as\n\\begin{align}\n\tV_{\\min}(\\mathcal{W})&:=\\inf_{\\epsilon> 0}\\limsup_{n\\to\\infty}\\left(\\frac{C(\\mathcal{W})-R^*(\\mathcal{W};n,\\epsilon)}{\\Phi^{-1}(\\epsilon)\/\\sqrt{n}}\\right)^2,\\\\\n\tV_{\\max}(\\mathcal{W})&:=\\sup_{\\epsilon<1}\\limsup_{n\\to\\infty}\\left(\\frac{C(\\mathcal{W})-R^*(\\mathcal{W};n,\\epsilon)}{\\Phi^{-1}(\\epsilon)\/\\sqrt{n}}\\right)^2.\n\\end{align}\nAs with the strong converse, the min- and max-dispersions also characterise the dispersion at other fixed error probabilities~\\cite{tomamicheltan14}:\n\\begin{align}\n\t\\lim_{n\\to\\infty}\\left(\\frac{C(\\mathcal{W})-R^*(\\mathcal{W};n,\\epsilon)}{\\Phi^{-1}(\\epsilon)\/\\sqrt{n}}\\right)^2=\\begin{dcases}\n\t\tV_{\\min}(\\mathcal{W}) &\\epsilon\\in(0,1\/2)\\\\\n\t\tV_{\\max}(\\mathcal{W}) &\\epsilon\\in(1\/2,1)\n\t\\end{dcases}.\n\\end{align}\n\n\\subsection{Information quantities}\n\nClassically, for two distributions $P$ and $Q$, the \\emph{relative entropy} $D(P\\|Q)$ and \\emph{relative entropy variance} $V(P\\|Q)$ are defined as the mean and the variance, respectively, of the log-likelihood ratio $\\log \\bigl(P\/Q\\bigr)$ with respect to the distribution $P$. In the non-commutative case, for $\\rho,\\sigma \\in \\mathcal{S}$ with $\\rho \\ll \\sigma$, these definitions are generalised as~\\cite{umegaki62,li12,tomamichel12}\n\\begin{align}\n\tD(\\rho\\|\\sigma)&:=\\Tr\\rho\\left(\\log\\rho-\\log\\sigma\\right), \\\\\n\tV(\\rho\\|\\sigma)&:=\\Tr\\rho\\bigl(\\log\\rho-\\log\\sigma-D\\left(\\rho\\|\\sigma\\right)\\cdot\\mathrm{id}\\bigr)^2\\,.\n\\end{align}\nIf $\\rho \\not\\ll \\sigma$, both quantities are set to $+\\infty$. \n\nFollowing Ref.~\\cite{tomamicheltan14}, for a closed set $\\mathcal{S}_{\\circ}\\subseteq \\mathcal{S}$, the \\emph{divergence radius}\\footnote{Whilst \\cref{eqn:radius} characterises the divergence radius, we will mostly rely on a more useful form presented in \\cref{defn:radius}.} $\\chi(\\mathcal{S}_{\\circ})$ is given by\n\\begin{align}\n\t\\label{eqn:radius}\n\t\\chi(\\mathcal{S}_{\\circ})\n\t=\\sup_{\\mathbb{P}\\in \\mathcal{P}(\\mathcal{S}_{\\circ})}\\int\\mathrm{d}\\mathbb{P}(\\rho)\\,D\\left( \\rho\\, \\middle\\| \\, \\int\\mathrm{d}\\mathbb{P}(\\rho')\\,\\rho' \\right),\n\\end{align}\nwhere $\\mathcal{P}(\\mathcal{S}_{\\circ})$ denotes the space of distributions on $\\mathcal{S}_{\\circ}$. 
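As an illustrative special case (not needed in what follows), if $\\mathbb{P}$ is finitely supported and assigns probability $p_x$ to the state $\\rho_x$, the objective in \\cref{eqn:radius} is simply the Holevo information of the ensemble $\\lbrace p_x,\\rho_x\\rbrace$,\n\\begin{align}\n\t\\int\\mathrm{d}\\mathbb{P}(\\rho)\\,D\\left( \\rho\\, \\middle\\| \\, \\int\\mathrm{d}\\mathbb{P}(\\rho')\\,\\rho' \\right)=\\sum_x p_x D(\\rho_x\\|\\bar\\rho)=S(\\bar\\rho)-\\sum_x p_x S(\\rho_x),\n\\end{align}\nwhere $\\bar\\rho:=\\sum_x p_x\\rho_x$ and $S(\\rho):=-\\Tr\\rho\\log\\rho$ is the von Neumann entropy. 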
If we let $\\Pi(\\mathcal{S}_{\\circ})$ denote the distributions which achieve the above supremum, we also define the \\emph{minimal and maximal peripheral variance}, $v_{\\min}(\\mathcal{S}_{\\circ})$ and $v_{\\max}(\\mathcal{S}_{\\circ})$, as\n\\begin{align}\n\tv_{\\min}(\\mathcal{S}_{\\circ}):=\\inf_{\\mathbb{P}\\in \\Pi(\\mathcal{S}_{\\circ})} \\int\\mathrm{d}\\mathbb{P}(\\rho)\\,V\\left( \\rho\\, \\middle\\| \\, \\int\\mathrm{d}\\mathbb{P}(\\rho')\\,\\rho' \\right),\\\\\n\tv_{\\max}(\\mathcal{S}_{\\circ}):=\\sup_{\\mathbb{P}\\in \\Pi(\\mathcal{S}_{\\circ})} \\int\\mathrm{d}\\mathbb{P}(\\rho)\\,V\\left( \\rho\\, \\middle\\| \\, \\int\\mathrm{d}\\mathbb{P}(\\rho')\\,\\rho' \\right).\n\\end{align}\n\nFor the image of a quantum channel, the above three information quantities correspond exactly to the three previously defined channel parameters~\\cite{tomamicheltan14}. Specifically, for $\\mathcal{S}_{\\circ}=\\overline{\\mathop{\\mathrm{im}} \\mathcal{W}}$, we have\n\\begin{align}\n\tC(\\mathcal{W})=\\chi(\\mathcal{S}_{\\circ}),\\qquad\\quad\n\tV_{\\min}(\\mathcal{W})=v_{\\min}(\\mathcal{S}_{\\circ}),\\qquad\\quad\n\tV_{\\max}(\\mathcal{W})=v_{\\max}(\\mathcal{S}_{\\circ}).\n\\end{align}\n\n\\subsection{Moderate deviation tail bounds}\n\\label{subsec:moddev}\n\nWe now discuss the relevant tail bounds we will require in the moderate deviation regime. Let $\\lbrace X_{i,n}\\rbrace_{i\\leq n}$ be independent zero-mean random variables, and define the average variance as\n\\begin{align}\n\tV_n:=\\frac{1}{n}\\sum_{i=1}^{n}\\Var[X_{i,n}].\n\\end{align}\n\nRecall that a sequence $\\lbrace t_n\\rbrace_n$ is moderate if $t_n\\searrow 0$ \nand $\\sqrt{n}\\,t_n\\to+\\infty$ as $n\\to\\infty$. Given certain bounds on the moments \nand cumulants \nof these variables, which we will make explicit below, we will see that the \nprobability that the average variable $\\frac{1}{n}\\sum_{i=1}^{n}X_{i,n}$ \ndeviates from the mean by a moderate sequence $\\lbrace t_n\\rbrace_n$ decays \nasymptotically as\n\\begin{align}\n\t\\label{eqn:asymp}\n\t\\ln\\Pr\\left[\\frac{1}{n}\\sum_{i=1}^{n}X_{i,n}\\geq t_n\\right]=-\\bigl(1+o(1)\\bigr)\\frac{nt_n^2}{2V_n}.\n\\end{align}\n\n\\begin{lem}[Moderate deviation lower bound]\n\t\\label{lem:moddev lower}\n\tIf there exist constants $\\nu>0$ and $\\tau$ such that $\\nu\\leq V_n$ and \n\t\\begin{align}\n\t\\frac{1}{n}\\sum_{i=1}^{n}\\mathbb{E}\\left[\\abs{X_{i,n}}^3\\right]\\leq \\tau\n\t\\end{align} \n\tfor all $n$, then for any $\\eta>0$ there exists a constant $N(\\lbrace t_i\\rbrace,\\nu,\\tau,\\eta)\n\t$ such that, for all $n\\geq N$, the probability of a moderate deviation is lower bounded as\n\t\\begin{align}\n\t\\ln\\Pr\\left[\\frac{1}{n}\\sum_{i=1}^nX_{i,n}\\geq t_n\\right]\\geq \n\t-(1+\\eta)\\frac{nt_n^2}{2V_n}.\n\t\\end{align}\n\\end{lem}\n\n\\begin{lem}[Moderate deviation upper bound]\n\t\\label{lem:moddev upper}\n\tIf there exists a constant $\\gamma$ such that \n\t\\begin{align}\n\t\\frac{1}{n}\\sum_{i=1}^{n}\\sup_{s\\in[0,1\/2]}\\abs{\\frac{\\mathrm{d}^3}{\\mathrm{d}s^3}\\ln\\mathbb{E}\\left[e^{sX_{i,n}}\\right]}\\leq \\gamma,\n\t\\end{align}\n\tfor all $n$, then for any $\\eta>0$ there exists a constant $N(\\lbrace t_i\\rbrace, \\gamma,\\eta)\n\t$ such that, for all $n\\geq N$, the probability of a moderate deviation is upper bounded as\n\t\\begin{align}\n\t\\ln \\Pr\\left[\\frac{1}{n}\\sum_{i=1}^{n}X_{i,n}\\geq t_n\\right]\\leq -\\frac{nt_n^2}{2V_n+\\eta}.\n\t\\end{align}\n\\end{lem}\n\nIf $V_n$ has a uniform lower bound, then as $\\eta\\searrow 0$ the above 
two bounds sandwich together, giving the two-sided asymptotic scaling of Eq.~\\ref{eqn:asymp}. In this case we can see that\n\\begin{align}\n\t\\sigma\\left[\\frac{1}{n}\\sum_{i=1}^{n}X_i\\right]=\\sqrt{V_n\/n}=\\Theta(1\/\\sqrt{n})\n\t\\qquad\\text{and}\\qquad\n\t\\sqrt{\\frac{1}{n}\\sum_{i=1}^{n}\\sigma^2\\left[X_i\\right]}=\\sqrt{V_n}=\\Theta(1),\n\\end{align}\nwhere $\\sigma\\left[\\cdot\\right]$ denotes the standard deviation. If we interpret the standard deviation as setting the `length-scale' on which a distribution decays, then the above two quantities---the deviation of the average, and the average\\footnote{More specifically, the root-mean-square.} of the deviation---set the length-scales of small and large deviation bounds respectively. Using this intuition, we can generalise moderate deviation bounds to give tight two-sided bounds for distributions with arbitrary normalisation, for which $V_n$ need no longer be bounded. To do this, we will bound tail probabilities for deviations which are moderate \\emph{in units of }$\\sqrt{V_n}$.\n\n\\begin{cor}[Dimensionless moderate deviation bound]\n\t\\label{cor:moddev nondim}\n\tIf there exists a $\\gamma$ such that\n\t\\begin{align}\n\t\t\\frac{1}{nV_n^{3\/2}}\\sum_{i=1}^{n}\\sup_{s\\in[0,1\/2]}\\abs{\\frac{\\mathrm{d}^3}{\\mathrm{d}s^3}\\ln\\mathbb{E}\\left[e^{sX_{i,n}}\\right]}\\leq \\gamma,\n\t\\end{align}\n\tfor all $n$,\n\tthen for any $\\eta>0$ there exists a constant $N(\\lbrace t_i\\rbrace,\\gamma,\\eta)$ such that, for all $n\\geq N$, we have the two-sided bound \n\t\\begin{align}\n\t\t-(1+\\eta)\\frac{nt_n^2}{2}\\leq \\ln \\Pr\\left[\\frac{1}{n}\\sum_{i=1}^{n}X_{i,n}\\geq t_n\\sqrt{V_n}\\right]\\leq -(1-\\eta)\\frac{nt_n^2}{2}.\n\t\\end{align}\n\\end{cor}\n\nWe present proofs of these lemmas in \\cref{app:tailbounds}.\n\n\\subsection{Reversing lemma}\n\nIntuitively, one might expect that moderate deviation bounds can be `reversed', e.g.\\ that a bound on the probability given the deviation (see Lemmas \\ref{lem:moddev lower} and \\ref{lem:moddev upper}) of the form\n\\begin{align}\n\t\\lim\\limits_{n\\to \\infty}\\frac{V_n}{nt_n^2}\\ln\\Pr\\left[\\frac{1}{n}\\sum_{i=1}^{n}X_i\\geq t_n\\right]=-\\frac{1}{2},\n\\end{align}\nis equivalent to a bound on the deviation given the probability\n\\begin{align}\n\t\\lim\\limits_{n\\to\\infty}\\frac{1}{t_n}\\inf\\left\\lbrace t\\in\\mathbb{R} \\,\\middle|\\, \\frac{V_n}{nt_n^2}\\ln\\Pr\\left[\\frac{1}{n}\\sum_{i=1}^{n}X_i\\geq t\\right]\\leq -\\frac{1}{2} \\right\\rbrace=1.\n\\end{align}\n\nWe will now see that such an ability to `reverse' moderate deviation bounds is generic. We do this by considering two quantities $A$ and $B$ defined on the same domain, and considering the infimum value of each quantity for a fixed value of the other. \n\n\\begin{lem}[Reversing Lemma]\n\t\\label{lem:reverse}\n\tLet $\\lbrace A_i\\rbrace_i$ and $\\lbrace B_i\\rbrace_i$ be sequences of real functions with $\\inf_{t} A_i(t)\\leq 0$ and $\\inf_{t}B_i(t)\\leq 0$ for all $i$. 
If we define $\\hat{A}_n(b):=\\inf_{t} \\left\\lbrace A_n(t) \\middle| B_n(t)\\leq b \\right\\rbrace$ and $\\hat{B}_n(a):=\\inf_{t} \\left\\lbrace B_n(t) \\middle| A_n(t)\\leq a \\right\\rbrace$, then\n\t\\begin{align}\n\t\t\\lim\\limits_{n\\to\\infty}\\frac{\\hat{A}_n(b_n)}{b_n}=1, \\quad\\forall \\lbrace b_n\\rbrace\\text{ moderate}\n\t\t~~\\qquad&\\Longleftrightarrow\\qquad~~ \n\t\t\\lim\\limits_{n\\to\\infty}\\frac{\\hat{B}_n(a_n)}{a_n}=1, \\quad\\forall \\lbrace a_n\\rbrace\\text{ moderate}.\n\t\\end{align}\n\\end{lem}\n\\begin{proof}\n\tSee \\cref{app:reverse}.\n\\end{proof}\n\n\n\\section{Hypothesis testing}\n\\label{sec:hypo}\n\nWhilst the divergence radius characterises the channel capacity, one-shot channel bounds are characterised by a quantity known as the \\emph{$\\epsilon$-hypothesis testing divergence}~\\cite{wang10}. As the name suggests, as well as being relevant to one-shot channel coding bounds, the hypothesis testing divergence also has an operational interpretation in the context of hypothesis testing of quantum states. We will start by considering a moderate deviation analysis of this quantity.\n\n\\subsection{Hypothesis testing divergence}\n\nConsider a hypothesis testing problem, in which $\\rho$ and $\\sigma$ correspond to the null and alternative hypotheses respectively. A test between these hypotheses will take the form of a POVM $\\lbrace Q,I-Q\\rbrace$, where $0\\leq Q\\leq I$. For a given $Q$, the type-I and type-II error probabilities are given by \n\\begin{align}\n\t\\alpha(Q;\\rho,\\sigma):=\\Tr (I-Q)\\rho,\\qquad\\qquad\n\t\\beta(Q;\\rho,\\sigma):=\\Tr Q\\sigma.\n\\end{align}\nIf we define the smallest possible type-II error given a type-I error at most $\\epsilon$ as\n\\begin{align}\n\t\\beta_\\epsilon(\\rho\\|\\sigma):=\\min_{0\\leq Q\\leq \\mathbb{I}}\\left\\lbrace \\beta(Q;\\rho,\\sigma) \\,\\middle|\\, \\alpha(Q;\\rho,\\sigma)\\leq\\epsilon \\right\\rbrace,\n\\end{align}\nthen the $\\epsilon$-hypothesis testing divergence is defined as\n\\begin{align}\n\tD^\\epsilon_{\\mathrm{h}}(\\rho\\|\\sigma):=-\\log\\frac{\\beta_{\\epsilon}(\\rho\\|\\sigma)}{1-\\epsilon}.\n\\end{align}\nWe note that the denominator of $1-\\epsilon$ follows the normalisation in~\\cite{dupuis12} such that $D_{\\mathrm{h}}^\\epsilon(\\rho\\|\\rho)=0$ for all $\\rho$.\n\nAn obvious extension of this hypothesis problem is to the case of $n$ copies of each state, i.e.\\ a hypothesis test between $\\rho^{\\otimes n}$ and $\\sigma^{\\otimes n}$, or more generally between two product states $\\otimes_{i=1}^n\\rho_i$ and $\\otimes_{i=1}^n\\sigma_i$. 
A second-order analysis of the $\\epsilon$-hypothesis testing divergence for a non-vanishing $\\varepsilon$ was given in~\\cite{li12,tomamichel12}.\n\n\\begin{thm}[Moderate deviation of the hypothesis testing divergence]\n\tFor any moderate sequence $\\lbrace a_n\\rbrace_n$ and states $\\lbrace \\rho_n\\rbrace_n$ and $\\lbrace \\sigma_n\\rbrace_n$ such that both $\\lambda_{\\min}(\\sigma_i)$ and $V(\\rho_i\\|\\sigma_i)$ are both uniformly bounded away from zero, the $\\epsilon_n$- and $(1-\\epsilon_n)$-hypothesis testing divergences of non-uniform product states for $\\epsilon_n=e^{-na_n^2}$ scale as\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&=D_n-\\sqrt{2V_n}\\,a_n+o(a_n), \\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&=D_n+\\sqrt{2V_n}\\,a_n+o(a_n),\n\t\\end{align}\n\twhere $D_n:=\\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma_i)$ and $V_n:=\\frac{1}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma_i)$. More specifically for any $\\rho$ and $\\sigma$ such that $\\rho\\ll \\sigma$, the hypothesis testing divergences of uniform product states scale as\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\rho^{\\otimes n} \\middle\\|\\sigma^{\\otimes n}\\right)&=D(\\rho\\|\\sigma)-\\sqrt{2V(\\rho\\|\\sigma)}\\,a_n+o(a_n), \\label{eq:hypo-mod}\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\rho^{\\otimes n} \\middle\\|\\sigma^{\\otimes n}\\right)&=D(\\rho\\|\\sigma)+\\sqrt{2V(\\rho\\|\\sigma)}\\,a_n+o(a_n).\n\t\\end{align}\n\\end{thm}\n\nIn Sect.~\\ref{subsec:inward} we will bound the regularised hypothesis testing divergences towards the relative entropy (the \\emph{inward bound}), and in Sect.~\\ref{subsec:outward} we will bound them away (the \\emph{outward bound}).\n\n\n\\begin{remark}\n\tFor sequences $\\varepsilon_n$ bounded away from zero and one the second-order expansion in Refs.~\\cite{li12,tomamichel12} yields \n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\rho^{\\otimes n} \\middle\\|\\sigma^{\\otimes n}\\right) = D(\\rho\\|\\sigma) + \\sqrt{\\frac{V(\\rho\\|\\sigma)}{n}}\\, \\Phi^{-1}(\\varepsilon_n) + O\\left(\\frac{\\log n}{n}\\right) , \\label{eq:hypo-so}\n\t\\end{align}\n\twhere $\\Phi$ denotes the cumulative distribution function of the standard normal.\n\tAs already pointed out in Ref.~\\cite{polyanskiy10c}, for small $\\varepsilon_n$ we have $\\Phi^{-1}(\\varepsilon_n) \\approx \\sqrt{- 2 \\ln \\varepsilon_n}$. Ignoring all higher order terms, the substitution $\\varepsilon_n = e^{-n a_n^2}$ into~\\eqref{eq:hypo-so} then recovers the expression in~\\eqref{eq:hypo-mod}. 
In this sense, the two results agree at the boundary between small and moderate deviations.\n\\end{remark}\n\n\\begin{remark}\n\tA similar argument can be sketched at the boundary between moderate and large \n\tdeviations.\n\tThe quantum Hoeffding bound~\\cite{nagaoka06,hayashi07} states that if $\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}(\\rho^{\\otimes n}\\|\\sigma^{\\otimes n})\\leq D(\\rho\\|\\sigma) - r$ for some small $r > 0$ then $\\varepsilon_n$ drops exponentially in $n$ with the exponent given by\n\t\\begin{align}\n\t\t\\sup_{0\\leq\\alpha<1}\\frac{\\alpha-1}{\\alpha}\\Bigl[D(\\rho\\|\\sigma)-r-D_{\\alpha}(\\rho\\|\\sigma)\\Bigr] , \\label{eqn:expr1}\n\t\\end{align}\n\twhere $D_{\\alpha}(\\rho\\|\\sigma)$ is the Petz quantum R\\'enyi relative entropy~\\cite{petz86}. For sufficiently small $r$, the expression in~\\eqref{eqn:expr1} attains its supremum close to $\\alpha = 1$ and we can thus approximate $D_{\\alpha}(\\rho\\|\\sigma) \\approx D(\\rho\\|\\sigma) + \\frac{\\alpha-1}{2} V(\\rho\\|\\sigma)$ by its Taylor expansion~\\cite{lintomamichel14}. Evaluating this approximate expression yields \n\t\\begin{align}\n\t\t\\varepsilon_n = e^{- n\\frac{r^2}{2V(\\rho\\|\\sigma)}} \\label{eqn:expr2}\n\t\\end{align} \n\tup to leading order in $r$. Substituting $r = \\sqrt{2V(\\rho\\|\\sigma)}a_n$ into~\\eqref{eqn:expr2} then recovers~\\eqref{eq:hypo-mod}.\n\tAn essentially equivalent argument is also applicable to the strong converse exponent derived in Ref.~\\cite{mosonyiogawa13}.\n\\end{remark}\n\n\\subsection{Nussbaum--Szko\\l a distributions}\n\nTo allow us to apply a moderate deviation analysis to the quantum hypothesis testing divergence, we leverage the results of Ref.~\\cite{tomamichel12}, which allow us to reduce the hypothesis testing divergence of quantum states to the information spectrum divergence of certain classical distributions, known as the Nussbaum--Szko\\l a distributions. \n\n\\begin{defn}[Nussbaum--Szko\\l a distributions~\\cite{NussbaumSzkola2009}]\n\tThe \\emph{Nussbaum--Szko\\l a distributions} for a pair of states $\\rho$ and $\\sigma$ are given by\n\t\\begin{align}\n\t\tP^{\\rho,\\sigma}(a,b)=r_a\\abs{\\braket{\\phi_a}{\\psi_b}}^2\\qquad\\text{and}\\qquad Q^{\\rho,\\sigma}(a,b)=s_b\\abs{\\braket{\\phi_a}{\\psi_b}}^2\n\t\\end{align}\n\twhere the states are eigendecomposed as $\\rho=\\sum_ar_a\\ketbra{\\phi_a}{\\phi_a}$ and $\\sigma=\\sum_bs_b\\ketbra{\\psi_b}{\\psi_b}$. \n\\end{defn}\n\nThe power of the Nussbaum--Szko\\l a distributions lies in their ability to reproduce both the divergence and variance of the underlying quantum states:\n\\begin{align}\n\tD(\\rho\\|\\sigma)=D(P^{\\rho,\\sigma}\\|Q^{\\rho,\\sigma})\n\t,\n\t\\qquad \\text{and}\\qquad \n\tV(\\rho\\|\\sigma)=V(P^{\\rho,\\sigma}\\|Q^{\\rho,\\sigma})\n\t.\n\\end{align}\nAs well as capturing these asymptotic quantities, the hypothesis testing relative entropy, which arises in one-shot channel coding bounds, can also be captured by the Nussbaum--Szko\\l a distributions. 
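As a simple sanity check (a special case only, not used in the proofs), note that if $\\rho$ and $\\sigma$ commute we may choose a common eigenbasis, so that $\\abs{\\braket{\\phi_a}{\\psi_b}}^2=\\delta_{ab}$ and the Nussbaum--Szko\\l a distributions reduce to the eigenvalue distributions of $\\rho$ and $\\sigma$ themselves,\n\\begin{align}\n\tP^{\\rho,\\sigma}(a,b)=r_a\\,\\delta_{ab}\\qquad\\text{and}\\qquad Q^{\\rho,\\sigma}(a,b)=s_b\\,\\delta_{ab},\n\\end{align}\nin which case the two identities above hold by inspection. 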
Specifically, this is done via the \\emph{information spectrum divergence}, which is defined for two classical distributions $P$ and $Q$ by a tail bound on the log-likelihood ratio as\n\\begin{align}\n\tD_{\\mathrm{s}}^{\\epsilon}(P\\|Q):=\\sup\\left\\lbrace R ~\\middle|~ \\Pr_{X\\leftarrow P}\\left[\\log \\frac{P(X)}{Q(X)}\\leq R\\right]\\leq \\epsilon \\right\\rbrace.\n\\end{align}\nInserting the Nussbaum--Szko\\l a distributions, we find that the (classical) information spectrum divergence approximates the (quantum) hypothesis testing divergence.\n\\begin{lem}[Thm.~14, Ref.~\\cite{tomamichel12}]\n\t\\label{lem:infospecdiv}\n\tThere exists a universal constant $K$ such that for any states $\\rho$ and $\\sigma$ with $\\lambda_{\\min}(\\sigma)\\geq \\lambda$ and $\\epsilon< 1\/2$, we find that $D_{\\mathrm{h}}^\\epsilon(\\rho\\|\\sigma)$ is bounded as\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^\\epsilon(\\rho\\|\\sigma)&\\leq D_{\\mathrm{s}}^{2\\epsilon}(P^{\\rho,\\sigma}\\|Q^{\\rho,\\sigma})+\\log\\frac{1-\\epsilon}{\\epsilon^3(1-2\\epsilon)}+\\log K \\lceil \\ln(1\/\\lambda)\\rceil\\\\\n\t\tD_{\\mathrm{h}}^\\epsilon(\\rho\\|\\sigma)&\\geq D_{\\mathrm{s}}^{\\epsilon\/2}(P^{\\rho,\\sigma}\\|Q^{\\rho,\\sigma})-\\log\\frac{1}{\\epsilon(1-\\epsilon)}-\\log K \\lceil \\ln(1\/\\lambda)\\rceil,\n\t\\end{align}\n\tand $D_{\\mathrm{h}}^{1-\\epsilon}(\\rho\\|\\sigma)$ is bounded as\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{1-\\epsilon}(\\rho\\|\\sigma)&\\leq D_{\\mathrm{s}}^{1-\\epsilon\/2}(P^{\\rho,\\sigma}\\|Q^{\\rho,\\sigma})+\\log\\frac{1-\\epsilon\/2}{\\epsilon^4}+\\log K \\lceil \\ln(1\/\\lambda)\\rceil\\\\\n\t\tD_{\\mathrm{h}}^{1-\\epsilon}(\\rho\\|\\sigma)&\\geq D_{\\mathrm{s}}^{1-2\\epsilon}(P^{\\rho,\\sigma}\\|Q^{\\rho,\\sigma})-\\log\\frac{1}{\\epsilon^2}-\\log K \\lceil \\ln(1\/\\lambda)\\rceil.\n\t\\end{align}\n\\end{lem}\n\nAs the information spectrum divergence is defined in terms of a tail bound, we will bound these quantities using the moderate deviation tail bounds of Sect.~\\ref{subsec:moddev}. 
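One structural point worth noting (stated here only for orientation) is that the Nussbaum--Szko\\l a distributions factorise over tensor products, with respect to the product eigendecompositions, so for product states the log-likelihood ratio appearing in $D_{\\mathrm{s}}^{\\epsilon}$ is a sum of independent terms,\n\\begin{align}\n\t\\log\\frac{\\prod_{i=1}^nP^{\\rho_i,\\sigma_i}(a_i,b_i)}{\\prod_{i=1}^nQ^{\\rho_i,\\sigma_i}(a_i,b_i)}=\\sum_{i=1}^n\\log\\frac{P^{\\rho_i,\\sigma_i}(a_i,b_i)}{Q^{\\rho_i,\\sigma_i}(a_i,b_i)},\n\\end{align}\nwhich is exactly the setting of the tail bounds above. 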
To do this, we will start by showing that the log-likelihood ratio of Nussbaum--Szko\\l a distributions is sufficiently well behaved, specifically that its cumulant generating function has bounded derivatives.\n\n\\begin{lem}[Bounded cumulants]\n\t\\label{lem:boundedcumulants}\n\tFor $\\lambda>0$, there exist constants $C_k(\\lambda)$ such that the cumulant generating function $h(t):=\\ln \\mathbb{E}\\left[e^{tZ}\\right]$ of the log-likelihood ratio $Z:=\\log P^{\\rho,\\sigma}\/Q^{\\rho,\\sigma}$ for $\\lambda_{\\min}(\\sigma)\\geq \\lambda$ is smooth and has uniformly bounded derivatives in a neighbourhood of the origin:\n\t\\begin{align}\n\t\\sup_{\\abs{t}\\leq 1\/2}\\abs{\\frac{\\partial^k}{\\partial t^k}h(t)} \\leq C_k.\n\t\\end{align}\n\\end{lem}\nWe present a proof of this lemma in \\cref{app:cumulant}.\n\n\\subsection{Inward bound}\n\\label{subsec:inward}\n\\begin{prop}[Inward bound]\n\t\\label{prop:HTD-inward}\n\tFor any constants $\\lambda,\\eta>0$, there exists a constant $N(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that, for $n\\geq N$, the hypothesis testing divergence can be bounded for any states $\\lbrace\\rho_i\\rbrace_i$ and $\\lbrace \\sigma_i\\rbrace_i$ with\n\t$\\lambda_{\\min}(\\sigma_i)\\geq\\lambda$ as\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n-\\sqrt{2V_n}\\,a_n-\\eta a_n,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n+\\sqrt{2V_n}\\,a_n+\\eta a_n,\n\t\\end{align}\n\twhere $D_n:=\\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma_i)$ and $V_n:=\\frac{1}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma_i)$.\n\\end{prop}\n\\begin{proof}\n\tFirstly, let $Z_i$ be the log-likelihood ratios \n\t\\begin{align}\n\tZ_i:=\\log\\frac{P^{\\rho_i,\\sigma_i}(A_i,B_i)}{Q^{\\rho_i,\\sigma_i}(A_i,B_i)}, \\qquad (A_i,B_i)\\leftarrow P^{\\rho_i,\\sigma_i}.\n\t\\end{align}\n\tIn terms of these log-likelihood ratios, the lower and upper bounds on the $\\epsilon_n$- and $(1-\\epsilon_n)$-hypothesis testing divergences, respectively, from \\cref{lem:infospecdiv} become\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq \\sup\\left\\lbrace R~\\middle|~ \\Pr\\Biggl[\\sum_{i=1}^n Z_i\\leq R\\Biggr]\\leq \\epsilon_n\/2 \\right\\rbrace-\\log \\frac{1}{\\epsilon_n(1-\\epsilon_n)}-\\log Kn\\lceil\\ln(1\/\\lambda) \\rceil,\\\\\n\t\tD_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq \\sup\\left\\lbrace R~\\middle|~ \\Pr\\Biggl[\\sum_{i=1}^n Z_i\\leq R\\Biggr]\\leq 1-\\epsilon_n\/2 \\right\\rbrace+\\log \\frac{1-\\epsilon_n\/2}{\\epsilon_n^4}+\\log Kn\\lceil\\ln(1\/\\lambda) \\rceil.\n\t\\end{align}\n\t\n\tRecalling that $\\epsilon_n:=e^{-na_n^2}$, we can see that in both cases the error terms scale like $\\Theta(na_n^2)$ and $\\Theta(\\log n)$ respectively, which are both $o(na_n)$. 
As such, there must exist an $N_1(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that, for $n\\geq N_1$, these error terms are bounded by $\\eta na_n\/2$ as\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq \\frac{1}{n}\\sup\\left\\lbrace R~\\middle|~ \\Pr\\Biggl[\\sum_{i=1}^n Z_i\\leq R\\Biggr]\\leq \\epsilon_n\/2 \\right\\rbrace-\\eta a_n\/2,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq \\frac{1}{n}\\sup\\left\\lbrace R~\\middle|~ \\Pr\\Biggl[\\sum_{i=1}^n Z_i\\leq R\\Biggr]\\leq 1-\\epsilon_n\/2 \\right\\rbrace+\\eta a_n \/2.\n\t\\end{align}\n\t\n\tNext, we want to apply the tail bounds of Sect.~\\ref{subsec:moddev}. To this end, we will start by defining zero-mean variables $X_i:=Z_i-D(\\rho_i\\|\\sigma_i)$. In terms of these variables, the above bounds take the form\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n-\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (-X_i)\\geq t\\Biggr]\\leq \\epsilon_n\/2 \\right\\rbrace-\\eta a_n\/2,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n+\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n \\left(+X_i\\right)\\geq t\\Biggr]\\leq \\epsilon_n\/2 \\right\\rbrace+\\eta a_n\/2.\n\t\\end{align}\n\t\n\tBy \\cref{lem:boundedcumulants}, there exist constants $\\bar V(\\lambda)$ and $\\gamma(\\lambda)$ such that $V_i\\leq \\bar V$ and\n\t\\begin{align}\n\t\\sup_{s\\in[0,1\/2]}\\abs{\\frac{\\mathrm{d}^3}{\\mathrm{d}s^3}\\ln\\mathbb{E}\\left[e^{s(\\pm X_i)}\\right]}\\leq \\gamma\n\t\\end{align}\n\tfor all $i$. 
If we let $t_n:=\\left(\\sqrt{2V_n}+\\eta\/2\\right)a_n$, then \\cref{lem:moddev upper} gives an $N_2(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that, for $n\\geq N_2$, the tail probability is bounded as\n\t\\begin{align}\n\t\t\\ln\\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\\geq t_n\\Biggr]\n\t\t&\\leq \\frac{-nt_n^2}{2V_n+\\eta^2 \/3}\\\\\n\t\t&\\leq -\\frac{\\left(\\sqrt{2V_n}+\\eta\/2\\right)^2}{2V_n+\\eta^2\/5}na_n^2\\\\\n\t\t&\\leq -\\frac{2V_n+\\eta^2\/4}{2V_n+\\eta^2\/5}na_n^2\\\\\n\t\t&= -\\left(1+\\frac{\\eta^2}{40V_n+4\\eta^2}\\right)na_n^2\\\\\n\t\t&\\leq -\\left(1+\\frac{\\eta^2}{40\\bar{V}+4\\eta^2}\\right)na_n^2.\n\t\\end{align}\n\tAs $\\eta^2\/(40\\bar{V}+4\\eta)$ is a constant and $na_n^2\\to \\infty$, there must exist a constant $N_3(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such $n\\geq N_3$ implies\n\t\\begin{align}\n\t\t\\ln\\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\\geq t_n\\Biggr]\\leq -na_n^2-1=\\ln (\\epsilon_n\/2),\n\t\\end{align}\n\tand therefore that \n\t\\begin{align}\n\t\t\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\\geq t\\Biggr]\\leq \\epsilon_n\/2 \\right\\rbrace\\leq t_n.\n\t\\end{align}\n\tPutting everything together, we get that for any $n\\geq N(\\lbrace a_i\\rbrace,\\lambda,\\eta):=\\max\\lbrace N_1,N_2,N_3\\rbrace$ we have\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n-\\sqrt{2V_n}\\,a_n-\\eta a_n,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n+\\sqrt{2V_n}\\,a_n+\\eta a_n.\n\t\\end{align}\n\tas required.\n\\end{proof}\n\n\\subsection{Outward bound}\n\\label{subsec:outward}\n\n\\begin{prop}[Outward bound]\n\t\\label{prop:HTD-outward}\n\tFor any constants $\\lambda,\\eta>0$, there exists a constant $N(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that, for $n\\geq N$, the hypothesis testing divergence can be bounded for any states $\\lbrace\\rho_i\\rbrace_i$ and $\\lbrace\\sigma_i\\rbrace_i$ with\n\t$\\lambda_{\\min}(\\sigma_i)\\geq\\lambda$ as\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n+\\eta a_n,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n-\\eta\\,a_n.\n\t\\end{align}\n\twhere $D_n:=\\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma_i)$. Moreover, if we \n\tlet $V_n:=\\frac{1}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma_i)$, and there also \n\texists a constant $\\nu>0$ such that $V_i\\geq \\nu$ for all $i$, then there \n\texists an $N'(\\lbrace a_i\\rbrace,\\lambda,\\nu,\\eta)$ such that, for $n\\geq \n\tN'$, the hypothesis testing divergence is more tightly bounded as\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n-\\sqrt{2V_n}a_n+\\eta a_n,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n+\\sqrt{2V_n}\\,a_n-\\eta a_n.\n\t\\end{align}\n\\end{prop}\n\\begin{proof}\n\tSimilar to \\cref{prop:HTD-inward}, we will start by taking the upper and lower bounds on the $\\epsilon_n$- and $(1-\\epsilon_n)$-hypothesis testing divergences respectively from \\cref{lem:infospecdiv}. 
\n\tThis gives that there exists an $N_1(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that, for $n\\geq N_1$, we have\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n-\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (-X_i)\\geq t\\Biggr]\\leq 2\\epsilon_n \\right\\rbrace+\\eta a_n\/2,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n+\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n \\left(+X_i\\right)\\geq t\\Biggr]\\leq 2\\epsilon_n \\right\\rbrace-\\eta a_n\/2,\n\t\\end{align}\n\twhere $X_i:=Z_i-D(\\rho_i\\|\\sigma_i)$.\n\t\n\tFirstly, applying Chebyshev's inequality two standard deviations below the mean gives us that\n\t\\begin{align}\n\t\t\\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\\geq -2\\sqrt{V_n\/n}\\Biggr]\\geq 3\/4\\geq2\\epsilon_n,\n\t\\end{align}\n\tand so we conclude that\n\t\\begin{align}\n\t\t\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n \\left(\\pm X_i\\right)\\geq t\\Biggr]\\leq 2\\epsilon_n \\right\\rbrace\\geq -2\\sqrt{V_n\/n}.\n\t\\end{align}\n\tBy \\cref{lem:boundedcumulants}, $V_n$ must be bounded as $V_n\\leq \\bar{V}(\\lambda)$, and thus $\\sqrt{V_n\/n}=\\mathcal{O}(1\/\\sqrt{n})=o(a_n)$. As such, there must exist an $N_2(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that $n\\geq N_2$ implies $2\\sqrt{V_n\/n}\\leq \\eta a_n\/2$. Inserting this tail bound, we get, for any $n\\geq N(\\lbrace a_i\\rbrace,\\lambda,\\eta):=\\max\\lbrace N_1,N_2\\rbrace$, that\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n+\\eta a_n,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n-\\eta a_n,\n\t\\end{align}\n\tas required.\n\t\n\tIf there also exists a $\\nu > 0$ such that $V_i\\geq \\nu$, then we can use a more refined moderate deviation bound. Specifically, \\cref{lem:boundedcumulants} gives us a bound on the absolute third moment of $X_i$, which allows us to apply \\cref{lem:moddev lower}. 
If we let $t_n:=(\\sqrt{2V_n}-\\eta\/2)a_n$ and assume $\\eta<\\sqrt{8\\nu}$ such that $\\lbrace t_n\\rbrace_n$ is moderate, then this gives us that there exists an $N_3(\\lbrace a_i\\rbrace, \\lambda,\\nu,\\eta)$ such that, for any $n\\geq N_3$, the tail probabilities are bounded\n\t\\begin{align}\n\t\t\\ln \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\n\t\t\\geq t_n\\Biggr]\n\t\t&\\geq -\\Bigl(1+\\eta\/\\sqrt{2\\bar{V}}\\,\\Bigr)\\frac{nt_n^2}{2V_n} \\\\\n\t\t&\\geq -\\frac{\\left(1+\\eta\/\\sqrt{2\\bar{V}}\\right)\\left(\\sqrt{2V_n}-\\eta\/2\\right)^2}{2V_n}na_n^2 \\\\\n\t\t&\\geq -\\frac{\\left(1+\\eta\/\\sqrt{2V_n}\\right)\\left(\\sqrt{2V_n}-\\eta\/2\\right)^2}{2V_n}na_n^2 \\\\\n\t\t&\\geq -\\left(1-\\frac{5\\eta^2}{8\\bar V}\\right)na_n^2.\n\t\\end{align}\n\tOnce again, the second term in the parenthesis is a non-zero constant, and thus there must exist an $N_4(\\lbrace a_i\\rbrace,\\lambda,\\eta)$ such that\n\t\\begin{align}\n\t\t\\log \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\n\t\t\\geq t_n\\Biggr]\\geq -na_n^2+1=\\ln 2\\epsilon_n,\n\t\\end{align}\n\tallowing us to conclude $\\Pr\\left[\\frac{1}{n}\\sum_{i=1}^n (\\pm X_i)\n\t\\geq t_n\\right]\\geq 2\\epsilon_n$, and therefore \n\t\\begin{align}\n\t\t\\inf\\left\\lbrace t~\\middle|~ \\Pr\\Biggl[\\frac{1}{n}\\sum_{i=1}^n X_i\\geq t\\Biggr]\\leq 2\\epsilon_n \\right\\rbrace\\geq t_n.\n\t\\end{align}\n\tInserting this into the above bounds, we find that for any $n\\geq N'(\\lbrace a_i\\rbrace,\\lambda,\\nu,\\eta):=\\max\\lbrace N_1,N_3,N_4\\rbrace$, we have the desired final bound\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\leq D_n-\\sqrt{2V_n}\\,a_n+\\eta a_n,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\|\\,\\bigotimes_{i=1}^n\\sigma_i\\right)&\\geq D_n+\\sqrt{2V_n}\\,a_n-\\eta a_n.\n\t\\end{align}\n\\end{proof}\n\n\n\\section{Channel Coding}\n\\label{sec:channels}\n\nWe are now going to show how the above moderate deviation bounds can be applied to the capacity of a classical-quantum channel. 
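For concreteness, the canonical family of moderate sequences to keep in mind is $a_n=n^{-\\beta}$ with $\\beta\\in(0,1\/2)$, in which case\n\\begin{align}\n\t\\epsilon_n=e^{-na_n^2}=e^{-n^{1-2\\beta}}\\qquad\\text{and}\\qquad \\sqrt{2V_{\\min}(\\mathcal{W})}\\,a_n=\\sqrt{2V_{\\min}(\\mathcal{W})}\\,n^{-\\beta},\n\\end{align}\nso that the error probability vanishes sub-exponentially while, by \\cref{thm:moddevcode} below, the optimal rate backs off from capacity only polynomially; this interpolates between the large deviation regime ($\\beta\\to 0$) and the small deviation regime ($\\beta\\to 1\/2$). We stress that this family is merely illustrative, and the results below hold for arbitrary moderate sequences. 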
\n\n\\begin{thm}[Moderate deviation of c-q channels]\n\t\\label{thm:moddevcode}\n\tFor any moderate sequence $\\lbrace a_n\\rbrace_n$ and memoryless c-q channel \n\t$\\mathcal{W}$ with capacity $C(\\mathcal{W})$ and min-dispersion $V_{\\min}(\\mathcal{W})$, being \n\toperated at error probability no larger than $\\epsilon_n:=e^{-na_n^2}$, the \n\toptimal rate deviates below the capacity as\n\t\\begin{align}\n\t\tR^*\\left(\\mathcal{W} ;n,\\epsilon_n\\right)=C(\\mathcal{W})-\\sqrt{2V_{\\min}(\\mathcal{W})}\\,a_n+o(a_n).\n\t\\end{align}\n\tConversely, if the channel has max-dispersion $V_{\\max}$ and is operated at \n\terror probability no larger than $1-\\epsilon_n$, then the optimal rate \n\tdeviates above the capacity as\n\t\\begin{align}\n\t\tR^*\\left(\\mathcal{W};n,1-\\epsilon_n\\right)=C(\\mathcal{W})+\\sqrt{2V_{\\max}(\\mathcal{W})}\\,a_n+o(a_n).\n\t\\end{align}\n\\end{thm}\nIf either the min- or max-dispersion is non-zero, an application of \\cref{lem:reverse} gives an equivalent formulation in terms of the minimal error probability at a given rate.\n\n\\begin{cor}\n\tFor any moderate sequence $\\lbrace s_n\\rbrace_n$, the minimal error probability of a channel with min-dispersion $V_{\\min}>0$, operated at a rate deviating below capacity by $s_n$, scales as\n\t\\begin{align}\n\t\t\\lim\\limits_{n\\to \\infty}\\frac{1}{ns_n^2}\\ln\\epsilon^*(\\mathcal{W};n,C-s_n)=-\\frac{1}{2V_{\\min}}.\n\t\\end{align}\n\tSimilarly, for a channel with max-dispersion $V_{\\max}>0$ operated at a rate deviating above capacity by $s_n$, the error probability scales as\n\t\\begin{align}\n\t\\lim\\limits_{n\\to \\infty}\\frac{1}{ns_n^2}\\ln\\bigl( 1-\\epsilon^*(\\mathcal{W};n,C + s_n)\\bigr)=-\\frac{1}{2V_{\\max}}.\n\t\\end{align}\n\\end{cor}\n\n\\begin{remark}\n\tRecall that our definition of c-q channels does not put any restriction on the input set. In particular, this set may itself consist of quantum states, such that the c-q channel is just a representation of a quantum channel. Hence, as pointed out in Ref.~\\cite{tomamicheltan14}, our results immediately also apply to classical communication over general image-additive channels~\\cite{wolf14} as well as classical communication over quantum channels with encoders restricted to prepare separable states. We refer the reader to Corollaries 6 and 7 of Ref.~\\cite{tomamicheltan14} for details. \n\\end{remark}\n\nWe will split the proof of \\cref{thm:moddevcode} in two: in Sect.~\\ref{subsec:coding1} we will prove a lower bound on the maximum rate (`achievability'), followed in Sect.~\\ref{subsec:coding2} by a corresponding upper bound (`optimality'). 
For the rest of this section, we will fix the channel $\\mathcal{W}$, and omit any dependencies on $\\mathcal{W}$ from here on for notational convenience.\n\n\\subsection{Achievability}\n\\label{subsec:coding1}\n\nFor achievability, we will use a lower bound on the $\\epsilon$-one-shot rate that is essentially due to Hayashi and Nagaoka~\\cite{hayashi03} who analysed the coding problem using the information spectrum method.\n\n\\begin{lem}[Theorem 1 of Ref.~\\cite{wang10}]\n\t\\label{lem:wangrenner}\n\tIf we have a c-q channel which maps from a finite message space $Y$ as $y\\mapsto \\rho^{(y)}$ , then the maximum rate with error probability at most $\\epsilon$ and $1-\\epsilon$, $R^*(\\epsilon)$ and $R^*(1-\\epsilon)$ respectively, are lower bounded\n\t\\begin{align}\n\t\tR^*(\\epsilon) &\\geq \\sup_{P_Y}\n\t\tD^{\\epsilon\/2}_{\\mathrm{h}} \\left( \\pi_{YZ} \\middle\\| \\pi_Y\\otimes \\pi_Z \\right) -\\log\\frac{8(2-\\epsilon)}{\\epsilon}\\\\\n\t\tR^*(1-\\epsilon) &\\geq \\sup_{P_{Y}}\n\t\tD^{1-2\\epsilon}_{\\mathrm{h}} \\left( \\pi_{YZ} \\middle\\| \\pi_Y\\otimes\\pi_Z \\right) -\\log\\frac{8(1-\\epsilon)}{\\epsilon}\n\t\\end{align}\n\twhere $\\pi_{YZ}$ is the joint state of the input and output, with inputs chosen according to the distribution $P_{Y}$\n\t\\begin{align}\n\t\t\\pi_{YZ}:=\\sum_{y\\in Y}P_{Y}(y)\\ketbra{y}{y}_{Y}\\otimes \\rho^{(y)}_{Z}.\n\t\\end{align}\n\\end{lem}\n\n\\begin{prop}[Channel coding: Achievability]\n\t\\label{prop:codingachievability}\n\tFor any moderate sequence $\\lbrace a_n\\rbrace_n$ and error probability $\\epsilon_n:=e^{-na_n^2}$, the rate is at least\n\t\\begin{align}\n\t\tR^*\\left(n,\\epsilon_n\\right)\\geq C-\\sqrt{2V_{\\min}}a_n+o(a_n).\n\t\\end{align}\n\tSimilarly, at error probability $1-\\epsilon_n$, the rate is at least\n\t\\begin{align}\n\t\tR^*\\left(n,1-\\epsilon_n\\right)\\geq C+\\sqrt{2V_{\\max}}a_n+o(a_n).\n\t\\end{align}\n\\end{prop}\n\\begin{proof}\n\tLet $X$ be our, possibly infinite, message space. By Lemma 3 of Ref.~\\cite{tomamicheltan14}, there exists a finite subset $Y\\subseteq X$, and a distribution $Q_{Y}$ thereon, such that $D(\\rho\\|\\sigma)=C$ and $V(\\rho\\|\\sigma)=V_{\\min}$ for states\n\t\\begin{align}\n\t\t\\rho:=\\sum_{y\\in Y}Q_{Y}(y)\\ketbra{y}{y}\\otimes \\rho^{(y)} \n\t\t\\qquad \\text{and} \\qquad \\sigma:=\\sum_{y\\in Y}Q_{Y}(y)\\ketbra{y}{y}\\otimes \\sum_{y'\\in Y}Q_{Y}(y')\\rho^{(y')}.\n\t\\end{align}\n\t\n\tClearly by restricting the message space we can only ever decrease the rate. By applying \\cref{lem:wangrenner} to the restriction of the message space to $Y$, we can lower bound the maximum rate of the full code. 
Applying this reasoning to $n$ memoryless applications of our channel we find\n\t\\begin{align}\n\t\tnR^*(n,\\epsilon_n) \n\t\t&\\geq \\sup_{P_{{Y}^n}} D^{\\epsilon_n\/2}_{\\mathrm{h}} \\left( \\pi_{Y^nZ^n} \\middle\\| \\pi_{Y^n}\\otimes \\pi_{Z^n} \\right) -\\log\\frac{8(2-\\epsilon_n)}{\\epsilon_n}.\n\t\\end{align}\n\tSubstituting in both the error probability, which is no larger than $\\epsilon_n=e^{-na_n^2}$, and a product distribution $Q_{{Y}^n}(\\vec{y}):= \\prod_{i=1}^n Q_{Y}(y_i)$ then we get\n\t\\begin{align}\n\t\tR^*(n,\\epsilon_n) \\geq \n\t\t\\frac{1}{n}D^{\\epsilon_n\/2}_{\\mathrm{h}} \\left( \\rho^{\\otimes n} \\middle\\| \\sigma^{\\otimes n} \\right)+\\mathcal{O}(a_n^2).\\label{ineq:rate}\n\t\\end{align}\n\tApplying \\cref{prop:HTD-inward}, we get an overall bound on the rate of\n\t\\begin{align}\n\t\tR^*(n,\\epsilon_n) \\geq \n\t\tC-\\sqrt{2V_{\\min}}\\,a_n+o(a_n).\n\t\\end{align}\n\tIf instead we were to take a distribution $Q_{Y}$ such that $V(\\rho\\|\\sigma)=V_{\\max}$, then the same arguments would allow us to use \\cref{prop:HTD-outward} to analogously give\n\t\\begin{align}\n\t\tR^*(n,1-\\epsilon_n) \\geq \n\t\tC+\\sqrt{2V_{\\max}}\\,a_n+o(a_n).\n\t\\end{align}\n\\end{proof}\n\n\\subsection{Optimality}\n\\label{subsec:coding2}\nSimilar to the second-order analysis of Ref.~\\cite{tomamicheltan14}, we are going to do this by relating the capacity and one-shot maximum rates to geometric quantities known as the divergence radius and divergence centre. \n\n\\begin{defn}[Divergence radius and centre]\n\t\\label{defn:radius}\n\tFor some set of states $\\mathcal{S}_0\\subseteq\\mathcal{S}$, the \\emph{divergence radius} $\\chi(\\mathcal{S}_0)$ and \\emph{divergence centre} $\\sigma^*(\\mathcal{S}_0)$ are defined as \n\t\\begin{align}\n\t\t\\chi(\\mathcal{S}_0):=\\mathop{\\mathrm{inf}\\vphantom{p}}_{\\sigma\\in\\mathcal{S}} \\sup_{\\rho\\in\\mathcal{S}_0}~D(\\rho\\|\\sigma), \\qquad\\qquad \n\t\t\\sigma^*(\\mathcal{S}_0):=\\mathop\\mathrm{arg~min}\\limits_{\\sigma\\in \\mathcal{S}}\\sup_{\\rho\\in\\mathcal{S}_0}D(\\rho\\|\\sigma).\n\t\\end{align}\n\tSimilarly the\n\t$\\epsilon$-\\emph{hypothesis testing divergence radius} $\\chi_{\\mathrm{h}}^\\epsilon(\\mathcal{S}_0)$ is defined as \n\t\\begin{align}\n\t\t\\chi_{\\mathrm{h}}^{\\epsilon}(\\mathcal{S}_0):=\\mathop{\\mathrm{inf}\\phantom{p}}_{\\sigma\\in\\mathcal{S}} \\sup_{\\rho\\in\\mathcal{S}_0}~D_{h}^{\\epsilon}(\\rho\\|\\sigma).\n\t\\end{align}\n\\end{defn}\n\nWhilst we have seen that the divergence radius captures the capacity of a channel, the $\\epsilon$-hypothesis testing divergence radius approximates the one-shot capacity.\n\n\\begin{lem}[Proposition 5 of \\cite{tomamicheltan14}]\n\t\\label{lem:rate}\n\tFor $\\mathcal{I}:=\\overline{\\mathop{\\mathrm{im}} \\mathcal{W}}$, the maximum rate with error probability at most $\\epsilon$, $R^*(\\epsilon)$, is upper bounded as\n\t\\begin{align}\n\t\tR^*(\\epsilon)\\leq \\chi_{\\mathrm{h}}^{2\\epsilon}(I)+\\log\\frac{2}{1-2\\epsilon}.\n\t\\end{align}\n\tSimilarly for an error probability $1-\\epsilon$, the maximum rate is upper bounded as\n\t\\begin{align}\n\t\tR^*(1-\\epsilon)\\leq \\chi_{\\mathrm{h}}^{1-\\epsilon\/2}(I)+\\log\\frac{2(2-\\epsilon)}{\\epsilon^2}.\n\t\\end{align}\n\\end{lem}\n\nIf we take $\\mathcal{I}_n:=\\overline{\\mathop{\\mathrm{im}} \\mathcal{W}^{\\otimes n}}$ to be the closure of the image of $n$ uses of this channel, then we can extend this bound on the one-shot rate to the $n$-shot rate as\n\\begin{align}\n\tnR^*(n,\\epsilon_n)&\\leq 
\\chi_{\\mathrm{h}}^{2\\epsilon_n}(\\mathcal{I}_n)+\\log\\frac{2}{1-\\epsilon_n},\\\\\n\tnR^*(n,1-\\epsilon_n)&\\leq \\chi_{\\mathrm{h}}^{1-\\epsilon_n\/2}(\\mathcal{I}_n)+\\log\\frac{2(2-\\epsilon_n)}{\\epsilon_n^2}.\n\\end{align}\nAs we are considering memoryless c-q channels, $\\mathcal{I}_n$ simply consists of elementwise tensor products of $\\mathcal{I}$:\n\\begin{align}\n\t\\mathcal{I}_n=\\left\\lbrace \\bigotimes_{i=1}^n\\rho_i \\,\\middle|\\, \\rho_i\\in \\mathcal{I} \\right\\rbrace.\n\\end{align}\n\nOnce again, we are going to take $a_n$ to be an arbitrary moderate sequence, and $\\epsilon_n:=e^{-na_n^2}$. Expanding this out gives bounds on the rate of\n\\begin{align}\n\tR^*(n,\\epsilon_n)&\\leq \\inf_{\\sigma^n}\\sup_{\\lbrace \\rho_i\\rbrace\\subseteq \\mathcal{I}}~\\frac{1}{n}D^{2\\epsilon_n}_{\\mathrm{h}}\\left(\\bigotimes_{i=1}^n\\rho_i \\middle\\|\\sigma^n \\right) +\\frac{1}{n}\\log \\frac{2}{1-\\epsilon_n},\\label{ineq:converse}\\\\\n\tR^*(n,1-\\epsilon_n)&\\leq \\inf_{\\sigma^n}\\sup_{\\lbrace \\rho_i\\rbrace\\subseteq \\mathcal{I}}~\\frac{1}{n}D^{1-\\epsilon_n\/2}_{\\mathrm{h}}\\left(\\bigotimes_{i=1}^n\\rho_i \\middle\\|\\sigma^n \\right) +\\frac{1}{n}\\log \\frac{2(2-\\epsilon_n)}{\\epsilon_n^2}.\n\\end{align}\n\nA standard approach now is to pick a state $\\sigma^n$ such that we can bound \nthe above quantities for arbitrary sequences $\\lbrace \\rho_i\\rbrace$ using the \nmoderate deviation analysis of the hypothesis testing divergence presented in \nSect.~\\ref{sec:hypo}. To do this, we need to consider two cases. The \\emph{high cases} are those in which the empirical relative entropy corresponding to \n$\\lbrace \\rho_i\\rbrace _{i=1}^n$ is close to capacity, and the \\emph{low cases} \nare those in which the empirical relative entropy corresponding to $\\lbrace \n\\rho_i\\rbrace _{i=1}^n$ is far from capacity. Specifically, for some constant \n$\\gamma$ that will be chosen later, the $n$ which correspond to high and low \ncases are denoted by $H(\\lbrace\\rho_i\\rbrace,\\gamma)$ and \n$L(\\lbrace\\rho_i\\rbrace,\\gamma)$, respectively. They are defined as\n\\begin{align}\n\tH(\\lbrace\\rho_i\\rbrace,\\gamma):=\\left\\lbrace n~ \\middle| \n\t\\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\bar\\rho_n)\\geq C-\\gamma \\right\\rbrace\n\t\\quad\\text{and}\\quad\n\tL(\\lbrace\\rho_i\\rbrace,\\gamma):=\\left\\lbrace n~ \\middle| \n\t\\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\bar\\rho_n)< C-\\gamma \\right\\rbrace\n\\end{align}\nsuch that $H(\\lbrace\\rho_i\\rbrace,\\gamma)$ and $L(\\lbrace\\rho_i\\rbrace,\\gamma)$ \nbipartition $\\mathbb{N}$ for all $\\gamma$.\n\nBefore employing a moderate deviation bound, we are going to construct a separable state $\\sigma^n$ that will allow us two different moderate deviation analyses for low and high sequences, such that we can obtain the required bounds in both cases. A convenient choice of $\\sigma^n$ would be $\\sigma^n={\\bar\\rho_n}^{\\otimes n}$ where $\\bar\\rho_n:=\\frac{1}{n}\\sum_{i=1}^n\\rho_i$, but the order of the infimum and supremum requires $\\sigma^n$ to be chosen independently of the sequence $\\lbrace \\rho_i\\rbrace$. Instead, we are going to construct $\\sigma^n$ from a mixture of states that lie in a covering of $\\mathcal{S}$, and the divergence centre $\\sigma^*(\\mathcal{I})$. 
\n\nThe following lemma is based on a construction in Lemma II.4 of Ref.~\\cite{hayden04b}.\n\n\\begin{lem}[Lemma 18 of Ref.~\\cite{tomamicheltan14}]\n\tFor every $\\delta\\in(0,1)$, there exists a set $\\mathcal{C}^\\delta\\subset\\mathcal{S}$ of size\n\t\\begin{align}\n\t\\abs{\\mathcal{C}^\\delta}\\leq \\left(\\frac{20(2d+1)}{\\delta}\\right)^{2d^2}\\left(\\frac{8d(2d+1)}{\\delta}+2\\right)^{d-1}\\leq\\left(\\frac{90d}{\\delta^2}\\right)^{2d^2}\n\t\\end{align}\n\tsuch that, for every $\\rho\\in \\mathcal{S}$ there exists a state $\\tau\\in\\mathcal{C}^\\delta$ such that\n\t\\begin{align}\n\tD(\\rho\\|\\tau)\\leq \\delta\\qquad\n\t\\textrm{and}\\qquad\\lambda_{\\mathrm{min}}(\\tau)\\geq \\frac{\\delta}{8d(2d+1)+\\delta}\\geq \\frac{\\delta}{25d^2}.\n\t\\end{align} \n\\end{lem}\n\nGiven this covering upon states, we now want to take $\\sigma^n$ to be the separable state given by a mixture over such a covering, and the divergence centre\n\\begin{align}\n\t\\sigma^n(\\gamma):=\n\t\\frac{1}{2}\\sigma^*(I)^{\\otimes n}+\n\t\\frac{1}{2\\abs{\\mathcal{C}^{\\gamma\/4}}}\\sum_{\\tau\\in\\mathcal{C}^{\\gamma\/4}} \\tau^{\\otimes n}.\\label{eqn:sigmadef}\n\\end{align}\nUsing the inequality \n\\begin{align}\n\tD_{\\mathrm{h}}^\\epsilon\\bigl(\\rho\\,\\big\\|\\,\\mu\\sigma+(1-\\mu)\\sigma'\\bigr)\\leq D^\\epsilon_{\\mathrm{h}}(\\rho\\|\\sigma)-\\log \\mu\n\\end{align}\nwe will be able to bound divergences with respect to $\\sigma^n$ by those divergences with respect to either elements of $\\mathcal{C}^{\\gamma\/4}$, or $\\sigma^*$.\n\nWe will start by considering the low case. We will see that this case only accounts for hypothesis testing relative entropies which are below the capacity by a constant amount.\n\n\\begin{lem}[Low case]\n\t\\label{lem:low}\n\tFor any $\\gamma>0$, there exists a constant $N(\\lbrace a_i\\rbrace,\\gamma)$ such that\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq C-\\gamma\/4,\\\\\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq C-\\gamma\/4,\n\t\\end{align}\n\tfor any $\\lbrace\\rho_i\\rbrace_i\\subset I$, $n\\in \n\tL(\\lbrace\\rho_i\\rbrace,\\gamma)$ and $n\\geq N$.\n\\end{lem}\n\\begin{proof}\n\tWe are going to start by considering the $\\epsilon_n$-hypothesis testing divergence. Take $\\tau_n$ to be the closest element in $\\mathcal{C}^{\\gamma\/4}$ to $\\bar{\\rho}_n$, such that $D(\\bar{\\rho}_n\\|\\tau_n)\\leq \\gamma\/4$. 
Splitting out the $\\tau_n$ term from $\\sigma_n(\\gamma)$, we have\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\, \\sigma_n(\\gamma)\\right)\n\t\t&\\leq D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\, \\tau_n^{\\otimes n}\\right)+\\log 2\\abs{\\mathcal{C}^\\gamma}\\\\\n\t\t&\\leq D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\, \\tau_n^{\\otimes n}\\right)+2d^2\\log\\left(\\frac{120d}{\\gamma^2}\\right).\n\t\\end{align}\n\tAs the final term depending on $\\abs{\\mathcal{C}^{\\gamma\/4}}$ is independent of $n$, there must exist a constant $N_1(\\gamma)$ such that $2d^2\\log(120d\/\\gamma^2)\\leq n\\gamma\/4$ for any $n\\geq N_1$, and thus that\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\sigma_n(\\gamma)\\right)\n\t\t\\leq \\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\tau_n^{\\otimes n}\\right)+\\gamma\/4.\n\t\\end{align}\n\t\n\tApplying \\cref{prop:HTD-outward} to the $\\epsilon_n$-hypothesis testing relative entropy with respect to $\\tau_n$ we get that there exists an $N_2(\\lbrace a_i\\rbrace,\\gamma)$ such that \n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\tau_n^{\\otimes} \\right)\\leq \\frac{1}{n}\\sum_{i=1}^nD(\\rho_i\\|\\tau_n)+\\gamma\/4,\n\t\\end{align}\n\tfor any $n\\geq N_2$. As for the divergence terms given with respect to $\\tau_n$, we can rearrange them in terms of divergences relative to the sequence mean $\\bar\\rho_n$ using the information geometric Pythagorean theorem, yielding\n\t\\begin{align}\n\t\t\\sum_{i=1}^{n}D(\\rho_i\\|\\tau_n)\n\t\t&=\\sum_{i=1}^{n}\\Tr \\rho_i(\\log\\rho_i-\\log\\bar\\rho_n)+\\sum_{i=1}^{n}\\Tr\\rho_i(\\log\\bar\\rho_n-\\log \\tau_n)\\label{eqn:line2}\\\\\n\t\t&=\\sum_{i=1}^{n}D(\\rho_i\\|\\bar\\rho_n)+nD(\\bar\\rho_n\\|\\tau_n)\\\\\n\t\t&\\leq\\sum_{i=1}^{n}D(\\rho_i\\|\\bar\\rho_n)+n\\gamma\/4.\n\t\\end{align}\n\t\n\tIf we let $N(\\lbrace a_i\\rbrace, \\gamma):=\\max\\lbrace N_1,N_2\\rbrace$, then pulling the above results together we see that for any $n\\geq N$ \n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\sigma_n(\\gamma) \\right)\n\t\t&\\leq\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\tau_n^{\\otimes n}\\right)+\\gamma\/4\\\\\n\t\t&\\leq\\frac{1}{n}\\sum_{i=1}^nD(\\rho_i\\|\\tau_n)+2\\gamma\/4\\\\\n\t\t&\\leq\\frac{1}{n}\\sum_{i=1}^nD(\\rho_i\\|\\bar{\\rho}_n)+3\\gamma\/4.\n\t\\end{align}\n\tFinally, since $n\\in L(\\lbrace\\rho_i\\rbrace,\\gamma)$ the average relative \n\tentropy is bounded away from capacity, and we arrive at the bound:\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\sigma_n(\\gamma) \\right)\\leq C-\\gamma\/4.\n\t\\end{align}\n\t\n\tAs we only relied on \\cref{prop:HTD-outward} to bound the regularised \n\thypothesis testing divergence to within a constant of the average relative \n\tentropy, we could perform a similar analysis for the \n\t$(1-\\epsilon_n)$-hypothesis testing divergence using \\cref{prop:HTD-inward} \n\tinstead, which gives\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\middle\\| \\sigma_n(\\gamma) \\right)\\leq C-\\gamma\/4.\n\t\\end{align}\n\\end{proof}\n\nNow that we 
have dealt with cases far from capacity, we turn our attention to the high cases.\n\n\\begin{lem}[High case]\n\t\\label{lem:high}\n\tFor any $\\eta>0$, there exist constants $\\Gamma(\\eta)$ and $N(\\lbrace a_i\\rbrace,\\eta)$, such that\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\Gamma)\\right)\\leq C-\\sqrt{2V_{\\min}}\\,a_n+\\eta a_n\n\t\\end{align}\n\tfor any $\\lbrace\\rho_i\\rbrace_i\\subset I$, $n\\in \n\tH(\\lbrace\\rho_i\\rbrace,\\Gamma)$ and $n\\geq N$. Similarly, the \n\t$(1-\\epsilon_n)$-hypothesis testing relative entropy is bounded\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\Gamma)\\right)\\leq C+\\sqrt{2V_{\\max}}a_n+\\eta a_n.\n\t\\end{align}\n\\end{lem}\n\\begin{proof}\n\tSplitting out the $\\sigma^*$ factor within $\\sigma_n(\\gamma)$ gives\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,{\\sigma^*}^{\\otimes n}\\right)+\\log 2,\\\\\n\t\tD_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,{\\sigma^*}^{\\otimes n}\\right)+\\log 2.\n\t\\end{align}\n\tAs $\\frac{1}{n}\\log 2=o(a_n)$, there exists an $N_1(\\lbrace a_i\\rbrace)$ such that $n\\geq N_1$ implies \n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,{\\sigma^*}^{\\otimes n}\\right)+\\eta a_n\/3,\\\\\n\t\tD_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,{\\sigma^*}^{\\otimes n}\\right)+\\eta a_n\/3.\n\t\\end{align}\n\tWe now wish to employ a moderate deviation result. We will start by addressing the $\\epsilon_n$-hypothesis testing divergence. For the weaker bound of \\cref{prop:HTD-outward} we will have no required bounds on $\\frac{1}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma^*)$, but for the stronger bound we will need a uniform lower bound.\n\t\n\tIf $V_{\\min}\\leq \\eta^2\/18$, then the weakened bound of \\cref{prop:HTD-outward} is sufficient, giving an $N_2(\\lbrace a_n\\rbrace,\\eta)$ such that $n\\geq N_2$ implies\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,{\\sigma^*}^{\\otimes n}\\right)+\\eta a_n\/3\\\\\n\t\t&\\leq \\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma^*)+2\\eta a_n\/3\\\\\n\t\t&\\leq \\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma^*)-\\sqrt{2V_{\\min}}\\,a_n+\\eta a_n\\\\\n\t\t&\\leq C-\\sqrt{2V_{\\min}}\\,a_n+\\eta a_n.\n\t\\end{align}\n\t\n\tNext we need to consider the case where $V_{\\min}>\\eta^2\/18$. To do this, we will need to establish a lower bound on $\\frac{1}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma^*)$, which places it near $V_{\\min}$. The min-dispersion is defined for distributions which exactly achieve capacity; we will now consider an analogous quantity for distributions which are \\emph{near} capacity. 
Specifically\n\t\\begin{align}\n\t\tV_{\\min}(\\gamma):=\\inf_{P\\in\\mathcal{P}(\\mathcal{I})}\\left\\lbrace \n\t\t\\int\\mathrm{d}P(\\rho)~V\\left(\\rho\\middle\\|\\sigma^*\\right) \n\t\t~\\middle|~\n\t\t\\int\\mathrm{d}P(\\rho)~D\\left(\\rho~\\middle\\|\\int\\mathrm{d}P(\\rho')~\\rho'\\right)\\geq C-\\gamma\n\t\t\\right\\rbrace.\n\t\\end{align}\n\tBy definition of the channel dispersion we have that $V_{\\min}(0)=V_{\\min}$. By Lemma 22 of Ref.~\\cite{tomamicheltan14} we can strengthen this to $\\lim_{\\gamma\\to 0^+}V_{\\min}(\\gamma)=V_{\\min}$, and so for any $\\eta>0$ there must exist a constant $\\Gamma(\\eta)$ such that \n\t\\begin{align}\n\t\t\\sqrt{2V_{\\min}(\\Gamma)}\\geq \\sqrt{2V_{\\min}}-\\eta\/3.\\label{eqn:V}\n\t\\end{align} \n\tAs $V_{\\min}\\geq \\eta^2\/18$, this implies that $V_{\\min}(\\Gamma)>0$.\n\t\n\tNext, let $P_n$ be the empirical distribution corresponding to the set \n\t$\\lbrace\\rho_i\\rbrace_{i=1}^n$, i.e.\\ \n\t$P_n(\\rho):=\\frac{1}{n}\\sum_{i=1}^{n}\\delta(\\rho-\\rho_i)$. For all $n\\in \n\tH(\\lbrace\\rho_i\\rbrace,\\Gamma)$, these distributions are near capacity\n\t\\begin{align}\n\t\t\\int\\mathrm{d}P_n(\\rho)~ D\\left(\\rho~\\middle\\|\\int\\mathrm{d}P_n(\\rho')~\\rho'\\right)=\\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\bar{\\rho}_n)\\geq C-\\Gamma,\n\t\\end{align}\n\tand so we can lower bound the average variance with respect to the divergence centre\n\t\\begin{align}\n\t\t\\frac{1}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma^*)=\\int\\mathrm{d}P(\\rho)~ V(\\rho\\|\\sigma^*)\\geq V_{\\min}(\\Gamma)>0.\n\t\\end{align}\n\tUsing this lower bound, we can apply the stronger bound from \n\t\\cref{prop:HTD-outward} to give a constant $N_3(\\lbrace a_i\\rbrace,\\eta)$, \n\tsuch that, for every $n\\in H(\\lbrace\\rho_i\\rbrace,\\Gamma)$ and $n\\geq N_3$, \n\tthe hypothesis testing divergence is upper bounded\n\t\\begin{align}\n\t\tD_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)&\\leq\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,{\\sigma^*}^{\\otimes n}\\right)+\\eta a_n\/3\\\\\n\t\t&\\leq \\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma^*)-\\sqrt{\\frac{2}{n}\\sum_{i=1}^{n}V(\\rho_i\\|\\sigma^*)}a_n+2\\eta a_n\/3\\\\\n\t\t&\\leq \\frac{1}{n}\\sum_{i=1}^{n}D(\\rho_i\\|\\sigma^*)-\\sqrt{2V_{\\min}(\\Gamma)}\\,a_n+2\\eta a_n\/3\\\\\n\t\t&\\leq C-\\sqrt{2V_{\\min}}\\,a_n+\\eta a_n.\n\t\\end{align}\n\t\n\tPerforming a similar argument for $V_{\\max}$, we construct a function\n\t\\begin{align}\n\t\tV_{\\max}(\\gamma):=\\sup_{P\\in\\mathcal{P}(\\mathcal{I})}\\left\\lbrace \n\t\t\\int\\mathrm{d}P(\\rho)~V\\left(\\rho\\middle\\|\\sigma^*\\right) \n\t\t~\\middle|~\n\t\t\\int\\mathrm{d}P(\\rho)~D\\left(\\rho~\\middle\\|\\int\\mathrm{d}P(\\rho')~\\rho'\\right)\\geq C-\\gamma\n\t\t\\right\\rbrace,\n\t\\end{align}\n\tand define a $\\Gamma$ such that\n\t\\begin{align}\n\t\\sqrt{2V_{\\max}(\\Gamma)}\\leq \\sqrt{2V_{\\max}}+\\eta\/3.\n\t\\end{align} \n\tFollowing through the rest of the argument, and employing \\cref{prop:HTD-inward}, we also get a bound on the $(1-\\epsilon_n)$-hypothesis testing divergence\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{1-\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\,\\sigma_n(\\gamma)\\right)\n\t\t&\\leq C+\\sqrt{2V_{\\max}}a_n+\\eta a_n.\n\t\\end{align} \n\\end{proof}\n\n\\begin{prop}[Channel coding: Optimality]\n\t\\label{prop:codingconverse}\n\tFor any moderate sequence $\\lbrace a_n\\rbrace_n$ and error probability 
$\\epsilon_n:=e^{-na_n^2}$, the rate is upper bounded as\n\t\\begin{align}\n\t\tR^*\\left(n,\\epsilon_n\\right)\\leq C-\\sqrt{2V_{\\min}}\\,a_n+o(a_n).\n\t\\end{align}\n\tFor error probability $(1-\\epsilon_n)$ the rate is similarly upper bounded as\n\t\\begin{align}\n\t\tR^*\\left(n,1-\\epsilon_n\\right)\\leq C+\\sqrt{2V_{\\max}}\\,a_n+o(a_n).\n\t\\end{align}\n\\end{prop}\n\\begin{proof}\n\tApplying Lemmas \\ref{lem:low} and \\ref{lem:high}, we get that there exist constants $\\Gamma(\\eta)$ and $N_1(\\lbrace a_i\\rbrace,\\eta)$ such that\n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\, \\sigma_n(\\Gamma)\\right)\\leq \n\t\t\\begin{dcases}\n\t\t\tC-\\Gamma\/4 & n\\in L(\\lbrace\\rho_i\\rbrace,\\Gamma)\\\\\n\t\t\tC-\\sqrt{2V_{\\min}}\\,a_n+\\eta a_n & n\\in H(\\lbrace\\rho_i\\rbrace,\\Gamma)\n\t\t\\end{dcases}\n\t\\end{align}\n\tfor any $n\\geq N_1$. As $\\Gamma$ is a constant, there must exist some $N_2(\\lbrace a_i\\rbrace,\\eta)$ such that $n\\geq N_2$ implies $\\Gamma\/4\\geq \\sqrt{2V_{\\min}}\\,a_n$. As such, for any $n\\geq \\max\\lbrace N_1,N_2\\rbrace$, high or low, we have \n\t\\begin{align}\n\t\t\\frac{1}{n}D_{\\mathrm{h}}^{\\epsilon_n}\\left( \\bigotimes_{i=1}^n\\rho_i \\,\\middle\\|\\, \\sigma_n(\\Gamma)\\right)\\leq C-\\sqrt{2V_{\\min}}\\,a_n+\\eta a_n.\n\t\\end{align}\n\tPulling this bound back to Eq.~\\ref{ineq:converse}, we have\n\t\\begin{align}\n\t\tR^*(n,\\epsilon_n)\n\t\t&\\leq \\sup_{\\lbrace \\rho_i\\rbrace\\subseteq I}\\frac{1}{n}D^{2\\epsilon_n}_{\\mathrm{h}}\\left(\\bigotimes_{i=1}^n\\rho_i \n\t\t\\middle\\|\\sigma_n(\\Gamma) \\right) +\\frac{1}{n}\\log \n\t\t\\frac{2}{1-\\epsilon_n}\\\\\n\t\t&\\leq C-\\sqrt{2V_{\\min}}a_n+\\eta a_n+\\frac{1}{n}\\log \\frac{2}{1-\\epsilon_n}.\n\t\\end{align}\n\tFinally, noting that $1\/n=o(a_n)$, there must exist a constant $N_3 (\\lbrace a_i\\rbrace,\\eta)$ such that $n\\geq N_3$ implies\n\t\\begin{align}\n\t\\frac{1}{n}\\log\\frac{2}{1-\\epsilon_n}\\leq \\eta a_n.\n\t\\end{align}\n\tWe can therefore conclude that, for $n\\geq \\max\\lbrace N_1,N_2,N_3\\rbrace $, we get the overall upper bound\n\t\\begin{align}\n\tR^*(n,\\epsilon_n)\\leq C-\\sqrt{2V_{\\min}}\\,a_n+2\\eta a_n.\n\t\\end{align}\n\tAs this is true for arbitrary $\\eta>0$, we can take $\\eta \\searrow0$ and conclude \n\t\\begin{align}\n\tR^*(n,\\epsilon_n)\\leq C-\\sqrt{2V_{\\min}}a_n+o(a_n)\n\t\\end{align}\n\tas required. A similar analysis for the $(1-\\epsilon_n)$-error regime shows\n\t\\begin{align}\n\tR^*\\left(n,1-\\epsilon_n\\right)\\leq C+\\sqrt{2V_{\\max}}a_n+o(a_n).\n\t\\end{align}\n\\end{proof}\n\n\n\\section{Conclusion}\n\nThe main result of this paper is to give a second order approximation of the non-asymptotic fundamental limit for classical information transmission over a quantum channel in the moderate deviations regime, as in Eqs.~\\ref{eq:moderatesecondorder1} and \\ref{eq:moderatesecondorder2}:\n\\begin{align}\n\t\\frac{1}{n} \\log M^*(\\mathcal{W};n, \\varepsilon_n) &= C(\\mathcal{W}) - \\sqrt{2 V_{\\min}(\\mathcal{W})}\\, x_n + o(x_n) \\,,\\\\\n\t\\frac{1}{n} \\log M^*(\\mathcal{W};n,1- \\varepsilon_n) &= C(\\mathcal{W}) + \\sqrt{2 V_{\\max}(\\mathcal{W})}\\, x_n + o(x_n) \\,.\n\\end{align}\nAlong the lines of third and fourth order approximations for classical channel coding in the fixed error regime (see, e.g., Refs.~\\cite{polyanskiythesis10,tomamicheltan12,moulin12}), a natural question to ask is whether we can expand this further and resolve the term $o(x_n)$. 
A preliminary investigation suggests the conjecture that $o(x_n) = O(x_n^2) + O(\\log n)$ and that at least some of the implicit constants can be determined precisely. We leave this for future work.\n\nDue to the central importance of binary asymmetric quantum hypothesis testing \nwe expect our techniques to have applications also to other quantum channel \ncoding tasks. In particular, source coding~\\cite{datta15,leditzky16}, \nentanglement-assisted classical coding~\\cite{datta14} as well as \nquantum~\\cite{tomamichel16} and private coding~\\cite{wilde16} over quantum \nchannels have recently been analysed in the small deviations regime by relating \nthe problem to quantum hypothesis testing. An extension of these results to \nmoderate deviations using our techniques thus appears feasible.\n\n\\paragraph*{Acknowledgements.} MT is funded by an Australian Research Council Discovery Early Career Researcher Award (DECRA) fellowship (Grant Nos. CE110001013, DE160100821). Both MT and CTC acknowledge support from the ARC Centre of Excellence for Engineered Quantum Systems (EQuS). We also thank Hao-Chung Cheng and Min-Hsiu Hsieh for useful discussions and insightful comments.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWhite dwarfs (WDs) are degenerate stellar nuclei with a mass roughly that of the\nSun and radii one hundredth that of the Sun; consequently,\ntheir surface gravity is $\\sim$$10^4$ greater than the\nSun's. \\cite{lie84} identified some odd WDs with metal-rich\natmospheres. With such a powerful gravity pulling the chemical\nelements toward the stellar nucleus, it is somewhat unusual to have a metal-rich\natmosphere. In fact, the timescales for an element heavier than\nhydrogen or helium to sink are small: $\\sim$$10^2$ yr in WDs with\nhydrogen atmospheres (DAs) and $\\sim$$10^5$ yr with helium atmospheres\n(DBs) \\citep{jur08,von07,paq86}.\n\nInterstellar material accretion onto the WD surface was one of the first\nexplanations for the metal-rich atmospheres. Knowing the diffusion\ntimescales of metals in the stellar atmospheres and the metallicity of a given\nstar, it is possible to calculate the necessary accretion rate to keep this\nmetallicity constant in time \\citep{koe06}. Typical values are\n$10^{-18}$ to $10^{-15}$ M$_\\odot$yr$^{-1}$. These values are too high to be\nexplained exclusively by interstellar accretion. Furthermore, if there were\naccretion from the interstellar medium onto the DBs, there should be a large\namount of hydrogen pollution in their spectra, but this pollution has not been\ndetected \\citep{dup1993,far2008}.\n\n\\cite{zuc87} observed an IR~excess in the spectrum of G29-38. The shape of this\nIR~excess is a bump which peaks at $\\sim$$10~\\mu m$. Its width is roughly\n$20~\\mu m$ and can be fitted with a blackbody of $T_{\\mathrm{eff}} \\sim 10^3$ K.\n\\cite{gra90} argued that an asteroid closely approaching G29-38 could explain\nthis infrared excess. When the asteroid orbit reaches the Roche radius it is\ndisrupted and forms a disk around the star. This disk is heated up by the\nstellar radiation and emits in the infrared. The disk material falls down\ncontinuously on the WD giving rise to the observed metal-rich atmospheres.\n\nUsing a disk model, Jura \\& collaborators (2003,2007,2008) were able to fit\nthe IR excess of many WDs. Through these fits they determined disk physical\nparameters. 
With a different approach, \\cite{rea05} fit the spectrum to a thin\ndust shell model.\n\nWhile searching for interacting binary WDs, \n\\cite{gan2006,gan2007,gan2008} observed double peaks in calcium lines\nin a few DAZs and DBZs. Although these observations strongly suggest\nthe disk hypothesis, it is still possible that in some cases the emitting\nregion could be a torus or a shell \\citep{rea09} rather than a disk.\n\nPrevious works focused on the IR emission properties of\ndebris disks around the WD. In this work, we propose a new and complementary\nobservational test looking at the absorption and scattering properties instead.\nWe develop a simple theoretical framework to predict what will be observable\nand measurable according to the properties of the system. We also suggest an\nobservational program to reach our goal.\n\nWe investigate the possibility of detection of debris disk effects\nin the near-UV and optical. We start the analysis in the limit of an optically\nthick disk in Section~\\ref{sec:od} and in Section~\\ref{sec:ot} we extend this to\nthe optically thin limit case. In Section~\\ref{ObsTest}, we discuss the\nobservational predictions of our models. Our conclusions are presented in\nSection~\\ref{sec:conc}.\n\n\n\n\\section{Opaque disk}\n\\label{sec:od}\n\n The optically thick limit is the natural first approach to investigate the\npossible effects of a debris disk in the spectrum of a WD. This limit can be\nachieved not only in massive disks but also in certain regions of all disks,\nespecially if the disk has some gas, as in some recently discovered gaseous disks\n\\citep{gan2006,gan2007,gan2008}. Also, the mathematical treatment developed in\nthis section will be used in the next section when dealing with the\noptically thin limit.\n\nA completely opaque disk will not have any\nspectral features because it is totally opaque and absorbs any photon whatever\nits energy. The only effect of an obscuring disk will be a decrease in the\nreceived flux from the star. The most the disk can obscure the star is half of\nthe projected stellar surface, $\\pi R_{wd}^2\/2$. The maximum increase in the apparent\nmagnitude of the star is then $0.75$ mag.\n\nThis value is much higher than current photometric accuracy, and if present,\nwould have been detected for those WDs which have an observed parallax, a good\nspectrum and an IR~excess. The ``good spectrum'' permits a determination of\n$T_{\\mathrm{eff}}$ and $\\log g$ and hence a luminosity. Clearly the luminosity\ninferred from the parallax should agree with the luminosity inferred from the\nspectral fit. 
At the very least, this procedure will allow us to affirm that\nthere is not a big opaque disk in the known WDs with IR~excess or, at least,\nthis disk is not in a favorable inclination.\n\n For the general case of a disk with arbitrary inclination and any combination\nof inner and outer radii, the flux received at the Earth from the system\n(target) is,\n\n\\begin{equation}\n \\begin{aligned}\n F^{target} &=\\lefteqn { \\int_{\\Omega_{wd}} I \\cos \\theta d\\Omega }\\\\\n &= I \\left(\\frac{R_*}{D}\\right)^2\n \\int_{0}^{\\pi\/2} \\int_{\\phi_{min}}^{\\phi_{max}}\n \\sin \\theta \\cos \\theta d \\phi d \\theta \n \\end{aligned}\n\\end{equation}\n\n\\noindent\nwhere we assumed that the intensity (I) is uniform over the stellar surface.\nThe stellar radius is $R_*$ and $D$ is the distance from the Earth to the system.\nThere are three possible projections for the disk, as seen in Figure~\\ref{3disks}.\nThis gives different values of $\\phi_{min}$ and $\\phi_{max}$:\n\n\n\\begin{figure}\n\\epsscale{0.6}\n\\plotone{3disks2.ps}\n\\caption{Debris disk and WD. For a given size and inclination of the disk,\n different amounts of it obscure the star. The $x-$ and $y-$axes are in\n the sky-plane and the observer is over the $z-$axis which makes an\n angle $i$ with the normal of the disk. The\n angles $\\phi_i$ and $\\phi_e$ show where the disk starts and stops\n obscuring the star.\n}\n\\label{3disks}\n\\end{figure}\n\n\n\n \\begin{itemize}\n \\item \n $\\int_{\\phi_{min}}^{\\phi_{max}} d \\phi =\n 2 \\left[ \\int_{-\\pi\/2}^{\\phi_i} d \\phi + \\int_{\\phi_e}^{\\pi\/2} d \\phi\n \\right ]\n \\\\= 2\\pi - 2( \\phi_e-\\phi_i )\n $,\n\n \\item\n $\\int_{\\phi_{min}}^{\\phi_{max}} d \\phi =\n 2 \\int_{-\\pi\/2}^{\\phi_i} d \\phi =\n \\pi + 2\\phi_i\n $,\n\n \\item\n $\\int_{\\phi_{min}}^{\\phi_{max}} d \\phi =\n 2 \\int_{-\\pi\/2}^{\\pi\/2} d \\phi =\n 2 \\pi\n $.\n\n \\end{itemize}\n\n Using the dimensionless radii $r_{\\{i\/e\\}} \\equiv R_{\\{i\/e\\}}\/R_{star}$ we\ndefine the function $g \\equiv g(\\theta,r_i,r_e)$:\n\n \\begin{equation}\n g = \\left\\{\n \\begin{array}{l l}\n \\pi - (\\phi_e-\\phi_i) & \\mbox{, $r_e \\cos i < \\sin \\theta $}\\\\\n \\pi\/2 + \\phi_i & \\mbox{, $r_i \\cos i < \\sin \\theta \\leq r_e\n \\cos i $}\\\\\n \\pi & \\mbox{, $r_i \\cos i \\geq \\sin \\theta$}\\\\\n \\end{array}\n \\right.\n \\label{eqg}\n \\end{equation}\n\n\n\\noindent\nand write the flux:\n\n \\begin{equation}\n F^{target} = I \\left(\\frac{R_*}{D}\\right)^2\n \\int_{0}^{\\pi\/2} \\sin(2\\theta) g(\\theta,r_i,r_e) d \\theta\n \\label{eqF}\n \\end{equation}\n\n\\noindent\nwhere\n\n \\begin{equation}\n \\phi_{i,e} = \\arctan \\left[ \\cos i \\arccos\\left(\n \\frac{\\sqrt{ \\sin^2\\theta\/r_{i,e}^2 - \\cos^2 i}}{\\sin i}\n \\right) \\right ]\n \\label{eqphi}\n \\end{equation}\n\n\n The total flux received from an unobscured system is $\\pi I\n (R_*\/D)^2$. We call the hypothetical unobscured star ``template''\n and the obscured star ``target''. 
Defining $p$ as the ratio of the\n obscured to the total projected area:\n\n\n\n\\begin{equation}\n p \\equiv \\frac{A_{obscured}}{A_{total}} = \\frac{A_{target}}{A_{template}}\n\\label{def:p}\n\\end{equation}\n\n\\noindent\nwe write the increase in magnitude as:\n\n\\begin{equation}\n \\Delta m = -2.5 \\log(1-p)\n\\label{eq:mp}\n\\end{equation}\n\n\nIn the completely opaque hypothesis we may obviously write that,\n\n\\begin{equation}\n \\begin{aligned}\n p &=\\lefteqn { \\frac{F^{target}}{F^{template}}\n = \\frac{F^{target}}{\\pi I} \\left(\\frac{D}{R_*}\\right)^2 }\\\\\n &= \\frac{1}{\\pi} \\int_{0}^{\\pi\/2} \\sin(2\\theta) g(\\theta,r_i,r_e) d \\theta\n \\end{aligned}\n\\label{eq:p}\n\\end{equation}\n\n The solution of Equation \\ref{eq:p} with Equations \\ref{eqg} and \\ref{eqphi}\ngives the flux received from the system, as can be seen in\nFigure~\\ref{fig:hidden}. The probability of finding a system more inclined than\na given angle is the ratio of the solid angle occupied by these systems\nto the total solid angle: $P(i>i_0) = \\cos i_0$.\n\n\\begin{figure}\n\\epsscale{1}\n\\plotone{hiddenFlux.ps}\n\\caption{Increase in magnitude vs. inclination for opaque disks for\n different inner ($r_i$) and outer ($r_e$) disk radius combinations.\n Systems seen face on have $i=0^\\circ$. The fast decrease of $\\Delta m$ for\n $i \\rightarrow 90^\\circ$ is an artifact of the mathematical model that\n assumes an infinitely thin disk. For real disks, there would be a plateau\n lower than the curve peak, but the disk should be really thin indeed, so the\n plateau is very close to $\\Delta m = 0$ and it actually is not a plateau,\n but rather a single point.\n The top axis label shows the percentage of systems more inclined than $i$.\n The right axis label shows $p$, the ratio of the obscured to the total\n projected area (Equation~\\ref{def:p}).}\n\\label{fig:hidden}\n\\end{figure}\n\n Figure~\\ref{fig:hidden} shows that it is possible to detect a completely\nopaque debris disk. For an inclination angle causing any blocking, the bigger\nthe disk is, the easier it is to detect the effect. The inner and outer radii\nare based on physical constraints. If the inner radius is small, the dust\nparticles sublimate because the\ntemperature exceeds $\\sim 1200$K~\\citep{jur07a}. On the other hand, if the\nouter edge of the disk is big ($\\gtrsim 100 R_{wd}$), the dust grains will be\ncold and any emission will be undetectable in practice. The exact\nvalues depend on the dust type and grain size.\n\n The presence of an obscuring disk might be inferred by comparing the expected\nincrease in stellar magnitude with the luminosity derived from\n$T_{\\mathrm{eff}}$, $\\log g$ and the parallax. If both the observed and the expected\nmagnitudes are reliable and yet they disagree, the flux deficiency can probably be\nexplained by obscuration from a debris disk.\n\n We use the measured parallax of GD~362 to illustrate the previous analysis\nwith one real case. \\cite{kil08} obtained d~$= 50.6_{-3.1}^{+3.5}$~pc for\nGD~362. Using simple error propagation, we have a rough estimate for the highest\nacceptable difference between expected and measured magnitude: $\\sigma_m\n\\approx 0.15$ mag. 
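Indeed, since the apparent magnitude scales with distance as $m = 5\\log_{10}(d) + \\mathrm{const}$, the parallax uncertainty alone already contributes\n\\begin{equation}\n \\sigma_m \\simeq \\frac{5}{\\ln 10}\\,\\frac{\\sigma_d}{d} \\approx 2.17\\times\\frac{3.3}{50.6} \\approx 0.14~\\mathrm{mag},\n\\end{equation}\nwhere $\\sigma_d \\approx 3.3$~pc is taken as the mean of the quoted error bars (this is only a rough figure, as it neglects the additional uncertainties in $T_{\\mathrm{eff}}$ and $\\log g$). 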
\\cite{kil08} did not find any discrepancies between parallax\nand flux, implying no obscuration of the star.\n\nFrom a geometrical perspective this is expected since from\nFigure~\\ref{fig:hidden} $\\Delta m\\approx0.15$ mag implies an inclination higher\nthan $\\sim$$80^\\circ$ and less than $\\sim$$20\\%$ of the systems will be more\ninclined than this. Indeed, \\cite{jur07b} showed that GD~362 must be seen\nnearly face on to be able to reproduce its IR~excess flux with physically\nreasonable inner and outer radii. Assuming an almost edge on system would\nrequire a big disk and an unusual mechanism to heat it to reproduce the\nmeasured IR~excess flux. Therefore, our work is in agreement with the previous\nresults and this analysis illustrates what kind of study must be done with\nother systems which may be found to be nearly edge on.\n\n\\section{Optically thin dust disk}\n\\label{sec:ot}\n\n After having derived the basic concepts of the problem with the optically\n thick limit we generalize the equations to the optically thin limit.\n The ratio ($\\xi_\\nu$) of the flux from obscured (target) to the equivalent\nstar with no obscuration (template) is composed of three main components:\n\n\\begin{equation}\n\\begin{aligned}\n \\xi_\\nu &\\equiv \\lefteqn {\\frac{F^{target}_\\nu}{F^{template}_\\nu}}\\\\ &=\n \\left. \\xi_\\nu \\right|_{unobscured} +\n \\left. \\xi_\\nu \\right|_{obscured} +\n \\left. \\xi_\\nu \\right|_{scattered}\n\\end{aligned}\n\\label{eq:3comp}\n\\end{equation}\n\n The unobscured ratio component is simply $1-p$. The obscured ratio\n is given by $p e^{-\\tau_\\nu^{ext} \/\\cos{i}}$. The extinction optical depth\n $\\tau_\\nu^{ext}$ of the disk regions obscuring the star accounts for the\n absorbed and scattered light along the line of sight to the star.\n The scattered component comes from the disk regions which do not obscure the\n star but scatter photons to the line of sight. There is no emission component\n because the dust temperature is lower than the dust sublimation temperature\n ($\\sim$1200~K) and thus the dust emission only contributes in the infrared.\n\n Using the dimensionless extinction efficiency ($Q^{ext}_\\nu$) instead of\nextinction cross section ($[C^{ext}_\\nu] = cm^2$) we write the differential\nextinction optical depth in the disk as:\n\n\\begin{equation}\n d\\tau_\\nu = n C^{ext}_\\nu dz = n Q^{ext}_\\nu \\pi a^2 dz,\n\\end{equation}\n\n\\noindent\nwhere $n$ (cm$^{-3}$) is the number of dust grains per unit volume, $a$ (cm)\nis the grain radius and $z$ (cm) is the vertical dimension of the disk\n\n The disk volume density ($\\rho$) is related to the density of a typical dust\ngrain ($\\rho_d$) through,\n\n\\begin{equation}\n \\rho = \\frac{4}{3} \\pi a^3 \\rho_d n .\n\\label{rho}\n\\end{equation}\n\n Assuming the disk to be vertically uniform we integrate to write\n\n\\begin{equation}\n \\tau_\\nu^{ext} = \\int^{H\/2}_{-H\/2} \\frac{3 Q^{ext}_\\nu \\rho}{4 a \\rho_d} dz\n = \\frac{3 Q^{ext}_\\nu \\rho}{4 a \\rho_d} H\n = \\frac{3 Q^{ext}_\\nu \\Sigma}{4 a \\rho_d},\n\\label{eq:tau}\n\\end{equation}\n\n\\noindent\nwhere $\\Sigma$ (g\/cm$^2$) is the disk surface density and $H$ is the disk\nheight.\n\n We define:\n\n\\begin{equation}\n \\tau_0 = \\frac{3 \\Sigma}{4 a \\rho_d}\n\\label{eqt0}\n\\end{equation}\n\n\\noindent\nand write Equation~\\ref{eq:3comp} as\n\n\\begin{equation}\n \\xi_\\nu = (1-p) +\n p e^{ -\\tau_0 Q^{ext}_\\nu \/ \\cos{i} } +\n \\left. 
\\xi_\\nu \\right|_{scattered}.\n\\label{eqFd}\n\\end{equation}\n\n To simplify the scattering term, we assume isotropic and\ncoherent scattering and also that the light is not attenuated before and after\nbeing scattered by the disk. The last hypothesis is valid in the optically thin\ncase and causes an overestimation of the scattering because we ignore the\nabsorbed photons. The scattered intensity is given by\n\n\\begin{equation}\n I^{sca}_\\nu = \\epsilon_\\nu \\frac{H}{\\cos{i}}\n = \\pi a^2 Q^{sca}_\\nu n J^{wd}_\\nu \\frac{H}{\\cos{i}},\n\\end{equation}\n\n\\noindent\nwhere $\\epsilon_\\nu$ is the emissivity, $Q^{sca}_\\nu$ is the scattering\nefficiency and $J^{wd}_\\nu$ is the mean stellar intensity\n\n\\begin{equation}\n J^{wd}_\\nu = \\frac{I^{wd}_\\nu \\pi R_{wd}^2}{4 \\pi r^2}\n\\end{equation}\n\n Ignoring the disk regions hidden by the star we integrate over the disk\nsurface to get the flux:\n\n\n\\begin{equation}\n F_\\nu = \\frac{1}{2} \\frac{3 \\Sigma}{4 a \\rho_d} Q^{sca}_\\nu\n \\ln \\left( \\frac{re}{ri} \\right ) \\cos{i}\n \\, \\pi I^{wd}_\\nu \\left( \\frac{R_{wd}}{D} \\right)^2\n\\end{equation}\n\n\\noindent\nusing Equation~\\ref{eqt0} and dividing by the template flux,\n\n\n\\begin{equation}\n \\left. \\xi_\\nu \\right|_{scattered} = \\frac{1}{2} \\tau_0 Q^{sca}_\\nu\n \\ln \\left( \\frac{re}{ri} \\right ) \\cos{i},\n\\end{equation}\n\n\\noindent\nwhich allows us to write Equation~\\ref{eqFd} as\n\n\\begin{equation}\n \\xi_\\nu = (1-p) +\n p e^{ -\\tau_0 Q^{ext}_\\nu \/ \\cos{i} } +\n \\frac{1}{2} \\tau_0 Q^{sca}_\\nu\n \\ln \\left( \\frac{re}{ri} \\right ) \\cos{i}\n\\label{eqFdcp}\n\\end{equation}\n\n\nBesides the parameters $p$ and $\\tau_0$, we have the absorption efficiencies\nwhich are characteristic of the dust type. We used the tables of optical\nconstants of silicate glasses from \\cite{dor95}. The\nauthors prepared two different glasses in laboratory: pyroxene,\nMg$_x$Fe$_{1-x}$SiO$_3$, with $x$=0.4, 0.5, 0.6, 0.7, 0.8, 0.95, 1.0 and olivine,\nMg$_{2x}$Fe$_{2-2x}$SiO$_4$, with $x$=0.4 and 0.5.\n\nFigures \\ref{pyrmg70} and \\ref{pyrmg70sca} display the results from\nEquation~\\ref{eqFdcp} for different optical depths and system\ngeometries. Figure \\ref{pyrmg70} represents inclinations where the disk\nobscures the star, and light is absorbed and Figure\n\\ref{pyrmg70sca} when there is no obscuration and we see only\nscattering plus the WD light. For observational tests, the region\nfrom $\\textrm{3000\\AA\\ to 5000\\AA}$ is the most interesting, because it\nshows a sharp change in the ratio between the target and the template\nwhich cannot be easily discarded as bad flux calibrations.\n\n\n\\begin{figure}\n\\epsscale{1}\n\\plotone{pyrmg70_sca.ps}\n\\caption{Expected ratio between the target and the template star\n (Equation~\\ref{eqFdcp}) when the inclination is such that the disk\n obscures\n a part of the WD as in 1 and 2 in Figure \\ref{3disks}. The disk is\n composed of olivine Mg$_{0.8}$Fe$_{1.2}$SiO$_4$ dust grains. To\n calculate the scattering we used\n $\\ln(r_e\/r_i) \\lesssim \\ln(100\/5) = 3$\n as a superior limit in Equation~\\ref{eqFdcp} and adopted $i=85^\\circ$.}\n\\label{pyrmg70}\n\\end{figure}\n\n\\begin{figure}\n\\epsscale{1}\n\\plotone{pyrmg70_pureSca.ps}\n\\caption{Expected ratio between the target and the template star\n (Equation~\\ref{eqFdcp}) when the disk does not obscure the WD\n and we see the scattering plus the stellar light. The disk is\n composed of olivine Mg$_{0.8}$Fe$_{1.2}$SiO$_4$ dust grains. 
All the\n curves were calculated with $\\tau_0 = 0.1$ because the pure scattering\n term in Equation~\\ref{eqFdcp} is linear in $\\tau_0$.}\n\\label{pyrmg70sca}\n\\end{figure}\n\n\n\n\\section{Observational test}\n\\label{ObsTest}\n\n Our modeling gives rise to direct observational tests. We can test the\n effects of the disk in the near-UV and the optical, dividing the\n spectrum of the target by the template. In the optically thin case, the\n result will be color dependent and can provide physical parameters for\n the disk structure.\n\n The template star should be as similar to the target star as\n possible. Ideally, it would be the same WD without the obscuring\n disk. As that is not possible to have, we need a similar WD without\n any peculiarity in the spectrum. Any other WD will differ from the\n target star in $T_{\\mathrm{eff}}$ and $\\log g$ and this difference\n can make the division of the spectra resemble the expected disk\n effects.\n\n In Figure~\\ref{wdRatio}, we present these effects using theoretical WD\n spectra from \\cite{koe08}. We assume a target star of\n $T_{\\mathrm{eff}} = 12,000$~K and $\\log g = 8.0$, similar to G29-38\n \\citep{rea09}. In the upper panel, we keep $T_{\\mathrm{eff}}$ fixed\n and vary $\\log g$ by $0.05$~dex and in the lower we keep $\\log g$\n fixed and vary $T_{\\mathrm{eff}}$ by $150$~K. According to \\cite{lie05},\n the uncertainties in temperature are of the order of $1.2 \\% \\approx 150$~K\n and 0.038 in $\\log g$. So, larger temperature or $\\log g$ differences would be\n readily noticed. One can see from Figure~\\ref{wdRatio} that modification of UV\n flux densities by disks that absorb or scatter light, as in\n Figures \\ref{pyrmg70} and \\ref{pyrmg70sca}, can be distinguished from\n observational uncertainties in template stars.\n\n The White Dwarf Catalog \\citep{mcc09} currently lists 12,456 stars.\n Therefore it is not too hard to find a template star with a temperature\n similar to the target. As an example, G29-38 and Ross~548 have\n exactly the same temperature and $\\log g$. One needs to be careful\n about this comparison as these values of $T_{\\mathrm{eff}}$ and\n $\\log g$ were obtained from different determinations. When\n comparing the target with template it will be necessary to use\n $T_{\\mathrm{eff}}$ and $\\log g$ obtained from similar data and the\n same models.\n\n\\begin{figure}\n\\epsscale{1}\n\\plotone{wdRatio.ps}\n\\caption{Comparison of the effects from debris disk obscuration and the effects\n of small differences in $T_{\\mathrm{eff}}$ and $\\log g$ between the target and\n the template stars. The gray lines show the expected effects shown in Figures\n \\ref{pyrmg70} and \\ref{pyrmg70sca}. In the upper panel the black solid\n lines show the ratio of WD spectra with fixed $T_{\\mathrm{eff}}$ and\n varying $\\log g$. In the lower panel, we fixed $\\log g$ and\n varied $T_{\\mathrm{eff}}$.}\n\\label{wdRatio}\n\\end{figure}\n\n\\subsection{Parameter determination}\n\n It is difficult to compare directly the definition of $p$ and $\\tau_0$ with\nthe expected values for real disks. 
We use disk parameters obtained in earlier\nworks to give observational expectations and also to help design future\nobservations.\n \n For disks, the fraction of the obscured to the total projected area, $p$\n(Equation~\\ref{def:p}), varies between $0$ when there is no obscuration and 0.5 for\na disk which obscures half of the stellar surface.\nHowever, if the infrared emission region is not a disk but a shell around the\nstar \\citep{rea05}, $p$ will always be $1$. Hence, this work provides an\nindependent method to test the disk hypothesis.\n\n In addition to $p$, we can also determine $\\tau_0$. Estimates for the expected\nvalues are more uncertain, but we can get a rough idea by using some mean values\nfor the dust and disk properties. \\cite{kru03} gives $2.5$~g\/cm$^3$ as a typical\nvalue for the interstellar dust and we assume it as a good order of magnitude\nvalue for the dust in the disk. \\cite{jur07b} constrain the disk mass of GD~362\nbetween $10^{18}$ and $10^{24}$~g. Using typical disk sizes of $10~R_{wd}$ and\n$100~R_{wd}$ for the inner and outer disk radii, respectively, we get a range\nof $\\tau_0$ from $10^{-4}$ to somewhat greater than $1$, depending on the disk\nmass and the type of dust \\citep{dor95}. Therefore the parameters used in Figure\n\\ref{pyrmg70} are realistic.\n\n The disk inclination angle can be inferred from the presence of a flux excess\ndue to scattering into the line of sight. Figure \\ref{pyrmg70sca} shows this for\ninclinations of $0^\\circ$ and $60^\\circ$. Flux excess in the near-UV has already\nbeen detected by \\cite{gan2006} in SDSS~1228+1040 and could be caused by light\nscattering. For larger inclinations there is a flux deficiency due to\nabsorption and scattering out of the line of sight, as shown in\nFigure \\ref{pyrmg70} computed for $i=85^\\circ$. The dividing line between\nthe first and the second case is $\\sim$$80^\\circ$.\n\n\\section{Conclusions}\n\\label{sec:conc}\n\nIn this work, we introduce a new way of looking at the cause of IR excess in\nwhite dwarf stars.\nBy looking in the near-UV and optical instead of the IR we add a new constraint to\ntest the disk hypothesis.\n\nOne important distinction of our method is the fact that the presence of disks\nwould cause flux deficiencies in some systems and flux excess in others. We also\npoint out that shells would only introduce flux deficiency effects, and these\neffects would be detectable in all shells. If we find flux deficiencies in every\nstar we observe, this would strongly indicate the presence of shells rather than\ndisks. Flux deficiencies in only some objects and flux excess in others\ncorroborate the idea of a disk.\n\nIf we are convinced that disk models are more adequate, detailed\ncomparisons between disk models and data will provide disk \nmass \\citep{jur07b}, composition, optical depth and inclination\nrelative to the line of sight.\n\n\n\\begin{acknowledgements}\n The authors acknowledge financial support from CNPq-MCT\/Brazil. We thank\nPaola D'Alessio, Detlev Koester, and Don Winget for helpful discussions,\nWilliam Reach for providing data, and Shashi Kanbur for reading the manuscript.\nWe are also grateful to Nikolai Voshchinnikov for his Mie-Theory program.\nFinally, we thank the anonymous referee for helpful comments on the paper.\n\\end{acknowledgements}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe study of random walks on groups has a long history and multiple connections to almost all areas of mathematics. 
It is therefore natural that from the early days of the theory of topological quantum groups, random walks on them were considered. The point of view was often that of discrete groups (the problem is interesting even for duals of classical compact Lie groups). In particular, the study of probabilistic boundaries has been the subject of several works and is still an active area of research. However, there is an aspect which has attracted no attention up to very recently, even though it is an important part of the subject for classical groups : the search for explicit estimates of convergence of random walks.\n\nIn the case of classical finite groups, the first important results for us are due to P. Diaconis and his coauthors in the eighties and reveal a surprising behaviour called the \\emph{cut-off phenomenon} : for a number of steps, the total variation distance (see Subsection \\ref{subsec:randomwalks} for the definition) between the random walk and the uniform distribution stays close to one and then it suddenly drops and converges exponentially to $0$. This triggered numerous works yielding more and more examples of cut-off in various settings, but also counter-examples so that the question of why and when this happens stays largely unanswered. In the quantum setting, the only results up to now are contained in the recent thesis of J.P. McCarthy \\cite{maccarthy2017random} which studies convergence of random walks on finite quantum groups. There, the author gives explicit bounds for families of random walks on the Kac-Paljutkin and Sekine quantum groups, as well as on duals of symmetric groups. Unfortunately, the estimates are not tight enough to yield a complete cut-off statement for these examples.\n\nIn the present work, we turn to the case of infinite compact quantum groups. In particular, we will show that a specific random walk on the free orthogonal quantum groups $O_{N}^{+}$, coming from random rotations on $SO(N)$, has a cut-off with the same threshold as in the classical case, namely $N\\ln(N)\/2(1-\\cos(\\theta))$. This is the first complete cut-off result for a compact quantum group and the statement is all the more surprising that the computations involve mainly representation theory, which is very different for $SO(N)$ and $O_{N}^{+}$. Moreover, the representation theory of $O_{N}^{+}$ being in a sense simpler than that of $SO(N)$ we are able to give very precise statements for the bounds (not only up to some order) and the conditions under which they hold.\n\nUsing techniques from \\cite{hough2017cut}, we can extend our result to random mixtures of rotations provided that the support of the measure governing the random choice of angle is bounded away from $0$. We also consider other examples involving the free symmetric quantum groups $S_{N}^{+}$. In that case, the previous techniques often prove useless. One way round the problem is to compare the corresponding transition operators, which are always well-defined. There is then several options for the choice of a norm and we give results for one of the simplest choices, namely the norm as operators on the $L^{2}$ space.\n\nLet us conclude this introduction with an overview of the organization of this work. In Section \\ref{sec:preliminaries} we give some preliminaries concerning compact quantum groups and random walks on them. We have tried to remain as elementary as possible so that the paper could be readable for people outside the field of quantum groups. 
In Section \\ref{sec:orthogonal} we study central random walks associated to pure states on free orthogonal quantum groups and prove a kind of cut-off phenomenon in Theorem \\ref{thm:estimategreaterthan2} : for a number of steps, the walk is not comparable in total variation distance with the Haar measure and as soon as it is, it converges exponentially. Using this, we show in Theorem \\ref{thm:randomrotation} that the uniform plane Kac walk on $O_{N}^{+}$ has a cut-off with the same threshold as in the classical case. Eventually, we give in Section \\ref{sec:further} other examples connected to free symmetric quantum groups and illustrate the analytical issue mentioned above.\n\n\\section{Preliminaries}\\label{sec:preliminaries}\n\nIn this section we recall the basic notions concerning compact quantum groups and random walks on them. Since the abstract setting is not really needed to perform concrete computations, we will mainly set notations and give some fundamental results.\n\n\\subsection{Compact quantum groups}\n\nCompact quantum groups are objects of noncommutative topological nature and therefore belong to the world of operator algebras. However, in the present work most things can be treated at an algebraic level which is slightly simpler to describe. We will therefore first give the main definitions in the setting of Hopf algebras and then briefly introduce in the end of this subsection the related analytical objects. We refer the reader to Parts I and II of \\cite{timmermann2008invitation} for a detailed treatment of the algebraic theory of compact quantum groups and its link to the operator algebraic theory.\n\nThe basic example to keep in mind is of course that of a classical compact group $G$. In that case, the corresponding algebraic object is the complex algebra $\\O(G)$ of \\emph{regular functions}, i.e. coefficients of unitary representations. This is a Hopf algebra with an involution given by $f^{*}(g) = \\overline{f(g)}$. Moreover, the Haar measure on $G$ yields by integration a linear form $h$ on $\\O(G)$ which is positive ($h(a^{*}a) \\geqslant 0$) and invariant under translation. Abstracting these properties leads to the following notion (with $\\otimes$ denoting the algebraic tensor product over $\\mathbb{C}$) :\n\n\\begin{de}\nA compact quantum group $\\mathbb{G}$ is given by a Hopf algebra $\\O(\\mathbb{G})$ with an involution and a unital positive linear map $h : \\O(\\mathbb{G})\\to \\mathbb{C}$ which is invariant in the sense that for all $a\\in \\O(\\mathbb{G})$,\n\\begin{equation*}\n(h\\otimes\\id)\\circ\\Delta(a) = h(a).1 = (\\id\\otimes h)\\circ\\Delta(a),\n\\end{equation*}\nwhere $\\Delta : \\O(\\mathbb{G})\\to \\O(\\mathbb{G})\\otimes \\O(\\mathbb{G})$ is the coproduct.\n\\end{de}\n\nIn the present work we will always assume that $\\mathbb{G}$ is \\emph{of Kac type}, meaning that for all $a, b\\in \\O(\\mathbb{G})$ $h(ab) = h(ba)$ (the Haar state is then said to be \\emph{tracial}). Since the fundamental work of P. Diaconis and M. Shahshahani \\cite{diaconis1981generating}, it is known that convergence of random walks can be controlled using representation theory and we will see that the same is true in the quantum setting. 
As for classical compact groups, the results of \\cite{woronowicz1987compact} imply that any representation of a compact quantum group is equivalent to a direct sum of finite-dimensional unitary ones, so that we will only define the latter.\n\n\\begin{de}\nA \\emph{unitary representation of dimension $n$} of $\\mathbb{G}$ is a unitary element $u\\in M_{n}(\\O(\\mathbb{G}))$ such that for all $1\\leqslant i, j\\leqslant N$,\n\\begin{equation*}\n\\Delta(u_{ij}) = \\sum_{k=1}^{n}u_{ik}\\otimes u_{kj}.\n\\end{equation*}\nA \\emph{morphism} between representations $u$ and $v$ of dimension $n$ and $m$ respectively is a linear map $T : \\mathbb{C}^{n}\\rightarrow \\mathbb{C}^{m}$ such that $(T\\otimes \\id)u = v(T\\otimes \\id)$. Two representations are said to be \\emph{equivalent} if there is a bijective morphism between them. A representation $u$ is said to be \\emph{irreducible} if the only morphisms between $u$ and itself are the scalar multiples of the identity.\n\\end{de}\n\nWe will denote by $\\Irr(\\mathbb{G})$ the set of equivalence classes of irreducible representations of $\\mathbb{G}$ and for each $\\alpha\\in \\Irr(\\mathbb{G})$ we fix a representative $u^{\\alpha}$ and denote by $d_{\\alpha}$ its dimension (which does not depend on the chosen representative). It then follows that $\\O(\\mathbb{G})$ is spanned by the coefficients $u^{\\alpha}_{ij}$ of all the $u^{\\alpha}$'s. Moreover, the Haar state induces an inner product on $\\O(\\mathbb{G})$ for which the basis of coefficients is orthogonal. More precisely, it was proven in \\cite{woronowicz1987compact} that for any $\\alpha, \\beta\\in \\Irr(\\mathbb{G})$ and $1\\leqslant i, j\\leqslant d_{\\alpha}$, $1\\leqslant k, l\\leqslant d_{\\beta}$,\n\\begin{equation*}\nh(u^{\\alpha}_{ij}u^{\\beta\\ast}_{kl}) = \\delta_{\\alpha, \\beta}\\frac{\\delta_{i, k}\\delta_{j, l}}{d_{\\alpha}}.\n\\end{equation*}\nThe key object for computations with random walks is characters of irreducible representations. Let us therefore define these :\n\n\\begin{de}\nThe \\emph{character} of a representation $u^{\\alpha}$ of a compact quantum group $\\mathbb{G}$ is defined as\n\\begin{equation*}\n\\chi_{\\alpha} = \\sum_{i=1}^{d_{\\alpha}}u_{ii}^{\\alpha}\\in \\O(\\mathbb{G}).\n\\end{equation*}\nMoreover, it only depends on $\\alpha$ and not on the chosen representative.\n\\end{de}\n\nWe conclude this subsection with some analysis. As already mentioned, the bilinear map $(a, b)\\mapsto h(b^{*}a)$ defines an inner product on $\\O(\\mathbb{G})$ and the corresponding completion is a Hilbert space denoted by $L^{2}(\\mathbb{G})$. For any element of $\\O(\\mathbb{G})$, left multiplication extends to a bounded operator on $L^{2}(\\mathbb{G})$, yielding an injective $*$-homomorphism $\\O(\\mathbb{G})\\to B(L^{2}(\\mathbb{G}))$. The closure of the image of this map with respect to the weak operator topology is a von Neumann algebra denoted by $L^{\\infty}(\\mathbb{G})$. We will also need the analogue of $L^{1}$ functions. For $a\\in L^{\\infty}(\\mathbb{G})$, set $\\|a\\|_{1} = h(\\vert a\\vert)$ where $\\vert a\\vert = \\sqrt{a^{*}a}$ is defined through functional calculus. Then, $L^{1}(\\mathbb{G})$ is defined to be the completion of $L^{\\infty}(\\mathbb{G})$ with respect to this norm.\n\n\\subsection{Random walks and central states}\\label{subsec:randomwalks}\n\nWe will now introduce some material concerning random walks on compact quantum groups and the total variation distance. For finite quantum groups the subject has been treated in great detail by J.P. 
MacCarthy in \\cite{maccarthy2017random}. The generalization to the compact case is not difficult so that this subsection will be rather expository. If $G$ is a compact group and $\\mu$ is a measure on $G$, the associated random walk consists in picking elements of $G$ at random according to $\\mu$ and then multiplying them. The probability of being in some measurable set after $k$ steps is then given by the $k$-th convolution power $\\mu^{\\ast k}$ of $\\mu$, which can be expressed at the level of functions as\n\\begin{equation*}\n\\int_{G}f(g) \\mathrm{d}\\mu^{\\ast k}(g) = \\int_{G^{k}}f(g_{k}\\cdots g_{1})\\mathrm{d}\\mu(g_{1})\\cdots \\mathrm{d}\\mu(g_{k}).\n\\end{equation*}\nStudying the random walk associated to $\\mu$ is therefore the same as studying the sequence of measures $(\\mu^{\\ast k})_{k\\in \\mathbb{N}}$.\n\nTurning to quantum groups, first note that measures yield through integration linear forms on $\\O(G)$. If the measure is moreover positive, then so is the linear form and if its total mass is $1$ then the linear form sends the unit of $\\O(G)$ to $1$. Thus, probability measures yield \\emph{states} in the following sense :\n\n\\begin{de}\nA state on an involutive unital algebra $A$ is a linear form $\\varphi : A\\to \\mathbb{C}$ such that $\\varphi(1) = 1$ and $\\varphi(a^{*}a) \\geqslant 0$ for all $a\\in A$.\n\\end{de}\n\nA random walk on a compact quantum group $\\mathbb{G}$ is therefore given by a state $\\varphi$ on $\\O(\\mathbb{G})$. The definition of convolution translates straightforwardly to this setting and one can for instance define $\\varphi^{\\ast k}$ by induction through the formula\n\\begin{equation*}\n\\varphi^{\\ast (k+1)} = (\\varphi\\otimes \\varphi^{\\ast k})\\circ\\Delta = (\\varphi^{\\ast k}\\otimes\\varphi)\\circ\\Delta.\n\\end{equation*}\n\nThe key tool to estimate the rate of convergence of a random walk is a fundamental result of P. Diaconis and M. Sashahani \\cite{diaconis1981generating} bounding the \\emph{total variation distance} of the difference between a measure and the uniform one. Classically, if $\\mu$ and $\\nu$ are any two Borel probability measures on $G$, then\n\\begin{equation*}\n\\|\\mu - \\nu\\|_{TV} =\\sup_{E}\\vert \\mu(E) - \\nu(E)\\vert\n\\end{equation*}\nwhere the supremum is over all Borel subsets $E\\subset G$. This definition can be extended to quantum groups thanks to the fact that Borel subsets correspond to projections in the associated von Neumann algebra. This however requires that the states extend to $L^{\\infty}(\\mathbb{G})$, which may not be the case (see for instance Lemma \\ref{lem:boundednesscriterion}).\n\n\\begin{de}\\label{de:totalvariation}\nThe total variation distance between two states $\\varphi$ and $\\psi$ on $L^{\\infty}(\\mathbb{G})$ is defined by\n\\begin{equation*}\n\\|\\varphi - \\psi\\|_{TV} = \\sup_{p\\in \\mathcal{P}(L^{\\infty}(\\mathbb{G}))}\\vert \\varphi(p) - \\psi(p)\\vert,\n\\end{equation*}\nwhere $\\mathcal{P}(L^{\\infty}(\\mathbb{G})) = \\{p\\in L^{\\infty}(\\mathbb{G}) \\mid p^{2} = p = p^{*}\\}$.\n\\end{de}\n\nAssume now that we consider a state $\\varphi$ which is \\emph{absolutely continuous} with respect to the Haar state $h$, in the sense that there exists an element $a_{\\varphi}\\in L^{1}(\\mathbb{G})$ such that $\\varphi(x) = h(a_{\\varphi}x)$ for all $x\\in \\O(\\mathbb{G})$. Then, the total variation distance can be expressed in terms of $a_{\\varphi}$. 
Note that the proof below uses the traciality of the Haar state.\n\n\\begin{lem}\\label{lem:totalvariation}\nIf $\\varphi$ is a state on $L^{\\infty}(\\mathbb{G})$ with an $L^{1}$-density $a_{\\varphi}\\in L^{1}(\\mathbb{G})$, then\n\\begin{equation*}\n\\|\\varphi - h\\|_{TV} = \\frac{1}{2}\\|a_{\\varphi} - 1\\|_{1}.\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nLet $\\mathbf{1}_{\\mathbb{R}_{+}}$ be the indicator function of the positive real numbers and define a projection $p_{+} = \\mathbf{1}_{\\mathbb{R}_{+}}(a_{\\varphi} - 1)$ through functional calculus. We claim that the supremum in Definition \\ref{de:totalvariation} is attained at $p_{+}$. Indeed, for any projection $q\\in \\mathcal{P}(L^{\\infty}(\\mathbb{G}))$, setting $b_{\\varphi} = a_{\\varphi} - 1$ we have\n\\begin{equation*}\n\\vert\\varphi-h\\vert(q) = \\vert h(b_{\\varphi}p_{+}q) + h(b_{\\varphi}(1-p_{+})q)\\vert \\leqslant \\max(h(b_{\\varphi}p_{+}q), h(b_{\\varphi}(p_{+}-1)q))\n\\end{equation*}\nand observing that $p_{+}$ commutes with $b_{\\varphi}$ we get\n\\begin{equation*}\n\\vert\\varphi-h\\vert(q) \\leqslant \\max(h(b_{\\varphi}p_{+}qp_{+}), h(b_{\\varphi}(p_{+}-1)q(p_{+}-1))) \\leqslant \\max(h(b_{\\varphi}p_{+}), h(b_{\\varphi}(p_{+}-1))).\n\\end{equation*}\nWe conclude using the fact that $h(b_{\\varphi}) = (\\varphi - h)(1) = 1$. Now since\n\\begin{equation*}\n\\vert b_{\\varphi}\\vert = \\vert b_{\\varphi}\\vert p_{+} + \\vert b_{\\varphi}\\vert(1-p_{+}) = 2b_{\\varphi}p_{+} - b_{\\varphi}\n\\end{equation*}\nwe get\n\\begin{equation*}\n\\|a_{\\varphi} - 1\\|_{1} = h(\\vert b_{\\varphi}\\vert) = 2h(b_{\\varphi}p_{+}) - h(b_{\\varphi}) = 2(\\varphi - h)(p_{+}) = 2\\|\\varphi - h\\|_{TV}.\n\\end{equation*}\n\\end{proof}\n\nThis equality is the trick leading to the Diaconis-Shahshahani upper bound lemma which, in the end, does not involve $a_{\\varphi}$ any more. To state this result, let us first introduce a notation : if $\\varphi$ is a state and $\\alpha\\in \\Irr(\\mathbb{G})$, we denote by $\\widehat{\\varphi}(\\alpha)$ the matrix with coefficients $\\varphi(u^{\\alpha}_{ij})$ (this does not depend on the choice of a representative of $\\alpha$). We can then consider $\\widehat{\\varphi}$ as an element of the $\\ell^{\\infty}$-sum of the matrix algebras $B(H_{\\alpha})$, denoted by $\\ell^{\\infty}(\\widehat{\\mathbb{G}})$.\n\n\\begin{lem}[Upper bound lemma]\\label{lem:upperbound}\nLet $\\mathbb{G}$ be a compact quantum group and let $\\varphi$ be a state on $\\mathbb{G}$ which is absolutely continuous with respect to the Haar state. Then,\n\\begin{equation*}\n\\|\\varphi^{\\ast k} - h\\|_{TV}^{2} \\leqslant \\frac{1}{4}\\sum_{\\alpha\\in \\Irr(\\mathbb{G})\\setminus\\{\\varepsilon\\}}d_{\\alpha}\\Tr\\left(\\widehat{\\varphi}(\\alpha)^{* k}\\widehat{\\varphi}(\\alpha)^{k}\\right),\n\\end{equation*}\nwhere $\\varepsilon = 1\\in M_{1}(\\O(\\mathbb{G}))$ denotes the trivial representation.\n\\end{lem}\n\n\\begin{proof}\nThe proof for compact groups (assuming that the measure is central) was given in \\cite[Lem 4.3]{rosenthal1994random} and the proof for finite quantum groups was given in \\cite[Lem 5.3.8]{maccarthy2017random}. The argument here is the same so that we simply sketch it. 
The Cauchy-Schwartz inequality yields\n\\begin{equation*}\n\\|a_{\\varphi} - 1\\|_{1}^{2} = h(\\vert a_{\\varphi} - 1\\vert)^{2} \\leqslant h(1^{*}1)h\\left((a_{\\varphi}-1)^{*}(a_{\\varphi}-1)\\right) = \\|a_{\\varphi}-1\\|_{2}^{2}\n\\end{equation*}\nMoreover, the formula\n\\begin{equation*}\n\\widehat{h}(x) = \\sum_{\\alpha\\in \\Irr(\\mathbb{G})}d_{\\alpha}\\Tr(x)\n\\end{equation*}\ndefines a positive weight on $\\ell^{\\infty}(\\widehat{\\mathbb{G}})$. This is the analogue of the counting measure on a discrete group and one can define a Fourier transform $\\mathcal{F} : L^{2}(\\mathbb{G})\\rightarrow \\ell^{2}(\\mathbb{G})$ (see for instance \\cite[Sec 2]{podles1990quantum}) which is isometric. The conclusion now follows from the fact that the Fourier transform of $a_{\\varphi}$ is $\\widehat{\\varphi}$ and the relationship between convolution and Fourier transform.\n\\end{proof}\n\n\\begin{rem}\nBecause of the Cauchy-Schwartz inequality, $L^{2}(\\mathbb{G})\\subset L^{1}(\\mathbb{G})$ so that if $\\varphi$ is not absolutely continuous with respect to $h$, then the right-hand side of the inequality is infinite and the inequality trivially holds.\n\\end{rem}\n\nOur goal is therefore to bound $\\sum d_{\\alpha}\\Tr(\\varphi(\\alpha)^{*k}\\varphi(\\alpha)^{k})$ by an explicit function of $k$. This requires the computation of the trace of arbitrary powers of matrices which can be very complicated. As already observed in \\cite{rosenthal1994random}, things get more tractable when the measure is assumed to be central, i.e. invariant under the adjoint action, since then the Fourier transform of its density consists in scalar multiples of identity matrices. The same is true in the quantum setting, thanks to \\cite[Prop 6.9]{cipriani2012symmetries} which we recall here for convenience.\n\n\\begin{prop}\nLet $\\mathbb{G}$ be a compact quantum group and let $\\varphi : \\O(\\mathbb{G})\\to \\mathbb{C}$ be a state. Then, $\\varphi$ is invariant under the adjoint action if and only if for any irreducible representation $\\alpha\\in \\Irr(\\mathbb{G})$, there exists $\\varphi(\\alpha) \\in \\mathbb{C}$ such that $\\varphi(u_{ij}^{\\alpha}) = \\varphi(\\alpha)\\delta_{ij}$.\n\\end{prop}\n\nSuch \\emph{central states} are completely determined by their restriction to the so-called \\emph{central algebra} of $\\mathbb{G}$, which is simply the algebra $\\O(\\mathbb{G})_{0}$ generated by the characters, thanks to the equality\n\\begin{equation*}\n\\varphi(\\chi_{\\alpha}) = \\sum_{i=1}^{d_{\\alpha}}\\varphi(u_{ii}^{\\alpha}) = d_{\\alpha}\\varphi(\\alpha).\n\\end{equation*}\nIn several key examples, the central algebra is commutative, hence states exactly correspond to measures on its spectrum. A particular case is that of Dirac measures, i.e. evaluation at one point. This setting covers natural analogues of the random walk associated to the uniform measure on a conjugacy class. To see this, assume that $\\O(\\mathbb{G})$ is generated by the coefficients of a representation $u$ of dimension $N$ and let $G$ be the abelianization of $\\mathbb{G}$, that is to say the compact group such that $\\O(G)$ is the maximal abelian quotient of $\\O(\\mathbb{G})$. By construction, $G$ is realized as a group of $N\\times N$ matrices. Let $g\\in G$ and let $\\ev_{g} : \\O(\\mathbb{G})\\to \\mathbb{C}$ be the algebra map sending $u_{ij}$ to $g_{ij}$. 
Then,\n\\begin{equation*}\n\\varphi_{g} = h\\circ m^{(2)}\\circ(\\id\\otimes \\ev_{g}\\otimes S)\\circ\\Delta^{(2)}\n\\end{equation*}\nis a state on $\\O(\\mathbb{G})$, where $\\Delta^{(2)} = (\\id\\otimes\\Delta)\\circ\\Delta$, $m^{(2)} = m\\circ(\\id\\otimes m)$ and $S$ is the antipode. If $\\mathbb{G}$ is classical, then for any function $f$ one has\n\\begin{equation*}\n\\varphi_{g}(f) = \\int_{G}f(kgk^{-1})\\mathrm{d} k\n\\end{equation*}\nso that $\\varphi_{g}$ is the uniform measure on the conjugacy class of $g$. In the general case, the centrality of $\\varphi_{g}$ is easily checked :\n\\begin{align*}\n\\varphi_{g}(u^{\\alpha}_{ij}) & = \\sum_{k,l=1}^{d_{\\alpha}}h(u^{\\alpha}_{ik}\\ev_{g}(u^{\\alpha}_{kl})u^{\\alpha\\ast}_{jl}) = \\sum_{k,l=1}^{d_{\\alpha}}\\ev_{g}(u^{\\alpha}_{kl})h(u^{\\alpha}_{ik}u^{\\alpha\\ast}_{jl}) \\\\\n& = \\sum_{k,l=1}^{d_{\\alpha}}\\ev_{g}(u^{\\alpha}_{kl})\\frac{\\delta_{ij}\\delta_{kl}}{d_{\\alpha}} = \\delta_{ij}\\frac{\\ev_{g}(\\chi_{\\alpha})}{d_{\\alpha}}.\n\\end{align*}\nSeveral interesting random walks on compact Lie groups are of this type and we will study them in the quantum setting in the next sections.\n\n\\section{Free orthogonal quantum groups}\\label{sec:orthogonal}\n\nThe main example which we will study in this work is free orthogonal quantum groups. These objects, denoted by $O_{N}^{+}$, were first introduced by S. Wang in \\cite{wang1995free}. Here is how the associated involutive Hopf algebra is defined :\n\n\\begin{de}\nLet $\\O(O_{N}^{+})$ be the universal $*$-algebra generated by $N^{2}$ \\emph{self-adjoint} elements $u_{ij}$ such that for all $1\\leqslant i, j \\leqslant N$,\n\\begin{equation*}\n\\sum_{k=1}^{N}u_{ik}u_{jk} = \\delta_{ij} = \\sum_{k=1}^{N}u_{ki}u_{kj}.\n\\end{equation*}\nThe formula\n\\begin{equation*}\n\\Delta(u_{ij}) = \\sum_{k=1}^{N}u_{ik}\\otimes u_{kj}\n\\end{equation*}\nextends to a $*$-algebra homomorphism $\\Delta : \\O(O_{N}^{+})\\to \\O(O_{N}^{+})\\otimes \\O(O_{N}^{+})$ and this can be completed into a compact quantum group structure.\n\\end{de}\n\nThe relations defining $\\O(O_{N}^{+})$ are equivalent to requiring that the matrix $[u_{ij}]_{1\\leqslant i, j\\leqslant N}$ is orthogonal. Using this, it is easy to see that the abelianization of $O_{N}^{+}$ is the orthogonal group $O_{N}$. To compute upper bounds for random walks, we need a description of the representation theory of these objects. In fact, since we will only consider central states, all we need is a description of the central algebra which comes from the work of T. Banica \\cite{banica1996theorie}.\n\n\\begin{thm}[Banica]\nThe irreducible representations of $O_{N}^{+}$ can be labelled by non-negative integers, with $u^{0}$ being the trivial representation and $u^{1} = [u_{ij}]_{1\\leqslant i, j\\leqslant N}$. Moreover, the characters satisfy the following recursion relation :\n\\begin{equation}\\label{eq:charactersorthogonal}\n\\chi_{1}\\chi_{n} = \\chi_{n+1} + \\chi_{n-1}.\n\\end{equation}\n\\end{thm}\n\nIn particular, the central algebra $\\O(\\mathbb{G})_{0}$ is abelian and in fact isomorphic to $\\mathbb{C}[X]$. Moreover, the recursion relation \\eqref{eq:charactersorthogonal} is reminiscent of that of Chebyshev polynomials of the second kind. Indeed, let $U_{n}$ be these polynomials, i.e. $U_{n}(\\cos(\\theta))\\sin(\\theta) = \\sin((n+1)\\theta)$ for all $\\theta\\in \\mathbb{R}$. Then, $u_{n}(x) = U_{n}(x\/2)$ satisfies Equation \\eqref{eq:charactersorthogonal}. 
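As a quick sanity check (a purely illustrative numerical sketch which is not needed for any of the proofs; the helper \\texttt{u} and the sample values $N=5$ and $t=3.7$ are arbitrary choices), this identity and the recursion can be verified in a few lines of Python using SciPy's evaluator for the $U_{n}$ :\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import eval_chebyu  # Chebyshev polynomials of the second kind\n\ndef u(n, t):\n    # u_0 = 1, u_1 = t and u_{n+1} = t*u_n - u_{n-1}, i.e. the character recursion\n    a, b = 1.0, float(t)\n    if n == 0:\n        return a\n    for _ in range(n - 1):\n        a, b = b, t * b - a\n    return b\n\nN, t = 5, 3.7\nfor n in range(1, 8):\n    assert np.isclose(u(n, t), eval_chebyu(n, t \/ 2))                # u_n = U_n(.\/2)\n    assert np.isclose(u(1, t) * u(n, t), u(n + 1, t) + u(n - 1, t))  # fusion rule\nprint([round(u(n, N)) for n in range(5)])  # 1, 5, 24, 115, 551\n\\end{verbatim}\nThe integers printed in the last line are the values $u_{n}(5)$, which reappear below as dimensions of irreducible representations. 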
With this in hand, it can be proven that the map sending $\\chi_{n}$ to $u_{n}$ is an isomorphism. Moreover, we have the equality $d_{n} = u_{n}(N)$.\n\n\\subsection{Pure state random walks}\\label{subsec:purestates}\n\nA state is said to be \\emph{pure} if it cannot be written as a non-trivial convex combination of other states. Moreover, pure states on $\\O(O_{N}^{+})$ are still pure when restricted to the central algebra and it is well-known that pure states on an abelian algebra are given by evaluation at points of the spectrum. For $O_{N}^{+}$, the spectrum of $\\chi_{1}$ in the enveloping C*-algebra of $\\O(O_{N}^{+})$ is $[-N, N]$ by \\cite[Lem 4.2]{brannan2011approximation} so that for any $t\\in [-N, N]$ there is a central state $\\varphi_{t}$ on $\\O(O_{N}^{+})$ defined by $\\varphi_{t}(n) = u_{n}(t)\/d_{n}\\in \\mathbb{R}$. It follows from the definition that for a central state $\\varphi$, we have $\\varphi^{\\ast k}(n) = \\varphi(n)^{k}$, so that by Lemma \\ref{lem:upperbound}\n\\begin{equation*}\n\\|\\varphi_{t}^{\\ast k} - h\\|_{TV}^{2} \\leqslant \\frac{1}{4}\\sum_{n=1}^{+\\infty}d_{n}\\frac{u_{n}(t)^{2k}}{d_{n}^{2k-1}} = \\frac{1}{4}\\sum_{n=1}^{+\\infty}\\frac{u_{n}(t)^{2k}}{d_{n}^{2k-2}}\n\\end{equation*}\nand we will have to bound specific values of the polynomials $u_{n}$. It turns out that the behaviour of Chebyshev polynomials is very different if the argument is less than or greater than one and this will be reflected in the existence or absence of a kind of cut-off phenomenon for the associated random walks. Before turning to this, let us give general tools for the computations. Assume that $t > 2$ and let $0 < q(t) < 1$ be such that $t = q(t) + q(t)^{-1}$, i.e.\n\\begin{equation*}\nq(t) = \\frac{t - \\sqrt{t^{2} - 4}}{2}.\n\\end{equation*}\nThen, it can be shown by induction that\n\\begin{equation*}\nu_{n}(t) = \\frac{q(t)^{-n-1} - q(t)^{n+1}}{q(t)^{-1} - q(t)}.\n\\end{equation*}\nThis writing enables to efficiently bound $u_{n}(t)$ :\n\n\\begin{lem}\\label{lem:encadrement}\nFor all $n\\geqslant 1$ and $t\\geqslant 2$,\n\\begin{equation*}\ntq(t)^{-(n-1)} \\leqslant u_{n}(t)\\leqslant\\frac{q(t)^{-n}}{1-q(t)^{2}}\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nConsider the sequence $a_{n} = u_{n}(t)q(t)^{n}$. Then,\n\\begin{equation*}\n\\frac{a_{n+1}}{a_{n}} = q(t)\\frac{u_{n+1}(t)}{u_{n}(t)} = q(t)\\frac{q(t)^{-n-1} - q(t)^{n+1}}{q(t)^{-n} - q(t)^{n}} = \\frac{q(t)^{-n} - q(t)^{n+2}}{q(t)^{-n} - q(t)^{n}} > 1\n\\end{equation*}\nso that $(a_{n})_{n\\in \\mathbb{N}}$ is increasing. It is therefore always greater than its first term, which is $q(t)t = 1+q(t)^{2}$ and always less than its limit, which is\n\\begin{equation*}\n\\frac{q(t)^{-1}}{q(t)^{-1} - q(t)} = \\frac{1}{1-q(t)^{2}}.\n\\end{equation*}\n\\end{proof}\n\nTo lighten notations, let us set\n\\begin{equation*}\nA_{k}(t) = \\sum_{n=1}^{+\\infty}\\frac{u_{n}(t)^{2k}}{d_{n}^{2k-2}} = \\sum_{n=1}^{+\\infty}\\frac{u_{n}(t)^{2k}}{u_{n}(N)^{2k-2}}\n\\end{equation*}\n\n\\subsubsection{Random walks associated to small pure states}\n\nWe start with the case where $t$ is less than $2$. As we will see, things are then rather simple.\n\n\\begin{prop}\\label{prop:upperboundlessthantwo}\nLet $\\vert t\\vert < 2$ be fixed. 
Then, for any $k\\geqslant 2$,\n\\begin{equation*}\n\\|\\varphi_{t}^{\\ast k} - h\\|_{TV}\\leqslant \\frac{N}{2\\sqrt{1-q(N)^{2}}}\\left(\\frac{1}{N\\sqrt{1 - t^{2}\/4}}\\right)^{k}\n\\end{equation*}\nIn particular, if $t<2\\sqrt{1-N^{-2}}$ then the random walk associated to $\\varphi_{t}$ converges exponentially.\n\\end{prop}\n\n\\begin{proof}\nBecause $\\vert t\\vert\\leqslant 2$, there exists $\\theta$ such that $t = 2\\cos(\\theta)$. Thus,\n\\begin{equation*}\n\\vert u_{n}(t)\\vert = \\vert U_{n}(\\cos(\\theta))\\vert = \\left\\vert \\frac{\\sin\\left((n+1)\\theta\\right)}{\\sin(\\theta)}\\right\\vert \\leqslant \\frac{1}{\\vert\\sin(\\theta)\\vert}\n\\end{equation*}\nand\n\\begin{align*}\nA_{k}(t) & \\leqslant \\sum_{n=1}^{+\\infty}\\frac{1}{\\vert\\sin(\\theta)^{2k}\\vert u_{n}(N)^{2k-2}} \\\\\n& \\leqslant \\frac{1}{\\vert\\sin(\\theta)\\vert^{2k}}\\sum_{n=1}^{+\\infty}\\left(\\frac{q(N)^{n-1}}{N}\\right)^{2k-2} \\\\\n& = \\frac{1}{\\vert\\sin(\\theta)\\vert^{2k}N^{2k-2}}\\frac{1}{1-q(N)^{2k-2}} \\\\\n& \\leqslant \\frac{N^{2}}{1-q(N)^{2}}\\left(\\frac{1}{N\\vert\\sin(\\theta)\\vert}\\right)^{2k} \\\\\n\\end{align*}\nThe result now follows from Lemma \\ref{lem:upperbound} and the fact that $\\vert\\sin(\\theta)\\vert = \\sqrt{1-t^{2}\/4}$. Note that for $k = 1$, we get the sum of $\\vert \\sin((n+1)\\theta)\\vert\/\\vert\\sin(\\theta)\\vert$ which need not converge even though $\\varphi_{t}$ is bounded on $L^{\\infty}(O_{N}^{+})$.\n\\end{proof}\n\nProposition \\ref{prop:upperboundlessthantwo} shows that for a fixed $t$, the distance to the Haar state decreases exponentially provided $N$ is large enough and it is natural to wonder how optimal the rate $1\/N\\sqrt{1-t^{2}\/4}$ is. We give a partial answer through a lower obtained by the duality between the noncommutative $L^{1}$ and $L^{\\infty}$ spaces of a tracial von Neumann algebra. Concretely, this means that for any $a\\in L^{1}(\\mathbb{G})$,\n\\begin{equation*}\n\\|a\\|_{1} = \\sup\\{h(ax)\\mid \\|x\\|_{\\infty}\\leqslant 1\\} = \\|\\varphi\\|.\n\\end{equation*}\n\n\\begin{prop}\\label{prop:lowerbound}\nFor any $t\\in [-N, N]$ and any $k\\geqslant 1$,\n\\begin{equation*}\n\\|\\varphi^{\\ast k} - h\\|_{TV} \\geqslant \\frac{N}{4}\\left(\\frac{t}{N}\\right)^{k}.\n\\end{equation*}\n\\end{prop}\n\n\\begin{proof}\nRecall that $\\varphi^{\\ast k}(n) = \\varphi(n)^{k}$ and that $h(\\chi_{n}) = 0$ for all $n\\geqslant 1$. Thus,\n\\begin{equation*}\n\\|\\varphi^{\\ast k} - h\\| \\geqslant \\frac{1}{2}\\sup_{n \\geqslant 1}\\frac{\\varphi^{\\ast k}(\\chi_{n})}{\\|\\chi_{n}\\|_{\\infty}} = \\frac{1}{2}\\sup_{n \\geqslant 1}\\frac{d_{n}}{\\|\\chi_{n}\\|_{\\infty}}\\left(\\frac{u_{n}(t)}{d_{n}}\\right)^{k}.\n\\end{equation*}\nTaking $n=1$ and using $\\|\\chi_{1}\\|_{\\infty} = 2$ then yields the result.\n\\end{proof}\n\nEven though this bound is very general since it works for all $t$, it yields the same exponential rate as Proposition \\ref{prop:upperboundlessthantwo} for $t = \\pm \\sqrt{2}$, meaning that the bound of Proposition \\ref{prop:upperboundlessthantwo} is rather tight.\n\n\\subsubsection{The cut-off phenomenon}\n\nWe now turn to the case when $\\vert t\\vert$ is larger than two. The corresponding states will exhibit a kind of cut-off phenomenon : for a number of steps (depending on $t$ and $N$), the state is not absolutely continuous and as soon as it is, it converges exponentially. Let us first consider the boundedness problem.\n\n\\begin{lem}\\label{lem:boundednesscriterion}\nLet $\\vert t\\vert > 2$ be fixed. 
Then, $\\varphi_{t}^{\\ast k}$ extends to $L^{\\infty}(O_{N}^{+})$ if and only if $q(t) > q(N)^{1-1\/k}$. Moreover, it then has an $L^{1}$-density with respect to $h$.\n\\end{lem}\n\n\\begin{proof}\nBecause $\\|\\chi_{n}\\|_{\\infty} = n+1$,\n\\begin{equation*}\n\\frac{\\varphi_{t}^{\\ast k}(\\chi_{n})}{\\|\\chi_{n}\\|_{\\infty}} = \\frac{1}{n+1}\\frac{u_{n}(t)^{k}}{u_{n}(N)^{k-1}}\n\\end{equation*}\nso that Lemma \\ref{lem:encadrement} yields\n\\begin{align*}\n\\frac{\\varphi_{t}^{\\ast k}(\\chi_{n})}{\\|\\chi_{n}\\|_{\\infty}} & \\geqslant \\frac{1}{n+1}\\left(q(N)^{n}(1-q(N)^{2})\\right)^{k-1}\\left(q(t)^{-n+1}t\\right)^{k} \\\\\n& = \\left(\\frac{q(N)^{k-1}}{q(t)^{k}}\\right)^{n}\\frac{(tq(t))^{k}(1-q(N)^{2})^{k-1}}{n+1}\n\\end{align*}\nand this is not bounded in $n$ if $q(t) < q(N)^{1-1\/k}$. If now $k$ satisfies the inequality in the statement, then a similar estimate shows that the sequence\n\\begin{equation*}\na_{t, p} = \\sum_{n=0}^{p}\\frac{u_{n}(t)^{k}}{u_{n}(N)^{k-1}}\\chi_{n}\n\\end{equation*}\nconverges in $L^{\\infty}(\\mathbb{G})\\subset L^{1}(\\mathbb{G})$ and its limit is the density of $\\varphi_{t}^{\\ast k}$.\n\\end{proof}\n\nThe previous statement may be disappointing in that for a fixed $t$, the number $k$ goes to $1$ as $N$ goes to infinity. However, in the cases coming from classical random walks, $t$ depends on $N$ and we then get a cut-off parameter which also depends on $N$, see Subsection \\ref{subsec:randomrotations}. Let us now prove that as soon as $\\varphi_{t}^{\\ast k}$ is absolutely continuous, it converges exponentially to the Haar state. This is the main result of this section.\n\n\\begin{thm}\\label{thm:estimategreaterthan2}\nLet $\\vert t\\vert > 2$ be fixed and let $k_{0}$ be the smallest integer such that $q(t) > q(N)^{1-1\/k_{0}}$. Then, for any $k\\geqslant k_{0}$,\n\\begin{equation*}\n\\|\\varphi_{t}^{\\ast k} - h\\|_{TV}\\leqslant \\frac{1}{2}\\frac{Nq(t)^{k_{0}}}{\\sqrt{q(t)^{2k_{0}} - q(N)^{2k_{0}-2}}}\\left(\\frac{1}{Nq(t)(1-q(t)^{2})}\\right)^{k}\n\\end{equation*}\nMoreover, there exists $t_{0}$ and $t_{1}$ depending on $N$ satisfying $2 < t_{0} < 4\/\\sqrt{3} < t_{1} < N$ and such that if $t_{0} < \\vert t\\vert < t_{1}$, then the random walk associated to $\\varphi_{t}$ converges exponentially after $k_{0}$ steps.\n\\end{thm}\n\n\\begin{proof}\nWe start the computation by using Lemma \\ref{lem:encadrement} :\n\\begin{align*}\n\\frac{u_{n}(t)^{2k}}{u_{n}(N)^{2k-2}} & \\leqslant \\left(\\frac{1}{q(t)^{n}(1-q(t)^{2})}\\right)^{2k}\\left(\\frac{q(N)^{n-1}}{N}\\right)^{2k-2} \\\\\n& = \\left(\\frac{q(N)^{2k-2}}{q(t)^{2k}}\\right)^{n-1}\\frac{1}{N^{2k-2}(q(t)(1-q(t)^{2}))^{2k}}.\n\\end{align*}\nBy assumption, $q(t) > q(N)^{1-1\/k}$ so that\n\\begin{align*}\nA_{k}(t) & \\leqslant\\frac{1}{N^{2k-2}(q(t)(1-q(t)^{2}))^{2k}}\\frac{1}{1-\\frac{q(N)^{2k-2}}{q(t)^{2k}}} \\\\\n& \\leqslant\\frac{1}{N^{2k-2}(q(t)(1-q(t)^{2}))^{2k}}\\frac{1}{1-\\frac{q(N)^{2k_{0}-2}}{q(t)^{2k_{0}}}} \\\\\n& = \\frac{N^{2}q(t)^{2k_{0}}}{q(t)^{2k_{0}} - q(N)^{2k_{0}-2}}\\left(\\frac{1}{Nq(t)(1-q(t)^{2})}\\right)^{2k} \\\\\n\\end{align*}\nand the result follows by Lemma \\ref{lem:upperbound}.\n\nThe condition for exponential convergence is $q(t)(1-q(t)^{2}) > N^{-1}$. Consider the function $f : x\\mapsto x(1-x^{2})$. 
Elementary calculus shows that its maximum is\n\\begin{equation*}\nf\\left(\\frac{1}{\\sqrt{3}}\\right) = \\frac{2}{3\\sqrt{3}} > \\frac{1}{3} \\geqslant \\frac{1}{N}.\n\\end{equation*}\nThus, there exists an open interval $I$ containing $1\/\\sqrt{3}$ such that $f(q(t)) > 1\/N$ as soon as $q(t)$ is in $I$. Since $q(t) = 1\/\\sqrt{3}$ corresponds to $t = q(t) + q(t)^{-1} = 4\/\\sqrt{3}$, the proof is complete.\n\\end{proof}\n\nSo far our use of the term cut-off has been a little improper since we did not provide an upper bound for the total variation distance depending only on $(k-k_{0})\/N$. We will see however that when considering particular values of $t$ related to classical random walks, one can sharpen the previous result into a genuine cut-off statement. To conclude this section, let us give an explicit formula for the threshold $k_{0}$. Taking the logarithm of both sides of the inequality $q(t)^{k} > q(N)^{k-1}$ and noting that $q(t) > q(N)$ yields\n\\begin{equation*}\nk_{0} = \\left\\lceil-\\frac{\\ln(q(N))}{\\ln(q(t)\/q(N))}\\right\\rceil.\n\\end{equation*}\n\n\\subsection{The quantum uniform plane Kac walk}\\label{subsec:randomrotations}\n\nIn this section we will give an explicit example of a cut-off phenomenon by considering the quantum analogue of the \\emph{uniform plane Kac walk} on $SO(N)$. In the classical case, this was studied by J. Rosenthal in \\cite{rosenthal1994random} and by B. Hough and Y. Jiang in \\cite{hough2017cut} (who coined the name). In this model, a random rotation is obtained by randomly choosing a plane in $\\mathbb{R}^{N}$ and then performing a rotation of some fixed angle $\\theta$ in that plane. The corresponding measure is the uniform measure on the conjugacy class of a matrix $R_{\\theta}$ corresponding to a rotation in a plane (they are all conjugate once the angle $\\theta$ is fixed, so that the choice of the plane does not matter). As explained in Subsection \\ref{subsec:randomwalks}, this uniform measure has a natural analogue on $O_{N}^{+}$. In a sense, we are now \"quantum rotating\" the plane of $R_{\\theta}$ and the corresponding state is $\\varphi_{R_{\\theta}}$. Since\n\\begin{equation*}\n\\ev_{R_{\\theta}}(\\chi_{1}) = \\Tr(R_{\\theta}) = N-2+2\\cos(\\theta) = u_{1}(N-2+2\\cos(\\theta)),\n\\end{equation*}\nit follows by induction that $\\ev_{R_{\\theta}}(\\chi_{n}) = u_{n}(N-2+2\\cos(\\theta))$ so that $\\varphi_{R_{\\theta}} = \\varphi_{N-2+2\\cos(\\theta)}$. Note that this in a sense means that two classical orthogonal matrices are quantum conjugate if and only if they have the same trace since the uniform measures on their conjugacy classes then coincide. This illustrates the fact that there is no \"quantum $SO(N)$\" subgroup in $O_{N}^{+}$.\n\nAssuming that $\\theta$ is fixed once and for all, we will show that the corresponding random walk has a cut-off. This means that we have to prove that there exists $k_{1}$ such that for $k_{1} + cN$ steps the total variation distance decreases exponentially in $c$ while for $k_{1} - cN$ steps it is bounded below by a function which decreases slowly in $c$. We will therefore split the arguments into two parts. To simplify notations let us set $\\tau = 2(1-\\cos(\\theta))$.\n\n\\subsubsection{Upper bound}\n\nWe start with the upper bound. Since the parameter $t$ now depends on $N$, it is not even clear that the convolution powers of the state will ever extend to $L^{\\infty}(\\mathbb{G})$. 
To get some insight into this problem, let us first consider the threshold for $t = N-\\tau$ obtained in the previous section. For large $N$,\n\\begin{equation*}\n-\\frac{\\ln(q(N))}{\\ln(q(N-\\tau)\/q(N))} \\sim \\frac{N\\ln(N)}{\\tau}\n\\end{equation*}\nwhich is exactly the cut-off parameter conjectured by J. Rosenthal in the classical case (and proven there to be valid for $\\theta = \\pi$) and later confirmed by B. Hough and Y. Jiang in \\cite{hough2017cut}. This suggests that the same phenomenon should occur for $O_{N}^{+}$. However, proving it requires some suitable estimates on the function $q(t)$. We start by giving some elementary inequalities.\n\n\\begin{lem}\\label{lem:variousbounds}\nThe following inequalities hold for all $t, N\\geqslant 4$ :\n\\begin{enumerate}\n\\item $\\displaystyle\\frac{q(N)}{q(N-\\tau)} \\leqslant \\frac{N-\\tau}{N}$,\n\\item $q(N) > 1\/N$,\n\\item $N\\ln\\displaystyle\\left(1-\\frac{\\tau}{N}\\right) \\leqslant -\\tau$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nConsider the function $f : t\\mapsto tq(t)$. Noticing that\n\\begin{equation*}\nq'(t) = \\frac{1}{2} - \\frac{1}{2}\\frac{t}{\\sqrt{t^{2} - 4}} = \\frac{\\sqrt{t^{2} - 4} - t}{2\\sqrt{t^{2}-4}} = \\frac{-q(t)}{\\sqrt{t^{2}-4}},\n\\end{equation*}\nwe see that\n\\begin{equation*}\nf'(t) = q(t) + tq'(t) = \\left(1-\\frac{t}{\\sqrt{t^{2}-4}}\\right)q(t) < 0\n\\end{equation*}\nso that $f$ is decreasing. Applying this to $N > N-\\tau$ yields the first inequality while $f(t) > \\lim_{+\\infty} f(t) = 1$ yields the second one. The third inequality follows from the well-known bound $\\ln(1-x) \\leqslant -x$ valid for any $0 \\leqslant x < 1$.\n\\end{proof}\n\nIn the proof of Theorem \\ref{thm:estimategreaterthan2} we saw that the total variation distance can be bounded by\n\\begin{equation*}\n\\frac{Nq(N-\\tau)^{k}}{2\\sqrt{q(N-\\tau)^{2k} - q(N)^{2k-2}}}\\left(\\frac{1}{Nq(N-\\tau)(1-q(N-\\tau)^{2})}\\right)^{k}\n\\end{equation*}\nand the inequalities above will enable us to bound the first part of this expression. For the second part, we need to study $Nq(N-\\tau)(1-q(N-\\tau)^{2})$. Note that it is not even clear that this is greater than one and in fact, for $\\tau = 0$ it equals $1-q(N)^{4} < 1$. However, as soon as $\\tau > 0$, everything works provided $N$ is large enough. 
To show this we will first prove two computational lemmata.\n\n\\begin{lem}\\label{lem:threefunctions}\nConsider the following functions defined for $t > 2$ :\n\\begin{equation*}\nf(t) = \\frac{\\tau^{2}}{2t(t+\\tau)^{2}} \\text{ and } g(t) = \\frac{16}{5}\\frac{1}{t^{3}(t^{2}-4)}\n\\end{equation*}\nand set\n\\begin{equation*}\nC(\\tau) = \\frac{2}{\\tau\\sqrt{5}}(2+\\sqrt{2+9\\tau^{2}})\n\\end{equation*}\nThen, $f(t) \\geqslant g(t)$ as soon as $t\\geqslant C(\\tau)$.\n\\end{lem}\n\n\\begin{proof}\nThe inequality $f(t)\\geqslant g(t)$ can be written as\n\\begin{equation}\\label{eq:functionalinequality}\n5\\tau^{2}t^{2}(t^{2}-4)\\geqslant 32(t+\\tau)^{2}.\n\\end{equation}\nBecause $t^{2}\\geqslant t^{2}-4$, the left-hand side is greater than $[\\sqrt{5}\\tau(t^{2}-4)]^{2}$ and \\eqref{eq:functionalinequality} will be satisfied as soon as $\\sqrt{5}\\tau(t^{2}-4)\\geqslant 4\\sqrt{2}(t+\\tau)$, which amounts to the quadratic inequality\n\\begin{equation*}\n\\sqrt{5}\\tau t^{2} - 4\\sqrt{2}t - 4\\tau(\\sqrt{5} + \\sqrt{2})\\geqslant 0.\n\\end{equation*}\nThe discriminant is $32 + 16\\tau^{2}(5+\\sqrt{10}) \\leqslant 32 + 16\\times 9\\times \\tau^{2}$ so that \\eqref{eq:functionalinequality} is satisfied as soon as\n\\begin{equation*}\nt\\geqslant \\frac{2\\sqrt{2}}{\\sqrt{5}\\tau} + \\frac{2}{\\sqrt{5}\\tau}\\sqrt{2+9\\tau^{2}}.\n\\end{equation*}\nThe result now follows from the observation that $C(\\tau)$ is greater than the right-hand side because $2\/3\\geqslant \\sqrt{2\/5}$.\n\\end{proof}\n\nWith this in hand we can prove the main inequality that we need.\n\n\\begin{lem}\\label{lem:hardlowerbound}\nLet $0 < \\tau \\leqslant 4$. Then, for any $N\\geqslant \\tau + C(\\tau)$,\n\\begin{equation*}\nq(N-\\tau)(1-q(N-\\tau)^{2}) \\geqslant \\frac{e^{\\tau\/N}}{N}.\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nLet us set $a_{0} = 1$, $a_{1} = 1\/2$ and for $n\\geqslant 2$,\n\\begin{equation*}\na_{n} = \\frac{1\\times 3\\times\\cdots\\times(2n-3)}{2\\times 4\\times\\cdots\\times 2n} = \\frac{(2n-3)!}{2^{n-2}(n-2)!\\times 2^{n}(n!)} = \\frac{1}{n4^{n-1}}{\\binom{2n-3}{n-1}}\n\\end{equation*}\nso that $\\sqrt{1+x} = \\sum_{n}(-1)^{n+1}a_{n}x^{n}$. It follows that\n\\begin{equation*}\nq(t) = \\frac{t}{2}\\left(1 - \\sqrt{1-\\frac{4}{t^{2}}}\\right) = \\frac{1}{t} + \\sum_{n=2}^{+\\infty}a_{n}\\left(\\frac{2}{t}\\right)^{2n-1}.\n\\end{equation*}\nMoreover, using twice the identity $1+q(t)^{2} = tq(t)$, we see that\n\\begin{equation*}\nq(t)(1-q(t)^{2}) = q(t)(2-tq(t)) = 2q(t) - t(tq(t) - 1) = 2q(t) - t^{2}q(t) + t\n\\end{equation*}\nand we deduce from this a series expansion, namely (setting $t = N-\\tau$)\n\\begin{align*}\nq(N-\\tau)(1-q(N-\\tau)^{2}) & = \\frac{2}{t} + \\sum_{n=2}^{+\\infty}2a_{n}\\left(\\frac{2}{t}\\right)^{2n-1} - t - \\sum_{n=2}^{+\\infty}a_{n}\\frac{2^{2n-1}}{t^{2n-3}} + t \\\\\n& = \\frac{1}{t} + \\sum_{n=2}^{+\\infty}(2a_{n} - 4a_{n+1})\\left(\\frac{2}{t}\\right)^{2n-1} \\\\\n& = \\frac{1}{t} - \\sum_{n=2}^{+\\infty}b_{n}\\left(\\frac{2}{t}\\right)^{2n-1}\n\\end{align*}\nwith $b_{n} = -2(a_{n} - 2a_{n+1}) = 2a_{n}(n-2)\/(n+1) > 0$. We have to find an upper bound for the sum in this expression. Using the fact that $\\binom{a}{b}\\leqslant 2^{a}$, we see that for $n\\geqslant 4$\n\\begin{equation*}\nb_{n}\\leqslant 2\\frac{n-2}{n+1}\\frac{1}{n4^{n-1}}2^{2n-3} = \\frac{n-2}{n(n+1)} \\leqslant \\frac{1}{10}\n\\end{equation*}\nwhere the last inequality comes from the fact that the sequence $(n-2)\/n(n+1)$ is decreasing for $n\\geqslant 4$. 
Since $b_{2}=0$ and $b_{3} = 1\/32 < 1\/10$, we have for all $t > 2$\n\\begin{align*}\n\\sum_{n=2}^{+\\infty}b_{n}\\left(\\frac{2}{t}\\right)^{2n-1} \\leqslant \\frac{1}{10}\\sum_{n=3}^{+\\infty}\\left(\\frac{2}{t}\\right)^{2n-1} = \\frac{16}{5}\\frac{1}{t^{3}(t^{2}-4)} = g(t).\n\\end{align*}\nMoreover,\n\\begin{align*}\n\\frac{1}{t} - \\frac{e^{\\tau\/(t+\\tau)}}{t+\\tau} & = \\frac{1}{t} - \\frac{1}{t+\\tau} - \\frac{\\tau}{(t+\\tau)^{2}} - \\frac{\\tau^{2}}{2(t+\\tau)^{3}} - \\sum_{k=3}^{+\\infty}\\frac{\\tau^{k}}{k!(t+\\tau)^{k+1}} \\\\\n& \\geqslant \\frac{\\tau^{2}}{(t+\\tau)^{2}}\\left(\\frac{1}{t} - \\frac{1}{2(t+\\tau)}\\right) - \\frac{\\tau^{3}}{(t+\\tau)^{4}}\\sum_{k=3}^{+\\infty}\\frac{1}{k!} \\\\\n& = \\frac{\\tau^{2}}{(t+\\tau)^{2}}\\left(\\frac{1}{2t} + \\frac{\\tau}{2t(t+\\tau)}\\right) - \\frac{\\tau^{3}}{(t+\\tau)^{4}}\\sum_{k=3}^{+\\infty}\\frac{1}{k!} \\\\\n& \\geqslant \\frac{\\tau^{2}}{2t(t+\\tau)^{2}} + \\frac{\\tau^{3}}{2t(t+\\tau)^{3}} - \\left(e-\\frac{5}{2}\\right)\\frac{\\tau^{3}}{(t+\\tau)^{4}} \\\\\n& \\geqslant \\frac{\\tau^{2}}{2t(t+\\tau)^{2}} = f(t)\n\\end{align*}\nSumming up, by Lemma \\ref{lem:threefunctions},\n\\begin{equation}\\label{eq:hardlowerbound}\nq(t)(1-q(t)^{2}) - \\frac{e^{\\tau\/(t+\\tau)}}{t+\\tau} \\geqslant f(t) - g(t)\\geqslant 0\n\\end{equation}\nas soon as $t\\geqslant C(\\tau)$, i.e. $N\\geqslant \\tau + C(\\tau)$\n\\end{proof}\n\n\\begin{rem}\nThe condition $N\\geqslant \\tau + C(\\tau)$ could probably be sharpened by considering better bounds for the binomial coefficients and improving Lemma \\ref{lem:threefunctions}. However, it is already quite good since for instance for $\\tau = 4$ it yields $N\\geqslant 8$ and for $\\tau=2$ it yields $N\\geqslant 6$.\n\\end{rem}\n\nWe are now ready to establish the upper bound for the cut-off phenomenon announced in the beginning of this section, which is the main result of this work.\n\n\\begin{thm}\\label{thm:randomrotation}\nThe random walk associated to $0 < \\theta\\leqslant \\pi$ has an upper cut-off at\n\\begin{equation*}\n\\frac{N\\ln(N)}{2(1-\\cos(\\theta))}\n\\end{equation*}\nsteps in the following sense : if $N\\geqslant \\tau + C(\\tau)$, then for any $c_{0} > 0$ and all $c\\geqslant c_{0}$, after\n\\begin{equation*}\nk = \\frac{N\\ln(N)}{2(1-\\cos(\\theta))} + cN\n\\end{equation*}\nsteps we have\n\\begin{equation*}\n\\|\\varphi_{R_{\\theta}}^{\\ast k} - h\\|_{TV} \\leqslant \\frac{1}{2\\sqrt{1-e^{-4c_{0}(1-\\cos(\\theta))}}}e^{-2c(1-\\cos(\\theta))}.\n\\end{equation*}\n\\end{thm}\n\n\\begin{proof}\nLet us set $k_{1} = N\\ln(N)\/\\tau$,\n\\begin{equation*}\nB_{k}(N) = \\frac{Nq(N-\\tau)^{k}}{2\\sqrt{q(N-\\tau)^{2k} - q(N)^{2k-2}}} \\text{ and } B_{k}'(N) = \\left(\\frac{1}{Nq(N-\\tau)(1-q(N-\\tau)^{2})}\\right)^{k}.\n\\end{equation*}\nWe will bound each part separately and then combine them to get the desired estimate. 
First, using Lemma \\ref{lem:variousbounds} we see that\n\\begin{equation*}\nN\\ln\\left(\\frac{q(N)}{q(N-\\tau)}\\right) \\leqslant N\\ln\\left(\\frac{N-\\tau}{N}\\right) \\leqslant -\\tau,\n\\end{equation*}\nso that\n\\begin{equation*}\n(2k_{1} + 2cN)\\ln\\left(\\frac{q(N)}{q(N-\\tau)}\\right) - 2\\ln(q(N)) \\leqslant - 2\\tau c\n\\end{equation*}\nand it then follows that\n\\begin{align*}\nB_{k}(N)\\leqslant \\frac{N}{2\\sqrt{1-e^{-2\\tau c}}} \\leqslant \\frac{N}{2\\sqrt{1-e^{-2\\tau c_{0}}}}.\n\\end{align*}\nTurning now to $B_{k}'(N)$, we have by Lemma \\ref{lem:hardlowerbound}\n\\begin{align*}\n(k_{1} + cN)\\left(-\\ln(Nq(N-\\tau)(1-q(N-\\tau)^{2}))\\right) & \\leqslant -\\ln(N) - \\tau c\n\\end{align*}\nso that\n\\begin{equation*}\nB_{k}'(N) \\leqslant \\frac{1}{N}e^{-\\tau c}.\n\\end{equation*}\nGathering both inequalities eventually yields the announced estimate.\n\\end{proof}\n\n\\begin{rem}\nThe fact that the statement is not uniform in $c$ may be disappointing, but we cannot do better with the upper bound lemma in the sense that for $k = k_{1} + cN$, all the inequalities we used become equivalences as $N$ goes to infinity, so that $e^{c\\tau}A_{k}(N-\\tau)$ is not bounded above uniformly in $c$. Note that the result could also be stated in the following way : for any $\\epsilon > 0$, there is a uniform (in $c$) upper cut-off at $k_{1}(1+\\epsilon)$ steps.\n\\end{rem}\n\n\\subsubsection{Lower bound}\n\nTo have a genuine cut-off phenomenon, we must now show that if $k = N\\ln(N)\/\\tau - cN$, then the total variation distance is bounded below by something which is almost constant. Usually, such bounds are proven using the Chebyshev inequality. Note that if $x$ is a self-adjoint element in a von Neumann algebra, then it generates an abelian subalgebra which is therefore isomorphic to $L^{\\infty}(X)$ for some space $X$. Then, any state on the original algebra restricts to a measure on $X$ so that it makes sense to apply the Chebyshev inequality to $x$. In our case, we will apply it to $x = \\chi_{1}$, so that we have to estimate the expectation and variance of this element under the state $\\varphi_{R_{\\theta}}^{\\ast k}$. 
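In this setting, for a state $\\psi$, a self-adjoint element $x$ and any $a > 0$, Chebyshev's inequality thus reads\n\\begin{equation*}\n\\psi\\left(\\mathbf{1}_{[a, +\\infty[}(\\vert x - \\psi(x)\\vert)\\right) \\leqslant \\frac{\\var_{\\psi}(x)}{a^{2}}.\n\\end{equation*}\n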
To keep things clear, we first give these estimates in a lemma.\n\n\\begin{lem}\\label{lem:lowerboundquantumrotation}\nFor $k = N\\ln(N)\/\\tau - cN$ and $N\\geqslant 5$, we have\n\\begin{equation*}\n\\varphi_{R_{\\theta}}^{\\ast k}(\\chi_{1}) \\geqslant \\frac{e^{c\\tau}}{5} \\text{ and } \\var_{\\varphi_{R_{\\theta}}^{\\ast k}}(\\chi_{1}) \\leqslant 1.\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nAs explained in the proof of \\cite[Thm 2.1]{rosenthal1994random}, for any $N\\geqslant 5$ and any $-4 \\leqslant \\tau \\leqslant 4$,\n\\begin{equation*}\nN\\left(1 - \\frac{\\tau}{N}\\right)^{N\\ln(N)\/\\tau}\\geqslant \\frac{1}{5}\n\\end{equation*}\nso that\n\\begin{equation*}\n\\varphi_{R_{\\theta}}^{\\ast k}(\\chi_{1}) = \\frac{(N-\\tau)^{k}}{N^{k-1}} = N\\left(1-\\frac{\\tau}{N}\\right)^{N\\ln(N)\/\\tau}\\left(1-\\frac{\\tau}{N}\\right)^{-cN}\\geqslant \\frac{e^{c\\tau}}{5}.\n\\end{equation*}\nAs for the second inequality, first note that $\\chi_{1}^{2} = \\chi_{2} + 1$ so that\n\\begin{align*}\n\\var_{\\varphi_{R_{\\theta}}^{\\ast k}}(\\chi_{1}) & = 1 + \\frac{((N-\\tau)^{2} - 1)^{k}}{(N^{2}-1)^{k-1}} - \\left(\\frac{(N-\\tau)^{k}}{N^{k-1}}\\right)^{2} \\\\\n& \\leqslant 1 + \\frac{(N-\\tau)^{2k}}{N^{2k-2}}\\left(\\frac{\\left(1-(N-\\tau)^{-2}\\right)^{k}}{(1-N^{-2})^{k-1}} - 1\\right) \\\\\n& \\leqslant 1.\n\\end{align*}\n\\end{proof}\n\nWe are now ready for the proof of the lower bound.\n\n\\begin{prop}\\label{prop:lowercutoff}\nThe random walk associated to $0 < \\theta\\leqslant \\pi$ has a lower cut-off at\n\\begin{equation*}\n\\frac{N\\ln(N)}{2(1-\\cos(\\theta))}\n\\end{equation*}\nsteps in the following sense : for any $c > 0$, at\n\\begin{equation*}\nk = \\frac{N\\ln(N)}{2(1-\\cos(\\theta))} - cN\n\\end{equation*}\nsteps we have\n\\begin{equation*}\n\\|\\varphi_{R_{\\theta}}^{\\ast k} - h\\|_{TV} \\geqslant 1-200e^{-2c\\tau}\n\\end{equation*}\n\\end{prop}\n\n\\begin{proof}\nWe will evaluate the states at projections obtained by functional calculus and use the original definition of the total variation distance. Let us denote by $\\mathbf{1}_{S}$ the indicator function of a subset $S$ of $\\mathbb{R}$. 
The proof relies on the same trick as in the classical case (see for instance \\cite{rosenthal1994random}) using Chebyshev's inequality : noticing that because of the first inequality of Lemma \\ref{lem:lowerboundquantumrotation}, $\\mathbf{1}_{[0, e^{c\\tau}\/10]}(\\vert \\chi_{1}\\vert)\\leqslant \\mathbf{1}_{[e^{c\\tau}\/10, +\\infty[}(\\vert\\varphi_{R_{\\theta}}^{\\ast k}(\\chi_{1}) - \\chi_{1}\\vert)$, we have\n\\begin{align*}\n\\varphi_{R_{\\theta}}^{\\ast k}\\left(\\mathbf{1}_{[0, e^{c\\tau}\/10]}(\\vert\\chi_{1}\\vert)\\right) & \\leqslant \\varphi_{R_{\\theta}}^{\\ast k}\\left(\\mathbf{1}_{[e^{c\\tau}\/10, +\\infty[}(\\vert\\varphi_{R_{\\theta}}^{\\ast k}(\\chi_{1}) - \\chi_{1}\\vert)\\right) \\\\\n& \\leqslant 100e^{-2c\\tau}\\var_{\\varphi_{R_{\\theta}}^{\\ast k}}(\\chi_{1}) \\\\\n& \\leqslant 100e^{-2c\\tau}.\n\\end{align*}\nOn the other hand, since $h(\\chi_{1}) = 0$ and $h(\\chi_{1}^{2}) = 1$,\n\\begin{equation*}\nh\\left(\\mathbf{1}_{[0, e^{c\\tau}\/10]}(\\vert\\chi_{1}\\vert)\\right) = 1 - h\\left(\\mathbf{1}_{]e^{c\\tau}\/10, +\\infty[}(\\vert\\chi_{1}\\vert)\\right) \\geqslant 1 - 100e^{-2c\\tau}.\n\\end{equation*}\nGathering these facts, we get\n\\begin{equation*}\n\\|\\varphi_{R_{\\theta}}^{\\ast k} - h\\|_{TV} \\geqslant \\left\\vert h(\\mathbf{1}_{[0, e^{c\\tau}\/10]}(\\vert\\chi_{1}\\vert)) - \\varphi_{R_{\\theta}}^{\\ast k}(\\mathbf{1}_{[0, e^{c\\tau}\/10]}(\\vert\\chi_{1}\\vert))\\right\\vert \\geqslant 1-200e^{-2c\\tau}.\n\\end{equation*}\n\\end{proof}\n\nThe combination of Theorem \\ref{thm:randomrotation} and Proposition \\ref{prop:lowercutoff} establishes the announced cut-off phenomenon. For $\\theta = \\pi$, J. Rosenthal proved \\cite{rosenthal1994random} that $N\\ln(N)\/4$ steps suffice to get exponential convergence, in accordance with our result (for $N\\geqslant 8$). For $\\theta\\neq \\pi$, he could only show that at least $N\\ln(N)\/2(1-\\cos(\\theta))$ steps are required and the sufficiency was proved by B. Hough and Y. Jiang in \\cite{hough2017cut}. One can also consider the random walk given by a random reflection since they form a conjugacy class. Noting that any reflection has trace $N-2$, the previous argument shows that for $N\\geqslant 6$ there is a cut-off with parameter $N\\ln(N)\/2$, in accordance with the results of U. Porod in the classical case \\cite{porod1996cut}. Note that we could also use the same computations to obtain a cut-off for $\\varphi_{t}$ as soon as $t > 2$, but we decided to stick to random walks which are connected to important classical examples.\n\nThe reader may be surprised that our random walk is defined on an analogue of $O_{N}$ instead of $SO(N)$ since we are considering the conjugacy class of a matrix with determinant one. This comes from the fact that $O_{N}^{+}$ is in a sense connected as a compact quantum group so that it has no \"$SO^{+}_{N}$\" quantum subgroup (recall that matrices with opposite determinant are quantum conjugate if they have the same trace) and this allows the random walk to spread on the whole of $O_{N}^{+}$. There is another quantum group linked to $SO(N)$, which is the quantum group of trace-preserving automorphisms of the algebra $M_{N}(\\mathbb{C})$ of $N$ by $N$ matrices. 
We will see in Subsection \\ref{subsec:quantumautomorphisms} that the uniform plane Kac walk on it has a cut-off with the same parameter as for $O_{N}^{+}$.\n\n\\subsection{Mixed rotations}\\label{subsec:mixedrotations}\n\nOne may also consider random walks associated to states which are \"mixed\" instead of being pure. For instance, let $\\nu$ be a probability measure on the circle $\\mathbb{T}$, and set\n\\begin{equation*}\n\\varphi_{\\nu}(x) = \\int_{\\mathbb{T}}\\varphi_{R_{\\theta}}(x)\\mathrm{d}\\nu(\\theta).\n\\end{equation*}\nThis defines a central state $\\varphi_{\\nu}$ on $O_{N}^{+}$ corresponding to a random walk where $\\theta$ is chosen randomly according to $\\nu$ and then $R_{\\theta}$ is randomly conjugated. B. Hough and Y. Jiang obtained in \\cite{hough2017cut} cut-off results for these random walks with the sole restriction that $\\nu\\neq\\delta_{0}$. In our context, a stronger assumption will be needed, due to an analytic issue. Assume for instance $\\nu(\\{0\\}) = p > 0$, then $\\varphi_{\\nu}$ can be written as\n\\begin{equation*}\n\\varphi_{\\nu} = p\\varphi_{N} + (1-p)\\varphi_{\\nu'}\n\\end{equation*}\nwhere $\\nu' = (1-p)^{-1}(\\nu - p.\\delta_{0})$. But $\\varphi_{N}$ is a very particular map called the \\emph{co-unit} and for a compact quantum group, the co-unit is bounded on $L^{\\infty}(\\mathbb{G})$ if and only if $\\mathbb{G}$ is \\emph{co-amenable}. Since co-amenability is known to fail for $O_{N}^{+}$ by \\cite[Cor 1]{banica1997groupe} and $\\varphi_{N}^{\\ast k} = \\varphi_{N}$ for any $k$, we see that $\\varphi_{\\nu}^{\\ast k}\\geqslant p^{k}\\varphi_{N}$ never extends to $L^{\\infty}(O_{N}^{+})$ so that the total variation distance between $\\varphi_{\\nu}$ and $h$ is not defined.\n\nThis suggests to assume that $\\nu(\\{0\\}) = 0$, but we were not able to prove a cut-off in this generality. However, if we assume that the support of $\\nu$ is bounded away for $0$, then everything works. The proof closely follows that of B. Hough and Y. 
Jiang in \\cite{hough2017cut} except for some computations, which we first treat separately.\n\n\\begin{lem}\\label{lem:boundsformixedrotations}\nFor any $0 < \\tau\\leqslant 4$ and any $N\\geqslant \\tau + C(\\tau)$,\n\\begin{equation*}\n\\varphi_{N-\\tau}(n)^{N\\ln(N)\/\\tau}\\leqslant d_{n}^{-1}.\n\\end{equation*}\nMoreover, for any $N\\geqslant 3$ and any $\\lambda > 0$,\n\\begin{equation*}\n\\sum_{n=1}^{+\\infty}d_{n}^{-\\lambda\/\\ln(N)} \\leqslant \\frac{e^{-\\lambda\/2}}{1-e^{-\\lambda\/2}}.\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nSetting $k = N\\ln(N)\/\\tau$, we have by Lemma \\ref{lem:hardlowerbound} and the bounds of Lemma \\ref{lem:encadrement} and Lemma \\ref{lem:variousbounds},\n\\begin{align*}\n\\varphi_{N-\\tau}(n)^{k} & = \\left(\\frac{u_{n}(N-\\tau)}{u_{n}(N)}\\right)^{k} \\leqslant \\left(\\frac{q(N)^{n-1}}{q(N-\\tau)^{n}}\\frac{1}{N(1-q(N-\\tau)^{2})}\\right)^{k} \\\\\n& \\leqslant \\left(\\frac{q(N)}{q(N-\\tau)}\\right)^{k(n-1)}\\left(\\frac{1}{Nq(N-\\tau)(1-q(N-\\tau)^{2})}\\right)^{k} \\\\\n& \\leqslant \\frac{1}{N}\\left(1-\\frac{\\tau}{N}\\right)^{k(n-1)} \\leqslant \\frac{1}{N^{n}}\\leqslant \\frac{1}{d_{n}}.\n\\end{align*}\nFor the second inequality, we use again Lemma \\ref{lem:encadrement} to get\n\\begin{equation*}\nd_{n}^{-\\lambda\/\\ln(N)} \\leqslant \\left(\\frac{q(N)^{n-1}}{N}\\right)^{\\lambda\/\\ln(N)} = e^{-\\lambda}q(N)^{(n-1)\\lambda\/\\ln(N)}.\n\\end{equation*}\nUsing $\\sqrt{x}\\leqslant 1\/2 + x\/2$, we see that $q(N) \\leqslant 2\/N$ for all $N\\geqslant 4$, so that the right-hand side of the above inequality is bounded by\n\\begin{equation*}\ne^{-\\lambda}\\exp\\left((n-1)\\lambda\\left(\\frac{\\ln(2)}{\\ln(N)} - 1\\right)\\right) \\leqslant e^{-\\lambda(n+1)\/2},\n\\end{equation*}\nfrom which the result follows.\n\\end{proof}\n\nWe are now ready for the proof of the cut-off phenomenon. For convenience, we will rather consider a measure $\\mu$ on the interval $[0, 4]$ and set\n\\begin{equation*}\n\\varphi_{\\mu} = \\int_{0}^{4}\\varphi_{N-\\tau}\\mathrm{d}\\mu(\\tau).\n\\end{equation*}\n\n\\begin{thm}\\label{thm:mixedrotations}\nLet $\\mu$ be a measure on $[0, 4]$ such that there exists $\\delta > 0$ satisfying $\\mu([\\delta, 4]) = 1$ and set $\\eta = \\int\\tau\\mathrm{d}\\mu$. Then, for any $N\\geqslant \\max_{\\tau\\in [\\delta, 4]}(\\tau + C(\\tau))$, the random walk associated to $\\varphi_{\\mu}$ has a cut-off at $N\\ln(N)\/\\eta$ steps.\n\\end{thm}\n\n\\begin{proof}\nThe proof closely follows the argument of \\cite[Sec 4]{hough2017cut} and we first treat the upper bound. Let us set $k = N\\ln(N)\/\\eta + cN$. We start by the straightforward inequality\n\\begin{equation*}\n\\|\\varphi_{\\mu}^{\\ast k} - h\\|_{TV}\\leqslant \\int_{[\\delta, 4]^{k}}\\|\\varphi_{N-\\tau_{k}}\\ast\\cdots\\ast\\varphi_{N-\\tau_{1}} - h\\|_{TV}\\;\\mathrm{d}\\mu(\\tau_{1})\\cdots\\mathrm{d}\\mu(\\tau_{k})\n\\end{equation*}\nand set $E = \\{(\\tau_{1}, \\cdots, \\tau_{k})\\in [\\delta, 4]^{k}\\mid \\sum_{i=1}^{k}\\tau_{i}\\leqslant N\\ln(N) + c\\eta N\/2\\}$. Consider the random variable $X = \\sum_{i=1}^{k}\\tau_{i}$, which has expectation $k\\eta$ under $\\mu^{\\otimes k}$. 
The measurable set $E$ corresponds to the event\n\\begin{equation*}\nX\\leqslant \\mathbb{E}(X)\\left(1-\\frac{c\\eta}{2(\\ln(N) + c\\eta)}\\right)\n\\end{equation*}\nso that by Hoeffding's inequality (using the fact that $0\\leqslant \\tau\\leqslant 4$),\n\\begin{equation*}\n\\mu^{\\otimes k}(E)\\leqslant \\exp\\left(-\\frac{2k}{16}\\left(\\frac{c\\eta}{2(\\ln(N) + c\\eta)}\\right)^{2}\\right) = \\exp\\left(-\\frac{c^{2}\\eta N}{32(\\ln(N)+c)}\\right).\n\\end{equation*}\nThe function $x\\mapsto x\/(\\ln(x)+c)$ is increasing as soon as $x\\geqslant e\\geqslant e^{1-c}$. In particular, for $N\\geqslant 3$ it can be bounded below by $3\/(\\ln(3)+c)$. Moreover, $c^{2}\/(\\ln(3)+c) > c-\\ln(3)$ so that\n\\begin{equation*}\n\\mu^{\\otimes k}(E)\\leqslant 3^{\\eta\/32}e^{-c\\eta\/32}\\leqslant 3^{1\/8}e^{-\\eta c\/32}.\n\\end{equation*}\nWe still have to bound the integral on the complement of $E$. To do this, we apply Lemma \\ref{lem:upperbound} and Lemma \\ref{lem:boundsformixedrotations} to the integrand (recall that $\\tau_{i} \\geqslant \\delta$ for all $i$), which is therefore less than\n\\begin{equation*}\n\\frac{1}{2}\\sqrt{\\sum_{n=1}^{+\\infty}d_{n}^{2}\\prod_{i=1}^{k}\\varphi_{N-\\tau_{i}}(n)^{2}}\\leqslant \\frac{1}{2}\\sqrt{\\sum_{n=1}^{+\\infty}\\exp\\left(2\\ln(d_{n})\\left(1-\\frac{1}{N\\ln(N)}\\sum_{i=1}^{k}\\tau_{i}\\right)\\right)}.\n\\end{equation*}\nBy definition of the complement of $E$, each term is bounded by $\\exp(-\\ln(d_{n})c\\eta\/\\ln(N))$ and by Lemma \\ref{lem:boundsformixedrotations} we conclude that\n\\begin{equation*}\n\\|\\varphi_{\\mu}^{\\ast k} - h\\|_{TV}\\leqslant 3^{1\/8}e^{-\\eta c\/32} + \\frac{e^{-c\\eta\/4}}{2\\sqrt{1-e^{-c\\eta\/2}}}.\n\\end{equation*}\n\nFor the lower bound, we proceed as in Proposition \\ref{prop:lowercutoff} and all that is needed is estimates of the mean and variance of $\\chi_{1}$. Noticing that\n\\begin{equation*}\n\\varphi_{\\mu}(1) = \\frac{1}{N}\\int_{0}^{4}\\chi_{1}(N-\\tau)\\mathrm{d}\\mu = 1 - \\frac{\\eta}{N},\n\\end{equation*}\nwe get $\\varphi_{\\mu}^{\\ast k}(\\chi_{1}) = d_{1}\\varphi_{\\mu}(1)^{k} = N(1-\\eta\/N)^{k}$ and as before we conclude that this is greater than or equal to $e^{\\eta c}\/5$ for any $N\\geqslant 5$. As for the variance, it follows from Popoviciu's inequality (see \\cite[Thm 2]{bhatia2000better} for an operator algebraic statement and proof), that since $-2 < \\chi_{1} < 2$,\n\\begin{equation*}\n\\var_{\\psi}(\\chi_{1})\\leqslant (2 - (-2))^{2}\/4 = 4\n\\end{equation*}\nfor any state $\\psi$. Applying this to $\\varphi_{\\mu}^{\\ast k}$ and using the same argument as in Proposition \\ref{prop:lowercutoff} then yields\n\\begin{equation*}\n\\|\\varphi_{\\mu}^{\\ast (N\\ln(N)\/4 - cN)} - h\\|_{TV} \\geqslant 1 - 500e^{-2\\eta c}.\n\\end{equation*}\n\\end{proof}\n\nExtending the previous result seems impossible with the techniques of the present work since it is clear that our estimates for fixed $\\tau$ can only be valid for $N$ larger than a function of $\\tau$ going to infinity as $\\tau$ goes to $0$.\n\n\\section{Further examples}\\label{sec:further}\n\nIn this section we will consider random walks on other compact quantum groups which were also introduced by S. Wang in \\cite{wang1998quantum} and called \\emph{free symmetric quantum groups}. 
As before, we define them through a universal algebra :\n\n\\begin{de}\nLet $\\O(S_{N}^{+})$ be the universal $*$-algebra generated by $N^{2}$ \\emph{self-adjoint} elements $u_{ij}$ such that for all $1\\leqslant i, j \\leqslant N$,\n\\begin{equation*}\nu_{ij}^{2} = u_{ij} \\text{ and } \\displaystyle\\sum_{k=1}^{N}u_{ik} = 1 = \\displaystyle\\sum_{k=1}^{N}u_{kj}.\n\\end{equation*}\nThe formula\n\\begin{equation*}\n\\Delta(u_{ij}) = \\sum_{k=1}^{N}u_{ik}\\otimes u_{kj}\n\\end{equation*}\nextends to a $*$-algebra homomorphism $\\Delta : \\O(S_{N}^{+})\\to \\O(S_{N}^{+})\\otimes \\O(S_{N}^{+})$ and this can be completed into a compact quantum group structure.\n\\end{de}\n\nAs the name and notation suggest, the abelianization of $\\O(S_{N}^{+})$ is exactly $\\O(S_{N})$ and the two even coincide for $N\\leqslant 3$. However, as soon as $N\\geqslant 4$ the compact quantum group $S_{N}^{+}$ is infinite (in the sense that the algebra $\\O(S_{N}^{+})$ is infinite-dimensional) and therefore behaves very differently from the classical symmetric group. This will raise an analytic issue in the sequel. Let us now describe the representation theory of $S_{N}^{+}$, which is quite close to that of $O_{N}^{+}$. The irreducible representations are still labelled by nonnegative integers but this time the recursion relation for characters is\n\\begin{equation}\\label{eq:recursionsymetric}\n\\chi_{1}\\chi_{n} = \\chi_{n+1} + \\chi_{n} + \\chi_{n-1}.\n\\end{equation}\nTo translate this into an explicit isomorphism with $\\mathbb{C}[X]$, first note that keeping the notations of Section \\ref{sec:orthogonal}, $u_{2n}(X)$ has only even powers of $X$ for any $n$. Thus, $v_{n}(X) = u_{2n}(\\sqrt{X})$ is a polynomial in $X$ and it is easily checked that this new sequence satisfies the above recursion relation. Once again, one has $d_{n} = v_{n}(N) = u_{2n}(\\sqrt{N})$.\n\n\\subsection{Pure state random walks on free symmetric quantum groups}\n\nAs for free orthogonal quantum groups, we can study pure state random walks. In view of the link between the polynomials $u_{n}$ and $v_{n}$, estimates of the total variation distance for the random walk associated to a pure state on $S_{N}^{+}$ can be easily deduced from Proposition \\ref{prop:upperboundlessthantwo} and Theorem \\ref{thm:estimategreaterthan2}. We will therefore simply give the statements, starting with the case of small $t$.\n\n\\begin{prop}\nLet $\\vert t\\vert < 4$ be fixed. Then, for any $k\\geqslant 2$,\n\\begin{equation*}\n\\|\\varphi_{t}^{\\ast k} - h\\|_{TV}\\leqslant \\frac{1}{2}\\sqrt{\\frac{N}{q(\\sqrt{N})^{2}(1-q(\\sqrt{N})^{4})}}\\left(\\frac{q(\\sqrt{N})}{N\\sqrt{1 - t^{2}\/4}}\\right)^{k}\n\\end{equation*}\nIn particular, if $t<2\\sqrt{1-\\left(\\frac{q(\\sqrt{N})}{N}\\right)^{2}}$ then the random walks converges exponentially.\n\\end{prop}\n\nOne can also get a lower bound like in Proposition \\ref{prop:lowerbound} : noticing that $u_{2}(X) = X^{2} - 1$ yields\n\\begin{equation*}\n\\|\\varphi^{\\ast k} - h\\|_{TV} \\geqslant \\frac{N-1}{6}\\left(\\frac{t-1}{N-1}\\right)^{k}.\n\\end{equation*}\n\nFor larger $t$, the proof is also the same as in Theorem \\ref{thm:estimategreaterthan2}.\n\n\\begin{prop}\nLet $\\vert t\\vert > 4$ and let $k_{0}$ be the smallest integer such that $q(t) > q(N)^{1-1\/k_{0}}$. 
If $k\\leqslant k_{0}$ then the state $\\varphi_{t}^{\\ast k}$ is not bounded on $L^{\\infty}(S_{N}^{+})$ and otherwise\n\\begin{equation*}\n\\|\\varphi_{t}^{\\ast k} - h\\|_{TV}\\leqslant \\frac{1}{2}\\sqrt{\\frac{N^{2}q(\\sqrt{t})^{4k_{0}}}{q(\\sqrt{t})^{4k_{0}} - q(\\sqrt{N})^{4k_{0}-4}}}\\left(\\frac{q(\\sqrt{N})}{\\sqrt{N}q(\\sqrt{t})^{2}(1-q(\\sqrt{t})^{2})}\\right)^{k}\n\\end{equation*}\n\\end{prop}\n\nThe main point in the above statement is that the threshold $k_{0}$ is the same as for $O_{N}^{+}$, so that the cut-off parameter of a uniform random walk on a conjugacy class should be given by the same formula as before. One of the simplest examples of such a random walk is the one associated to the uniform measure on the set of transpositions, or equivalently on the conjugacy class of a transposition. Since the trace of a transposition matrix is $N-2$, this is given by the state $\\varphi_{N-2}$ and the expected cut-off parameter is $N\\ln(N)\/2$. This can be proven by the same strategy as for Theorem \\ref{thm:randomrotation} but the computations are more involved.\n\n\\begin{thm}\\label{thm:randomtranspositions}\nFor any $N\\geqslant 16$, the random walk associated to $\\varphi_{N-2}$ on $S_{N}^{+}$ has a cut-off at $N\\ln(N)\/2$ steps.\n\\end{thm}\n\n\\begin{proof}\nFor the upper bound, the part concerning $B_{k}$ is the same as in the proof of Theorem \\ref{thm:randomrotation}, so let us focus on $B_{k}'$. It is enough to prove that\n\\begin{equation*}\n\\frac{\\sqrt{N}}{q(\\sqrt{N})}q(\\sqrt{N-2})^{2}\\left(1-q(\\sqrt{N-2})^{2}\\right) \\geqslant e^{2\/N}.\n\\end{equation*}\nWriting $q(t)^{2}(1-q(t)^{2}) = (3t-t^{3})q(t) + t^{2} - 2$ and expanding we get\n\\begin{align*}\nq(t)^{2}(1-q(t)^{2}) = t\\sum_{n=2}^{+\\infty}(3a_{n} - 4a_{n+1})\\left(\\frac{2}{t}\\right)^{2n-1} \\\\\n\\end{align*}\nsince $c_{n} = 4a_{n+1} - 3a_{n} = (n-5)a_{n}\/(n+1)$, the sum splits as\n\\begin{equation*}\n\\frac{1}{t^{2}} + \\frac{1}{t^{4}} + \\frac{1}{t^{6}} - t\\sum_{n=6}^{+\\infty}c_{n}\\left(\\frac{2}{t}\\right)^{2n-1}.\n\\end{equation*}\nMoreover, the same estimate as for $b_{n}$ yields $c_{n}\\leqslant (n-5)\/2n(n+1)$ and the sequence on the right-hand side is increasing up to $n=10$ and then decreasing. Its maximum is therefore $1\/44$. Using $\\sqrt{t}$ instead of $t$ and the fact that $c_{n} \\leqslant 1\/44$, we eventually get\n\\begin{equation*}\nq(\\sqrt{t})^{2}(1-q(\\sqrt{t})^{2}) = \\frac{1}{t} + \\frac{1}{t^{2}} + \\frac{1}{t^{3}} - \\sum_{n=6}^{+\\infty}2c_{n}\\left(\\frac{4}{t}\\right)^{n-1}\\geqslant \\frac{t^{2} + t + 1}{t^{3}} - \\frac{4^{5}}{22\\times t^{4}(t-4)} .\n\\end{equation*}\nOn the other hand,\n\\begin{equation*}\ne^{2\/(t+2)} \\leqslant \\sum_{k=0}^{+\\infty}\\left(\\frac{2}{t+2}\\right)^{k} = \\frac{t+2}{t} = 1+\\frac{2}{t}\n\\end{equation*}\nso that it is enough to have (noticing that $q(x)^{-1} = (x+\\sqrt{x^{2}-4})\/2$)\n\\begin{align*}\n& \\sqrt{t+2}\\frac{\\sqrt{t+2} + \\sqrt{t-2}}{2}\\left(\\frac{t^{2} + t + 1}{t^{3}} - \\frac{4^{5}}{22\\times t^{4}(t-4)}\\right) - 1 - \\frac{2}{t} \\geqslant 0.\n\\end{align*}\nTo see when this inequality holds, let us first prove that for $t\\geqslant 12$, $1\/2t^{3} \\geqslant 4^{5}\/22(t^{4}(t-4))$. Proceeding as in the proof of Lemma \\ref{lem:threefunctions}, we reduce the problem to $11t(t-4)\\geqslant 4^{5}$, i.e.\n\\begin{equation*}\nt^{2} - 4t - \\frac{4^{5}}{11} \\geqslant 0.\n\\end{equation*}\nwhich is satisfied as soon as $t$ is greater than $2 + 2\\sqrt{1+16^{2}\/11}\\leqslant 12$. 
Using this, it is now enough to check that\n\\begin{align*}\n1 + \\frac{2}{t} & \\leqslant \\left(1 + \\frac{t}{2} + \\frac{\\sqrt{t^{2}-4}}{2}\\right)\\left(\\frac{1}{t} + \\frac{1}{t^{2}} + \\frac{1}{2t^{3}}\\right) \\\\\n& = \\frac{1}{t} + \\frac{1}{t^{2}} + \\frac{1}{2t^{3}} + \\frac{1}{2} + \\frac{1}{2t} + \\frac{1}{4t^{2}} + \\frac{\\sqrt{t^{2}-4}}{2t} + \\frac{\\sqrt{t^{2}-4}}{2t^{2}} + \\frac{\\sqrt{t^{2}-4}}{4t^{3}}.\n\\end{align*}\nWe will prove the stronger inequality obtained by removing the terms with $t^{3}$ at the denominator in the right-hand side. After simplifying and multiplying by $2t^{2}$ we get the inequality\n\\begin{equation*}\nt^{2} + t \\leqslant \\frac{5}{2} + t\\sqrt{t^{2}-4} + \\sqrt{t^{2}-4}.\n\\end{equation*}\nNow, the function $f: t\\mapsto (t+1)(t-\\sqrt{t^{2}-4})$ satisfies\n\\begin{equation*}\nf'(t) = t-\\sqrt{t^{2}-4} + (t+1)\\left(1-\\frac{t}{\\sqrt{t^{2}-4}}\\right) = \\frac{t-\\sqrt{t^{2}-4}}{\\sqrt{t^{2} - 4}}\\left(\\sqrt{t^{2}-4} - (t+1)\\right).\n\\end{equation*}\nThis is negative, thus $f$ is decreasing and for $t\\geqslant 14$ it is smaller than $f(14) \\approx 2.15 < 5\/2$.\n\nConcerning the lower bound, first note that the expectation and variance of $\\chi_{1}$ with respect to $h$ are respectively equal to $0$ and $1$. Moreover, by the same argument as for $O_{N}^{+}$,\n\\begin{equation*}\n\\varphi_{N-2}^{\\ast k}(\\chi_{1}) = \\frac{(N-3)^{k}}{(N-1)^{k-1}} \\geqslant \\frac{e^{2c}}{5}\n\\end{equation*}\nfor $k = N\\ln(N)\/2 - cN$ and the variance can be bounded independently from $N$ by Popoviciu's inequality.\n\\end{proof}\n\n\\begin{rem}\nPlotting the function appearing in the study of the upper bound suggests that it is positive as soon as $t\\geqslant 12$, which would give a cut-off for all $N\\geqslant 14$. This indicates that even though they look loose, our estimates are close to optimal.\n\\end{rem}\n\nThe cut-off parameter is the same as in the classical case (see \\cite{diaconis1981generating}). We can even consider the conjugacy class of $m$-cycles for any integer $m$ and, for $N$ large enough, the cut-off will appear at $N\\ln(N)\/m$ steps, again as in the classical case \\cite{hough2016random}.\n\n\\subsection{Mixed states and transition operators}\n\nThere are many examples of mixed states on $S_{N}^{+}$ coming from classical random walks. However, their study in the quantum case is prevented by the fact that $S_{N}^{+}$ is not amenable for $N\\geqslant 5$, a phenomenon which was alluded to for $O_{N}^{+}$ in Subsection \\ref{subsec:mixedrotations}. We will now illustrate this in more detail on a simple example with random transpositions as follows : assume you have a deck of $N$ cards and spread them on a table. Randomly select one card uniformly (i.e. with probability $1\/N$ for each card) and then select another one in the same way. If the same card has been selected twice, nothing is done. Otherwise, the two cards are swapped. This corresponds to the measure on $S_{N}$ giving probability $2\/N^{2}$ to each transposition and $1\/N$ to the identity. Since transpositions form a conjugacy class, the measure can be restated as being\n\\begin{equation*}\n\\mu_{\\text{rt}} = \\frac{N-1}{N}\\mu_{\\text{tran}} + \\frac{1}{N}\\delta_{\\id},\n\\end{equation*}\nwhere $\\mu_{\\text{tran}}$ is the uniform measure on the set of transpositions. 
The equation above directly gives the state on $\\O(S_{N}^{+})$ corresponding to \"random quantum transposition\" :\n\\begin{equation*}\n\\varphi_{\\text{rt}} = \\frac{N-1}{N}\\varphi_{N-2} + \\frac{1}{N}\\varphi_{N}.\n\\end{equation*}\nThe state $\\varphi_{N-2}^{\\ast k}$ is bounded on $L^{\\infty}(S_{N}^{+})$ for $k$ large enough but not $\\varphi_{N}^{\\ast k}$ since it is the co-unit and $S_{N}^{+}$ is not co-amenable for $N\\geqslant 5$. This implies that no convolution power of $\\varphi_{\\text{rt}}$ is bounded on $L^{\\infty}(S_{N}^{+})$ so that the total variation distance is never defined (it is clear that the sum in the upper bound lemma diverges since each term is greater than $N^{-k}$). This is in sharp contrast with the classical case (a finite quantum group is always amenable).\n\nHowever, it is known (see for instance \\cite[Lem 3.4]{brannan2011approximation}) that the associated transition operator $P_{\\varphi_{\\text{rt}}} = (\\id\\otimes \\varphi_{\\text{rt}})\\circ\\Delta$ always extends to a bounded linear map on $L^{\\infty}(\\mathbb{G})$. We can therefore compare it with $P_{h}$ using operator norms. In particular, we can see them as operators on $L^{2}(S_{N}^{+})$ and the corresponding norm is then easy to compute :\n\n\\begin{lem}\nLet $\\psi$ be any central linear form on a compact quantum group $\\mathbb{G}$. Then,\n\\begin{equation*}\n\\|P_{\\psi}\\|_{B(L^{2}(\\mathbb{G}))} = \\sup_{\\alpha\\in \\Irr(\\mathbb{G})}\\vert \\psi(\\alpha)\\vert.\n\\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nBy Woronowicz' Peter-Weyl theorem, the elements $u^{\\alpha}_{ij}$ form an orthogonal basis of $L^{2}(\\mathbb{G})$. Moreover, a straightforward calculation yields\n\\begin{equation*}\nP_{\\psi}(u^{\\alpha}_{ij}) = \\psi(\\alpha)u^{\\alpha}_{ij}\n\\end{equation*}\nso that $P_{\\psi}$ is diagonal in this basis and the result follows.\n\\end{proof}\n\nSince $P_{h}$ is the projection onto the linear span of $1$, the above Lemma means that the distance in operator norm is exactly given by the spectral gap of the operator $P_{\\varphi}$. In this setting it is not very difficult to prove that there is a cut-off phenomenon.\n\n\\begin{prop}\nThe random walk associated to $\\varphi_{\\text{rt}}$ has a cut-off in the $L^{2}$-operator norm at $k = N\/2$ steps.\n\\end{prop}\n\n\\begin{proof}\nWe first show that the supremum of\n\\begin{equation*}\n\\left(\\frac{N-1}{N}\\frac{v_{n}(N-2)}{v_{n}(N)} + \\frac{1}{N}\\right)^{k}\n\\end{equation*}\nis attained at $n = 1$. Let us set, for $n\\geqslant 1$, $a_{n}(t) = u_{n+1}(t)\/u_{n}(t)$. The recursion relation \\eqref{eq:recursionsymetric} implies\n\\begin{equation*}\na_{n+1}(t) = t - \\frac{1}{a_{n}(t)}\n\\end{equation*}\nfrom which it follows by induction that for all $n$, $t-1\/t \\leqslant a_{n}(t)\\leqslant t$. Using this, we see that\n\\begin{equation*}\n\\frac{u_{n+1}(\\sqrt{N-2})}{u_{n+1}(\\sqrt{N})}\\frac{u_{n}(\\sqrt{N})}{u_{n}(\\sqrt{N-2})} = \\frac{a_{n}(\\sqrt{N-2})}{a_{n}(\\sqrt{N})} \\leqslant \\frac{\\sqrt{N(N-2)}}{N-1} < 1.\n\\end{equation*}\nThus, the sequence $u_{n}(\\sqrt{N-2})\/u_{n}(\\sqrt{N})$ is decreasing and the claim is proved. 
As a consequence,\n\\begin{equation*}\n\\|P_{\\varphi_{\\text{rt}}^{\\ast k}} - P_{h}\\|_{B(L^{2}(S_{N}^{+}))} = \\left(\\frac{N-1}{N}\\frac{N-3}{N-1} + \\frac{1}{N}1\\right)^{k} = \\left(1 - \\frac{2}{N}\\right)^{k}.\n\\end{equation*}\nBecause $1-x\\leqslant e^{-x}$, for any $c > 0$\n\\begin{equation*}\n\\|P_{\\varphi_{\\text{rt}}}^{\\ast (N\/2+cN)} - P_{h}\\|_{B(L^{2}(S_{N}^{+}))}\\leqslant e^{-1}e^{-2c},\n\\end{equation*}\nyielding the upper bound.\n\nAs for the lower bound, using an estimate already mentioned in Lemma \\ref{lem:lowerboundquantumrotation} we have for $N\\geqslant 5$\n\\begin{align*}\n\\left(1-\\frac{2}{N}\\right)^{N\/2}\\geqslant \\left(\\frac{1}{5N}\\right)^{1\/\\ln(N)} = e^{-1-\\ln(5)\/\\ln(N)} \\geqslant e^{-2}\n\\end{align*}\nso that for $c<1$,\n\\begin{equation*}\n\\|P_{\\varphi_{\\text{rt}}}^{\\ast (N\/2-cN)} - P_{h}\\|_{B(L^{2}(S_{N}^{+}))}\\geqslant e^{2c-2}\\geqslant e^{-2}(1-e^{-2c}).\n\\end{equation*}\n\\end{proof}\n\nThe cut-off in total variation distance for the classical random walk associated to $\\mu_{\\text{rt}}$ occurs at $N\\ln(N)\/2$ steps (see \\cite{diaconis1981generating}) and was one of the first important results of the theory. Since we considered a weaker norm, we get a better cut-off parameter. However, there are other norms available for operators on a von Neumann algebra which may be closer to the total variation distance and therefore yield a different cut-off parameter. In particular, since transition operators are completely positive, it would be interesting to have estimates for the \\emph{completely bounded norm} of $P_{\\varphi_{\\text{rt}}^{\\ast k}} - P_{h}$.\n\n\\subsection{Quantum automorphisms of matrices}\\label{subsec:quantumautomorphisms}\n\nAs mentioned in the end of Subsection \\ref{subsec:randomrotations}, apart from $O_{N}^{+}$ there is another quantum generalization of $SO(N)$, called the \\emph{quantum automorphism group of $(M_{N}(\\mathbb{C}), \\mathrm{tr})$}. This means that it is a universal object in the category of compact quantum groups acting on $M_{N}(\\mathbb{C})$ in a trace-preserving way. For $N=2$, this is known to be isomorphic to $SO(3)$.\n\nIt was shown in \\cite{banica1999symmetries} that the representation theory of this quantum group is the same as $S_{N}^{+}$. The only difference is that the dimensions are given by $u_{2n}(N) = v_{n}(N^{2})$. We can therefore consider the pure states $\\varphi_{t}$ as before for $0 \\leqslant t < N^{2}$ and the same arguments as in Theorem \\ref{thm:randomtranspositions} would show that the random walk associated to random rotations with a fixed angle $\\theta$ has a cut-off at $N\\ln(N)\/2(1-\\cos(\\theta))$ steps. There is however a quicker way to this. Consider the subalgebra of $\\O(O_{N}^{+})$ generated by all products $u_{ij}u_{kl}$ of two generators. Then, this is isomorphic to the Hopf algebra of the quantum automorphism group of $(M_{N}(\\mathbb{C}), \\mathrm{tr})$. The random walk can therefore be obtained by simply restricting the state to this subalgebra and as far as Lemma \\ref{lem:upperbound} is concerned this is just restricting to the sum of even terms. The upper bound for the cut-off then trivially follows from Theorem \\ref{thm:randomrotation}. 
As for the lower bound, it is a computation similar to that of Proposition \\ref{prop:lowercutoff} using $\\chi_{2}$ instead of $\\chi_{1}$.\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzipci b/data_all_eng_slimpj/shuffled/split2/finalzzipci new file mode 100644 index 0000000000000000000000000000000000000000..e27b6001cf5bae1413845656a12d06effbd501f0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzipci @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nResearch in interpretable machine learning (IML) has explored means to understand how machine learning (ML) models work since the nineties ({\\it e.g}., \\citeauthor{towell1992interpretation} \\citeyear{towell1992interpretation}). Popular methods to help understand ML models are referred to as attribution methods \\citep{olah, yeh2018, koh2017understanding}; they identify features or instances responsible for a classification.\n\nWith the exception of human-centered studies \\citep{hoffman2018metrics}, the evaluation methods being used in XAI and IML include comparison to existing methods, metrics and axioms, sensitivity analyses, gold features, and demonstration of image classification (details and references in Section Background and Related Works). The problems with these methods include that they do not indicate where current XAI approaches fail, thereby preventing consistent progress of the field. They do not measure accuracy as a way to validate correctness or to produce accountable agents ({\\it e.g}., \\citeauthor{Diakopoulos2014AlgorithmicAR} \\citeyear{Diakopoulos2014AlgorithmicAR}, \\citeauthor{kroll2016accountable} \\citeyear{kroll2016accountable}, \\citeauthor{doshivelez2017rigorous} \\citeyear{doshivelez2017rigorous}), and it is practically impossible to determine whether one XAI method is better than another or what the weaknesses of existing methods are, leaving researchers without guidance on which research questions will advance the field.\n\nThe purpose of this paper is to address these limitations of current XAI evaluation methods by proposing the use of data representing ground-truth explanations (GTE). In a variety of computer science tasks, it is a standard practice to treat some representation of data as ground truth. Ground-truth data, in its essence, is data that is verifiable and considered the most accurate, against which a new system is tested \\citep{wikilink}. Various authors agree that the lack of ground-truth for evaluating explanations is a limitation \\citep{tomsett2019sanity, hooker2019benchmark, yang2019evaluating, Montavon2019}. Consequently, we investigate the challenges in creating data representing GTEs. Our goal is to promote consistent and methodical progress of the XAI field. The scope of this paper is limited to neural networks (NN) for classification tasks.\n\nThe next section presents related methods, metrics and axioms to evaluate XAI methods. Then, we introduce how to generate three data sets representing GTEs. Next, we use the generated data to train NN models to submit to LIME \\citep{ribeiro2016should} and produce explanations while converting the GTEs into LIME's explanations format. We evaluate LIME, and analyze the evaluation, seeking support for our conclusions as a means to validate the evaluation. 
We conclude with a discussion of issues and benefits, and future work.\n\\begin{table*}[t]\n\\centering\n\\small\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{p{0.31\\textwidth}|p{0.69\\textwidth}}\n \\hline\n \\textbf{Evaluation method\/Axiom\/Metric} & \\textbf{Method proposer and\/or example authors who employed them}\\\\\n \\hline\n Sensitivity analysis & \\citet{adebayo2018sanity}\\\\\n \\hline\n Example images & \\citet{ribeiro2016should}\\\\\n \\hline\n SAT\/Model counting & \\citet{ignatiev2018, narodytska2019assessing, ignatiev2019validating} \\\\\n \\hline\n Correlation, completeness, and complexity & \\citet{cui2019integrative}\\\\\n \\hline\n Conservation, continuity & \\citet{Montavon2019}\\\\\n \\hline\n Fidelity & \\citet{alavarez2018}\\\\\n \\hline\n Gold features & \\citet{ribeiro2016should}\\\\\n \\hline\n Post-hoc accuracy & \\citet{chen2018learning, Bhatt2019BuildingHT, xie2019reparameterizable, bai2020attention}\\\\\n \\hline\n Perturbation analysis for vision & \\citet{zeiler2014visualizing}\\\\\n \\hline\n ROAR & \\citet{hooker2019benchmark}\\\\\n \\hline\n Perturbation on Time Series & \\citet{schlegel2019rigorous}\\\\\n \\hline\n Implementation invariance, sensitivity & \\citet{sundararajan2017axiomatic}\\\\\n \\hline\n Input invariance & \\citet{kindermans2019reliability}\\\\\n \\hline\n Simulated users & \\citet{ribeiro2016should}\\\\\n \\hline\n Amazon Mechanical Turk users & \\citet{ribeiro2016should, chen2018learning}\\\\\n \\hline\n In-depth Interviews & \\citet{hoffman2018metrics}\\\\\n \\hline\n\\end{tabular}\n}\n\\caption{Methods, metrics or axioms used to evaluate XAI and IML methods}\n\\label{alternativetable1}\n\\end{table*}\n\\section{Background and Related Works} \\label{BGDRW}\nTable 1 lists various methods currently used to evaluate both XAI and IML methods. The authors referenced in the table are those that first proposed the methods or that have used them within XAI and IML works. None of these methods use data representing ground-truth explanations. The closest to ground truth is the use of gold features \\citep{ribeiro2016should}, which are a set of features used in a model that are well-known to be the most important.\n\n\\citet{doshivelez2017rigorous} categorize IML evaluations as application-, human-, and functionally-grounded. The authors propose that any method should be evaluated along those three categories, one informing the other. \\citet{yang2019evaluating} are the only authors who actually present a reason against using ground truth to benchmark explanation methods, which is that of explanation quality is user-dependent. These authors propose three metrics for IML, namely, generalizability, fidelity, and persuasibility. Their fidelity metric aims to measure explanation relevance in the applied context. \\citet{gunning2019darpa} propose XAI approaches are evaluated along five categories, namely, {\\it explanation goodness}, {\\it explanation satisfaction}, {\\it mental model understanding}, {\\it user task performance}, and {\\it appropriate trust and reliance}. Considering that human-centered studies entail a lot of subjectivity, of those, only {\\it explanation goodness} seems as an objective category of explanation quality. All other categories are evaluated by humans or an external task. \\citet{tomsett2019sanity} conducted a meta evaluation of saliency methods by analyzing metrics previously proposed in the literature to evaluate saliency methods. 
To do this, they adopted psychometric tests to verify local saliency metric reliability and consistency of the saliency metrics that rely on perturbations. They conclude that saliency metrics can produce unreliable and inconsistent results, though not always.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=\\columnwidth, height=4cm]{GTE-diagram.png}} \n\\caption{{In green, this diagram shows the steps to generate data representing GTEs and to use the data to train models. In orange are the processes for aligning GTEs with LIME's format and for sending predictions for LIME to explain. In yellow, the evaluation compares the two orange processes.}}\n\\label{gted}\n\\end{figure}\n\n\\section{Generate Data Representing Ground-Truth Explanations (GTE)}\nFigure \\ref{gted} gives an overview of the entire approach from generating the data up to evaluation. We start by describing how to generate the data. We propose to generate data sets from existing processes, either natural or artificial, and identify classes from said processes. We propose to represent classes in a data set via mathematical equality or inequality constraints as a minimal canonical representation from which explanations can be aligned with the format of explanations produced by various XAI methods. We define the classes and the intervals from which feature values are populated to create instances. These intervals determine both noise and commonsense. The nature of values allowed for each feature in the generated equations will determine whether classes remain disjoint. Overlapping classes will introduce noise into the evaluations. Another consideration when defining intervals to populate feature values is commonsense. If an explanation indicates that the value of a feature is $0.3\\times 10^{-6}$, then the feature should not represent someone's age. Next, we describe the generation of three data sets. \n\n\\subsection{Generate Data Set Loan}\n\nThe process we chose is loan underwriting. This is a small data set, consisting of 54 instances, two classes (accept and reject), and three input features characterizing three well-known factors considered in loan underwriting, namely, job condition, credit score, and debt-to-income ratio. We created this data set manually to characterize realistic instances. \nThe instance space is given by the arrangement of the three features and their four possible values, given by $4 \\times 4 \\times 4 = 64$. We eliminated 10 instances from the data because they were not realistic. The data is generated with a system of two equations as follows:\n\\begin{equation}\n f(x)= \n\\begin{cases}\n 8(x_1 - 2)^2 + 3x_2^3 - x_3^4 + 4, & \\text{\\it if } x_1\\ne 2\\\\\n 3x_2^3 + x_3^4 + 12, & \\text{\\it if } x_1 = 2\n\\end{cases}\n\\end{equation}\n\\begin{equation}\n\\begin{cases}\n Accepted, & \\text{\\it if } f(x)\\ge 32\\\\\n Rejected, & \\text{\\it if } f(x)< 32\n\\end{cases}\n\\end{equation}\n\nAs stated above, we considered class overlap and commonsense when defining the allowable values for the three features. The first feature, $x_1$, corresponds to the job condition of the applicant. This feature can be populated with integer values along the interval [2, 5], where 2 represents lack of a job, and values 3, 4, and 5 represent, respectively, that the applicant has held a job for less than one year, less than 3 years, or more than 3 years. The second feature, $x_2$, refers to credit score, which assumes integer values in the interval [0, 3], corresponding to score ranges of less than 580, 580 to 650, 650 to 750, and more than 750. 
\n\n\\subsection{Generate Data Set Distance}\n\nWe adopt the equation used to calculate travel energy consumption based on travel distance. The Data Set Distance has a total of 2,600,000 instances, described through five features, and 10 classes.\nThe five variables are Trip Frequency (TF), Trip Distance (TD), Transportation Occupancy (TO), Energy Intensity (EI) and Travel Mode (m). The Data Set Distance is generated using Equation 3 for travel energy consumption based on travel distance. \n\nUsing the base equation, we created 10 unique variations with the following goals: the variations should remain realistic, and each variation is a set of operations (such as raising to an exponent or multiplying by a scalar) performed on one or more variables. Afterward, we generate the data for the base equation by creating every permutation of the four equation variables within a specified range using a truncated normal distribution. These four variables are used as features along with a fifth variable, travel mode. For each of the 10 variations, we use the set of operations on the base equation data to generate the equivalent rows for that variation.\n\\begin{equation}\n E_m = TF \\times \\frac{TD_m}{TO_m} \\times EI_m\n\\end{equation}\n\n\\subsection{Generate Data Set Time}\n\nFor Data Sets Time and Distance, we used processes from the field of energy consumption analysis that describe various realistic processes with different focuses (e.g., distance or time) and include equations with a variety of features that can receive multiple values. These characteristics facilitate the generation of large data sets, so we can create conditions similar to those faced by XAI methods in the real world. In this paper, we generate data from transportation energy consumption, which can be used to calculate travel time and travel distances related to household energy consumption.\n\nEquation 4 is the basic equation to calculate energy (E) as a function of time. The four variables are Travel Time (TT), Speed, Fuel Economy (FE) and Travel Mode (m). The Data Set Time has a total of 504,000 instances and seven classes. Each class is defined by a small tweak to the equation. Using the base equation, we create seven unique variations with the same goals and process as we did for the Distance data set.\n\n\\begin{equation}\n E = \\sum_{m=1}^{5} TT \\times Speed_m \\times FE_m\n\\end{equation}\n\n\\subsection{Train NN Models}\nThe number of models, the type of models, and how they vary among them depend on the metrics selected in the previous step. Consider, for example, that the selected metric is {\\it implementation invariance} \\cite{sundararajan2017axiomatic}. This metric requires multiple types of models. In this paper, we trained two models for the Loan and Time data sets, and one model for the Distance data set, which we summarize next (detailed architectures are given in the GitHub link given at the end of this paper).\n\n\\subsubsection{Models NN1 and NN2}\nThe changes from NN1 to NN2 for Loan and Time included the number and type of layers. Both models built for the Loan data reached 100\\% accuracy. Given the small number of 54 instances, we did not separate testing and training. For Data Set Time, NN1 reached 97\\% accuracy and NN2 96\\%.
The generated instances were split into 403,200 for training and 100,800 for testing. The accuracy obtained for Data Set Distance was much lower, 82\\%. This is likely due to noise from class overlaps that occurred during the data set generation. \n\n\\subsection{LIME Explains Predictions}\nAs depicted in the diagram in Figure \\ref{gted}, after training the models, the next steps can be concurrent. This section describes the step where LIME explains the predictions from the models. First, let us briefly review how LIME works. Local Interpretable Model-agnostic Explanations (LIME) is a feature attribution method formally described in \\citet{ribeiro2016should}. LIME assumes that the behavior of a sample point ({\\it e.g}., instance) can be explained by fitting a linear regression with the point (to be explained) and the point's close neighbors. LIME perturbs the values of the point to be explained and submits the perturbed points to the model to obtain their predictions, thus creating its own data set. Next, LIME measures the cosine similarity between the point to be explained and the points generated from its perturbations to select a region of points around it. LIME then utilizes a hyperparameter, {\\it number of samples (num\\_sample)}, to select the number of points it will use in the final step, which is the fitting of a linear regression. The hyperparameter {\\it num\\_sample} determines how many of the perturbed points will be used with the point to be explained to fit a linear regression. This last step produces the coefficients of the line that expresses LIME's explanation. \n\nFor Data Set Loan, we submit to LIME all the 54 instances to be explained, and models NN1 and NN2. Note that all these instances with both NN models will have correct predictions because both models reached 100\\% accuracy. The number of samples was set to 25 for the first evaluations, but we also created GTEs for 5 and 50 samples. The output we receive back from LIME consists of two sets of 54 by 3 coefficients, one coefficient for each of the three features, one set for NN1, and one set for NN2. \n\nThe Data Set Time has 100,800 testing instances, but the models did not reach 100\\% accuracy. Consequently, we randomly selected 10,000 instances that both models NN1 and NN2 predicted correctly and only submitted those 10,000 to LIME together with the two models. We made sure to select the instances from those correctly predicted because sending instances incorrectly predicted by the models would mislead LIME into producing poor explanations, and we wanted to provide LIME with the best data for a fair assessment. The output produced by LIME consists of two sets of 10,000 by 4 coefficients, accounting for the four features in this data set, one for each model NN1 and NN2. We set the number of samples hyperparameter to 1,000.\n\nThe accuracy reached by the Distance model NN1 was 82\\%. The number of testing instances was 520,000, so we randomly selected 50,000 correctly predicted instances from this data set. The data produced by LIME for NN1 is a 50,000 by 5 matrix, given the five features in this set. For number of samples, we used 5,000.
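\n\nFor concreteness, the call pattern used in this step can be sketched as follows, assuming the standard {\\it lime} Python package; the settings shown correspond to the Loan configuration described above (three features, {\\it num\\_sample} of 25), while the wrapper names and everything else are illustrative rather than the exact code we ran:\n\\begin{verbatim}\nfrom lime.lime_tabular import LimeTabularExplainer\n\n# X_train: the 54 x 3 Loan feature matrix; model: a trained NN exposing\n# predict_proba, as LIME requires for classification tasks.\nexplainer = LimeTabularExplainer(\n    X_train,\n    feature_names=['job_condition', 'credit_score', 'debt_to_income'],\n    class_names=['Rejected', 'Accepted'],\n    discretize_continuous=False)\n\nexp = explainer.explain_instance(\n    X_train[0],            # the point to be explained\n    model.predict_proba,   # prediction function of NN1 (or NN2)\n    num_features=3,        # all features are kept in the explanation\n    num_samples=25)        # the num_sample hyperparameter used for Loan\n\ncoefficients = dict(exp.as_list())  # feature -> local (Ridge) coefficient\n\\end{verbatim}\nRepeating this call for every selected instance and stacking the coefficients yields the matrices of explanation coefficients referred to below.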
\n\n\\subsection{GTE Data Aligns with LIME Explanation Format}\n\nThis step is represented in Figure \\ref{gted}, in orange, and is concurrent with the step ``LIME Explains Predictions''. As already noted, the data we produce representing GTEs are a specific way to represent explanations. We can only use them to evaluate a target XAI method after the data representing GTEs are in the same format or have been processed under the same conditions. As a general rule of thumb, this conversion may require taking the ground-truth data and executing the last steps of the target method.\n\nFor LIME, an explanation consists of a line fitted from a target point whose prediction we want to explain and the points (their number given by the hyperparameter {\\it num\\_sample}) that are the closest to the target based on cosine similarity. Ultimately, this means that evaluating LIME amounts to determining how realistic the {\\it num\\_sample} perturbed points that are closest to the target (based on cosine similarity) are. Consequently, we take the points from the data representing GTEs and execute this same process, namely, we measure the cosine similarity to each target point to be explained and then fit linear functions using the same Ridge regression method and regression parameters used in LIME, with the same number of samples as established by the {\\it num\\_sample} hyperparameter. The result is that, for each data set, we have matrices with the same number of coefficients (one per feature per instance) as produced by each model. \n\n\\section{Evaluation Measures Compare LIME Explanations Against GTE}\n\\subsection{Propose Evaluation Measures}\n\\subsubsection{Euclidean distance (ED)} We adopt the {\\it Euclidean distance} (ED), a natural choice for measuring how far apart two points are in an $n$-dimensional space. For this reason, we compute, for each instance of each data set and NN model, the ED between the point described in the GTE data and the point described through LIME's explanation coefficients. The range of the ED is [0, +$\\infty$); however, we normalize the ED using the maximum and minimum values obtained for each data set and parameter setting. The goal is to keep the ED's results within the interval [0, 1] for better visualization. The purpose of computing the ED between the GTE data and LIME's explanation coefficients is to measure {\\it accuracy} as a measure of {\\it explanation goodness}. Because the ED is a distance, it varies in the opposite direction of quality. For this reason, we will later compute its mathematical complement, the {\\it Complement of ED}, which we denote as {\\it C-of-ED}.\n\n\\subsubsection{Implementation Invariance} \\citet{sundararajan2017axiomatic} proposed that explanation methods should produce identical attributions for networks that produce the same outputs while receiving the same inputs, which are referred to as functionally equivalent. This is why we created models with different architectures for two of our data sets.\n\n\\subsubsection{Measures of Order} We propose to use the order of the explanation coefficients as measures of {\\it accuracy} or {\\it explanation goodness}. In LIME \\citep{ribeiro2016should}, the explanation coefficients assign importance to each feature in the sense that the feature that is assigned the highest coefficient is the most important feature in the explanation. This is related to the use of gold features to evaluate explanations, as proposed by \\citet{ribeiro2016should}. When gold features are used, the evaluation often targets whether or not a feature is included in an explanation. In the studies in this paper, we do not discuss feature inclusion because our data sets have three, four, and five features each.
At these small numbers of features, LIME includes all of them; this way we do not have to evaluate whether a feature is present, but how important it is considered. Note that the order of features is particularly proposed to evaluate LIME given the format LIME presents its explanations, although this would be an important aspect to consider when evaluating any explanation method. \n\nWe define two evaluation measures of order: {\\it Second Correct} and {\\it All Correct}. Respectively, {\\it Second Correct} indicates whether the second feature in the descending order of importance of an explanation's coefficients is correct in the sense that it is the same feature ordered as the second most important in the GTE data. Then {\\it All Correct} indicates that all the features are in the same order as the features in the GTE data. The values for these measures are counted as 1 or 0 for each instance. The comparisons include results for 100 runs, hence the values represent percentages.\n\\subsection{Comparing GTEs to LIME Explanations}\n\n\\subsubsection{NN1 vs. NN2}\nWe start by comparing ED across the two different NN architectures for data sets Loan and Time, NN1 and NN2 to assess {\\it Implementation Invariance}. Both were executed for 100 runs and thus we compute average and standard deviation across the 100 runs for each instance.\n\nWe use the parametric statistical testing t-test to measure whether the values differ significantly across the two samples NN1 and NN2. To conduct the t-test, we pose the hypothesis that the true difference between NN1 and NN2 is zero. The t-test determines that for $p$-values greater than 0.1, we cannot reject the hypothesis that the difference is zero between the samples. The $p$-values computed for Loan and Time data set are respectively, 0.979 and 0.661. These resulting $p$-values show that for both Loan and Time, the differences between NN1 and NN2 are not statistically significant. \\citet{sundararajan2017axiomatic} suggests that explanation methods should satisfy {\\it Implementation Invariance} for functionally equivalent NNs. This means that their explanations ought to be the same. As far as the t-test shows, the explanation coefficients are not significantly different, so at this level of specificity, they satisfy {\\it Implementation Invariance}. Given these results, we will use only NN1 for the remainder of the studies.\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=\\columnwidth, height=6cm]{CofEDvsMeasuresOfOrder.png}} \n\\caption{C-of-ED (red), Second Correct (green), and All Correct (black) for data sets Loan, Time, and Distance}\n\\label{CofEDvs}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{p{0.1\\textwidth} |c|c|c c}\n \\hline\n Data Set & Ave. C-of-ED & Ave. Second & Ave. All \\\\\n \\hline\n Loan & 0.47 & 0.32 & \\textbf{0.179}\\\\\n \\hline\n Time & 0.76 & 0.03 & 0.0008\\\\\n \\hline\n Distance & \\textbf{0.82} & \\textbf{2.88} & 0.08\\\\ \n\\end{tabular}\n\\caption{Averages (Ave.) obtained for C-of-ED, {\\it Second correct}, and {\\it All correct} for all 100 runs and all instances for NN1 for the three data sets}\n\\label{5-25-50}\n\\end{table}\n\n\\subsubsection{Comparing C-of-ED against Second Correct and All Correct}\nNow we compare the measures of order {\\it All Correct} and {\\it Second Correct} against the complement of the ED, {\\it C-of-ED}. We use the complement given that measures of order are in the opposite direction of ED. 
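\n\nConcretely, given the GTE coefficient matrix and the LIME coefficient matrix for one data set and one model, all three measures can be computed as in the following sketch (the array names are ours, the normalization is the min--max scheme described above, and the ordering follows the text's convention of ranking features by their coefficients):\n\\begin{verbatim}\nimport numpy as np\n\ndef evaluate(gte, lime_c):\n    # gte, lime_c: (n_instances x n_features) explanation coefficients\n    ed = np.linalg.norm(gte - lime_c, axis=1)          # ED per instance\n    ed_norm = (ed - ed.min()) / (ed.max() - ed.min())  # normalized to [0, 1]\n    c_of_ed = 1.0 - ed_norm                            # Complement of ED\n\n    order_gte = np.argsort(-gte, axis=1)     # features by descending coefficient\n    order_lime = np.argsort(-lime_c, axis=1)\n    second_correct = (order_gte[:, 1] == order_lime[:, 1]).astype(float)\n    all_correct = (order_gte == order_lime).all(axis=1).astype(float)\n    return c_of_ed, second_correct, all_correct\n\\end{verbatim}\nAveraging the last two quantities over the 100 runs gives the percentages reported for {\\it Second Correct} and {\\it All Correct}.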
Figure \\ref{CofEDvs} shows the three measures for Loan, Time, and Distance. \n\nThe visual inspection suggests multiple ideas. First, if we look at the red line for {\\it C-of-ED}, it shows that the quality of LIME's explanations seem to increase with larger values of the number of samples hyperparameter. Note that the charts for Time and Distance show only 100 samples because showing all 10,000 and 50,000 would make them indecipherable. For this reason, we include Table \\ref{5-25-50} with the averages for all instances to help the interpretation of the charts. The averages for Loan, Time, and Distance are, respectively, 0.47, 0.76, and 0.82. Recall the numbers set for number of samples submitted to LIME for these respective data sets were respectively, 25, 1,000, and 5,000. This seems reasonable as it allows LIME more chances to populate the region of the instance to be explained, thus increasing its chances of success.\n\nSecond, the measure {\\it All Correct} (black line) in Figure \\ref{CofEDvs} represents the number of times an instance has all its feature coefficients in the same order as GTE's coefficients. It is not surprising that this is the line further down with respect to the $y$-axis as it is the more demanding than {\\it Second Correct} (in green). \n\nThird, even with limited samples in the charts for Time and Distance, we see that the quality of LIME's explanations vary. This deserves a more detailed analysis. With the small data set Loan, we can see, for example, all three measures agree that Instance 50 is lower in quality than Instance 49. But before we examine the numbers and explore potential reasons for LIME having more difficulty to explain some instances over others, let's scrutinize these measures.\n\n\\subsection{Validating Evaluation Measures}\nIn this section, we investigate whether we have any evidence to support these results by further analyzing what these measures above can tell us about LIME explanations. To do this, we now focus on the Data Set Loan because its small scale allows us to conduct a detailed and comprehensive analysis. Above when we described the experimental design for the Loan data, we mentioned we selected the parameter number of samples to be 25. We now expand the results for two more number of sample values, 5 and 50.\n\nFigure \\ref{ed-5-25-50} shows the measures {\\it C-of-ED}, {\\it Second correct}, and {\\it All correct} for the different hyperparameter number of samples used with the Loan data. We kept the colors we used in earlier charts, making lighter hues for number of samples 5, darker for 50, and kept an intermediary tone for 25. For C-of-ED, the average at 5 number of samples is the highest, 0.60 against 0.47 and 0.41 for 25, and 50. For {\\it Second correct}, the highest average is 0.35, obtained with 5 and 50 number of samples, against 0.32 with 25. For {\\it All correct}, the highest is again at 5 number of samples with 0.22 against 0.18 and 0.16 for 25 and 50. \n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth, height=6cm]{ED2ndAllLoan5-25-50.PNG}} \n\\caption{Comparison of C-of-ED (top), Second Correct (middle), and All Correct (bottom) for Data Set Loan and number of samples 5, 25, and 50} \n\\label{ed-5-25-50}\n\\end{figure}\n\nThe first observation is that these results do not match the conclusion above that higher number of samples lead to more accurate explanations although this observation makes sense technically. 
Thorough examination of the results for every instance reveals that, with 5 samples, the data representing GTE has a very high proportion of coefficients that are zero. The exact numbers are 18 zeros for coefficient $x_1$, 23 for $x_2$, and 27 for $x_3$, corresponding to 33, 43, and 50\\% of the 54 instances. These high numbers of zeros can be explained by the low number of samples, which makes it hard to fit the linear regression and thus returns zeros. We then examined the number of zeros in the coefficients produced by LIME and noted that, in the 100 runs of 54 instances, the total number of zeros is 26, 27, and 29, respectively for $x_1$, $x_2$, and $x_3$, representing averages of 2.6, 2.7, and 2.9 for all 54 instances (these are around 5\\%). Consequently, given that LIME coefficients do not have such an abundance of zeros, ED will artificially show better results at 5 samples because these distances will be shorter. The distance between a (positive or negative) number and zero will generally be shorter than the distance between two numbers that can each be positive or negative. The zeros also cause problems in computing the measures of order.\n\nTwo observations can be made from the identification of these high volumes of zeros. First, the evaluations with 5 samples for the Loan data are artificial. The measures report them as good, but the numbers are artificial; they do not originate from better explanations. Consequently, we do not have any reason to question that higher numbers of samples lead to better-quality explanations.\n\nSecond, these artificially produced numbers do indicate better quality, and all the proposed measures reflected them. This supports the soundness of the proposed measures.\n\nFinally, these studies suggest that the best quality of explanations from LIME for the Loan data should be obtained when using 50 samples, but the measures do not show this with consistency. Consider that 50 samples is almost the total number of instances in the Data Set Loan. With both the data representing GTEs and LIME using 50 samples, what would be the cause of the difference in the coefficients? If we could tell LIME the range and precision of the allowable values to use in the perturbations, then, with only three features and a NN with 100\\% accuracy, LIME would only generate perturbations that matched the actual data set; given that we used the same cosine similarity and the same Ridge regression with the same parameters, LIME's perturbed points would all be actual instances. When using 50 samples, they would be 50 out of the 54 instances, exactly like the data representing GTEs. Consequently, the only piece of information separating LIME from better ({\\it i.e}., more accurate) explanations is not knowing the range and precision of allowable values. In practice, for a real-world model that needs explanation, there is nothing preventing us from asking for the actual values allowed in the data to create more accurate perturbations. This demonstrates how the use of data representing ground-truth explanations can lead to analyses that will improve existing XAI methods. \n\n\\section{Discussion and Conclusions}\nThe methodology we describe to generate data representing {\\it ground-truth explanations} (GTEs) poses many challenges. It requires the identification of a data-generation process and needs equations to define classes.
The possibility of class overlap, its benefits and limitations, and methods to avoid noise are questions for future work.\n\nThe need to align data representing GTEs with the method targeted for evaluation may pose challenges, such as the one we faced when a low value of a hyperparameter produced artificially good results. This suggests this approach may be far from being fully automated.\n\nThe proposing authors of {\\it implementation invariance} \\cite{sundararajan2017axiomatic} suggest that explanation methods should satisfy it, which means producing the same explanation as long as NNs are functionally equivalent. If we envisage an explanation in support of accountability reports, then we want to have methods that can distinguish when a different architecture leads to a different explanation. Furthermore, when computing {\\it implementation invariance}, we face the question of at which level of specificity to compare the explanations from these models. This raises the question of what it means for two explanations to be the same. The answer will differ depending on how the XAI method formats explanations.\n\nWe analyzed the results of our evaluation of LIME and showed how that analysis led us to conclusions about how LIME could be improved. Although not explicitly shown, our proposed method is measurable and verifiable, allowing the comparison between two explanation approaches. Further work examining why a method performs better on a certain type of instance, such as outlier vs. non-outlier instances, can help direct how to improve said methods. Finally, the proposed approach sheds light on how to demonstrate accountability, create benchmarks, and contribute to the progress of the field.\n\nAll data and code necessary for reproducibility are available at https:\/\/github.com\/Rosinaweber\/DataRepresentingGroundTruthExplanations\/tree\/master.\n\n\\subsubsection{Acknowledgements}\nRosina Weber and Prateek Goel are supported by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through the Biomedical Data Translator program award {\\#}OT2TR003448. Any opinions expressed in this document are those of the authors and do not necessarily reflect the views of NCATS, other Translator team members, or affiliated organizations and institutions.
\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\n\nAssuming spherically symmetric metric induced on\nthe 3-brane, the constrained effective gravitational field equations on the\nbrane could be solved, giving Reissner-Nordstr\\\"{o}m static\nblack hole solutions endowed with a braneworld parameter $b$ having character of a ``tidal'' charge,\n instead of the standard electric charge\nparameter $Q^2$ \\cite{Dad-etal:2000:}. The tidal charge can be both positive and negative, however, there are some\nindications that negative tidal charge should properly represent the\n``backreaction'' effects of the bulk space Weyl tensor on the\nbrane \\cite{Dad-etal:2000:}.\n\nThe stationary and axisymmetric solutions describing\nrotating black holes localized in the Randall-Sundrum braneworld\nwere derived in \\cite{Ali-Gum:2005:}, having the metric tensor of the\nKerr-Newman form with a tidal charge describing the 5D correction\nterm generated by the 5D Weyl tensor stresses. The tidal charge\nhas an ``electric'' character again and arises due to the 5D\ngravitational coupling between the brane and the bulk, reflected\non the brane through the ``electric'' part of the bulk Weyl tensor\n\\cite{Ali-Gum:2005:}, in analogy with the spherically\nsymmetric case \\cite{Dad-etal:2000:}.\n\nWhen both the tidal and electric charge are present the\nblack hole spacetime structure is much more complex and additional off-diagonal metric components $g_{r\\phi}$,\n $g_{rt}$ are relevant along with the standard $g_{\\phi t}$ component, \ndue to the combination of the local bulk effects and the\nrotational dragging. This distorts the event horizon which\nbecomes a stack of non-uniformly rotating null circles having\ndifferent radii at fixed $\\theta$ while going from the equatorial\nplane to the poles \\cite{Ali-Gum:2005:}. The uniformly\nrotating horizon is recovered for the rotation\nparameter $a$ small enough where Kerr-Newman form of the metric tensor is allowed describing charged and slowly rotating black holes \\cite{Ali-Gum:2005:}. In the absence of rotation, the metric tensor reduces\nto the Reissner-Nordstr\\\"{o}m form with correction term of non-local origin\n \\cite{Cha-etal:2001:}.\n\nHere we restrict our attention to the Kerr-Newman type of\nsolutions describing the braneworld rotating black holes with no\nelectric charge, since in astrophysically relevant situations the\nelectric charge of the black hole must be exactly zero, or very\nsmall \\cite{MTW}. Then the results obtained in analysing the\nbehaviour of test particles and photons or test fields around the\nKerr-Newman black holes could be used assuming both positive and\nnegative values of the braneworld tidal parameter $b$ (used instead of\ncharge parameter $Q^2$).\n\nThe information on the properties of strong gravitational fields in vicinity of compact objects, namely\nof black holes, is encoded into optical phenomena of different kind that enable us to make estimates of the black hole parameters, including its tidal charge, when predictions of the theoretical models are confronted with the observed data. From this point of view, the spectral profiles of accretion discs around the black holes in galactic binaries, e.g., in microquasars, are most promising \\cite{Nar-Mcl-Sha:2007:,McCli-Nar-Sha:2007:}, along with profiled spectral lines in the X-ray flux \\cite{Laor:1991:,Bao-Stu:1992:,Stu-Bao:1992:,Kar-Vok-Pol:1992:,Mat-Fab-Ros:1993:,Zak:2003:}. 
Important information could also be obtained from the quasiperiodic oscillations observed in the X-ray flux of some low-mass black hole binaries of Galactic origin \\cite{Rem-McCli:2006:ARASTRA:}, some expected intermediate black hole sources \\cite{Stroh:2007a:}, or those observed in Galactic nuclei \\cite{Asch:2004:ASTRA:,Asch:2007:}. In the case of our Galaxy centre black hole Sgr A$^*$, we could be able to measure the optical phenomena in more detailed form as compared with the other sources, since it is the nearest supermassive black hole with mass estimated to be $\\sim 4\\times 10^6 M_\\odot$ \\cite{Ghez:2005:}, enabling to measure the \"silhuette`` of the black hole and other subtle GR phenomena \\cite{Bardeen:1973:,Cun-Bar:1973:}.\n\\par\nIn the present paper, we give an introductory study of the tidal charge influence on the optical phenomena near a rotating black hole. We focus our attention to some characteristic phenomena in close vicinity of the black-hole horizon, where the effects of the tidal charge could be in principle of the same order as those of the black hole mass and spin, contrary to the case of weak lensing effects. The light escape cones are given for families of astrophysically interesting sources, namely in locally non-rotating frames, and frames related to circular geodetical motion and radially free-falling sources in section 4 \\cite{SSJ:RAGTime:2005:Proceedings}. The silhuette of the black hole is determined in section 5. Images of the accretion discs are determined in section 6 using the transfer-function method. In Section 7, time delay of hot spot radiation is determined for direct and indirect images assuming circular geodetical motion in close vicinity of the black hole horizon. In Section 8 relevance of some effects is estimated for the Galaxy centre Sgr $A^*$ supermassive black hole. Concluding remarks are presented in Section 9.\n\n\n\\section{\\label{sec:GravFielEqOnBrane}Gravitational field equations on the brane}\n\nIn the 5D warped space models of Randall and Sundrum, involving a non-compact extra dimension, the gravitational field equations in the bulk can be expressed in the form \\cite{Shi-Mae-Sas:2000:,Dad-etal:2000:}\n\n\\begin{equation}\n \\tilde{G}_{AB}=\\tilde{k}^2[-\\tilde\\Lambda g_{AB}+\\delta(\\chi)(-\\lambda g_{AB}+T_{AB})],\\label{beq1}\n\\end{equation}\nwhere the fundamental 5D Planck mass $\\tilde M_P$ enters via $\\tilde{k}^2=8\\pi\/\\tilde{M}_p^3 $, $\\lambda$ is the brane tension, and $\\tilde\\Lambda$ is the negative bulk cosmological constant. Denoting $\\chi=x^4$ as the fifth dimension coordinate , $\\chi=0$ determines location of the brane in the bulk space, at the point of $Z_2$ symmetry; $g_{AB}=\\tilde{g}_{AB}-n_A n_B$ is the induced metric on the brane, with $n_A$ being the unit vector normal to the brane.\n\\par\nThe effective gravitational field equations induced on the brane are determined by the bulk field equations (\\ref{beq1}), the Gauss - Codazzi equations and the generalised matching Israel conditions with $Z_2$-symmetry. They can be expressed as modified standard Einstein's equations containing additional terms reflecting bulk effects onto the brane \\cite{Shi-Mae-Sas:2000:}\n\n\\begin{equation}\n G_{\\mu\\nu}=-\\Lambda g_{\\mu\\nu}+k^2 T_{\\mu\\nu} + \\tilde{k}^2 S_{\\mu\\nu} -\\mathcal{E}_{\\mu\\nu},\\label{beq2}\n\\end{equation}\nwhere $k^2=8\\pi\/M_P^2$, with $M_P$ being the braneworld Planck mass. 
The relations of the energy scales and cosmological constants are given in the form\n\n\\begin{equation}\n M_P=\\sqrt{\\frac{3}{4\\pi}}\\left(\\frac{\\tilde{M}_P^2}{\\sqrt{\\lambda}}\\right)\\tilde{M}_P;\\quad \\Lambda=\\frac{4\\pi}{\\tilde{M}_P^3}\\left[\\tilde\\Lambda+\\left(\\frac{4\\pi}{3\\tilde{M}_P^3}\\right)\\lambda^2\\right].\\label{beq3}\n\\end{equation}\nLocal bulk effects on the matter are determined by the ``squared energy-momentum'' tensor $S_{\\mu\\nu}$, \nthat reads\n\\begin{equation}\n S_{\\mu\\nu}=\\frac{1}{12}T T_{\\mu\\nu}-\\frac{1}{4}T_\\mu^{\\phantom{\\mu}\\alpha}T_{\\nu\\alpha}+\\frac{1}{24}g_{\\mu\\nu}\\left(3T^{\\alpha\\beta}T_{\\alpha\\beta}-T^2\\right),\n\\end{equation}\nwhile the non-local bulk effects are given by the tensor $\\mathcal{E}_{\\mu\\nu}$ representing the bulk Weyl tensor $\\tilde{C}_{ABCD}$ projected onto the brane, whereas\n\n\\begin{equation}\n \\mathcal{E}_{AB}=\\tilde{C}_{ABCD}n^C n^D.\\label{beq4}\n\\end{equation}\n\nSymmetries of the Weyl tensor imply that $\\mathcal{E}_{[AB]}=\\mathcal{E}_A^{\\phantom{A}A}=0$ and $\\mathcal{E}_{AB}n^B=0$. Therefore, on the brane, $\\chi\\rightarrow 0$, there is $\\mathcal{E}_{AB}\\rightarrow \\mathcal{E}_{\\mu\\nu}\\delta_A^{\\phantom{A}\\mu}\\delta_B^{\\phantom{B}\\nu}$. The $\\mathcal{E}_{\\mu\\nu}$ tensor reflects influence of the non-local gravitational effects in the bulk, including the tidal (``Coulomb``) and transverse traceless (gravitational wave) imprints of the free gravitational field of the bulk.\n\\par\nWe restrict our attention to the vacuum (at both bulk and brane) solutions of the gravitational field equations on the brane. Assuming zero cosmological constant on the brane ($\\Lambda=0$) we arrive to the condition\n\n\\begin{equation}\n \\tilde\\Lambda=-\\frac{4\\pi\\lambda^2}{3\\tilde{M}_P^2}.\\label{beq5}\n\\end{equation}\nIn the absence of matter fields, there is $T_{\\mu\\nu}=0=S_{\\mu\\nu}$, i.e., we are not interested in the properties of the squared energy-momentum $S_{\\mu\\nu}$ representing local effects of the bulk. In the vacuum case, the effective gravitational field equations on the brane reduce to the form \\cite{Shi-Mae-Sas:2000:}\n\\begin{equation}\n R_{\\mu\\nu}=-\\mathcal{E}_{\\mu\\nu},\\quad R_\\mu^{\\phantom{\\mu}\\mu}=0=\\mathcal{E}_\\mu^{\\phantom{\\mu}\\mu}\\label{beq6}\n\\end{equation}\nimplying divergence constraint \\cite{Shi-Mae-Sas:2000:}\n\n\\begin{equation}\n \\nabla^\\mu\\mathcal{E}_{\\mu\\nu}=0\\label{beq7}\n\\end{equation}\nwhere $\\nabla_{\\mu}$ denotes the covariant derivative on the brane.\n\\par\nThe equation (\\ref{beq7}) represents Bianchi identities on the brane, i.e., an integrability condition for the field equations $R_{\\mu\\nu}=-\\mathcal{E}_{\\mu\\nu}$\\cite{Ali-Gum:2005:}. For stationary and axisymmetric (or static, spherically symmetric) solutions Eqs. (\\ref{beq6}) and (\\ref{beq7}) form a closed system of equations on the brane. \n\\par\nThe 4D general relativity energy-momentum tensor $T_{\\mu\\nu}$ (with $T_\\mu^{\\phantom{\\mu}\\mu}=0$) can be formally identified to the bulk Weyl term on the brane due to the correspondence \n\n\\begin{equation}\n k^2 T_{\\mu\\nu}\\quad\\leftrightarrow\\quad -\\mathcal{E}_{\\mu\\nu}.\\label{beq8}\n\\end{equation}\nThe general relativity conservation law $\\nabla^\\mu T_{\\mu\\nu}=0$ then corresponds to the constraints equation on the brane (\\ref{beq7}). This behaviour indicates that Einstein-Maxwell solutions in general relativity should correspond to braneworld vacuum solutions. 
This was indeed shown in the case of Schwarzschild (R-N) \\cite{Maa:2004:,Dad-etal:2000:} and Kerr (K-N) spacetimes \\cite{Ali-Gum:2005:}. In both of these solutions the influence of the non-local gravitational effects of the bulk on the brane is represented by a single ``braneworld'' parameter $b$. The Coulomb-like behaviour in the Newtonian potential\n\n\\begin{equation}\n \\Phi=-\\frac{M}{M^2_{P}r}+\\frac{b}{2r^2}\\label{beq9}\n\\end{equation}\ninspired the name tidal charge \\cite{Dad-etal:2000:}.\n \\par\n\n\n\n\\section{\\label{sec:NullGeo}Null geodesics in Kerr spacetime with a tidal charge}\n\n\\subsection{Geometry}\n\nFollowing the work of \\cite{Ali-Gum:2005:}, and using the standard Boyer-Lindquist coordinates ($t$, $r$, $\\theta$, $\\varphi$), we can write the line element of the Kerr black-hole (or naked-singularity) spacetime on the three-brane in the form \n\\begin{eqnarray}\n\t\\mathrm{d} s^2 &=& -\\left(1-\\frac{2Mr - b}{\\Sigma}\\right)\\mathrm{d} t^2 + \\frac{\\Sigma}{\\Delta}\\mathrm{d} r^2 + \\Sigma \\mathrm{d} \\theta^2 + \\frac{A}{\\Sigma}\\sin^2\\theta\\,\\mathrm{d}\\varphi^2 \\nonumber\\\\\n\t&&- 2a\\frac{2Mr-b}{\\Sigma}\\sin^2\\theta\\,\\mathrm{d} t\\,\\mathrm{d}\\varphi,\\label{eq1}\n\\end{eqnarray}\nwhere \n\n\\begin{eqnarray}\n\t\\Sigma &=& r^2 + a^2\\cos^2\\theta\\label{eq2}\\\\\n\t\\Delta &=& r^2 - 2Mr + a^2 +b\\label{eq3}\\\\\n\tA &=& (r^2 + a^2)^2 - a^2\\Delta\\sin^2\\theta\\label{eq4}.\n\\end{eqnarray}\nHere $M$ is the mass parameter, $a=J\/M$ is the specific angular momentum and the braneworld parameter $b$ is the \\emph{tidal charge} representing the imprint of the non-local gravitational effects of the bulk space. The metric (\\ref{eq1}) has the \nsame form as the Kerr-Newman metric, where the tidal charge is replaced by the squared electric charge, $Q^2$. \nThe stress tensor on the brane $E_{\\mu\\nu}$ takes the form \\cite{Ali-Gum:2005:}\n\n\\begin{eqnarray}\n E_t^{\\phantom{t}t}&=&-E_\\varphi^{\\phantom{\\varphi}\\varphi}=-\\frac{b}{\\Sigma^3}[\\Sigma-2(r^2+a^2)],\\\\\n E_r^{\\phantom{r}r}&=&-E_\\theta^{\\phantom{\\theta}\\theta}=-\\frac{b}{\\Sigma^2},\\\\\n E_\\varphi^{\\phantom{\\varphi}t}&=&-(r^2+a^2)\\sin^2\\theta,\\\\ \nE_t^{\\phantom{t}\\varphi}&=&-\\frac{2ba}{\\Sigma^3}(r^2+a^2)\\sin^2\\theta,\n\\end{eqnarray}\nwhich is fully analogous ($b\\rightarrow Q^2$) to the components of the energy-momentum tensor for Kerr-Newman spacetimes in Einstein's general relativity \\cite{Ali-Gum:2005:}.\n\nThe roots of $\\Delta = 0$ identify the type of braneworld Kerr spacetime. There are two possibilities: a black hole or a naked singularity. By introducing $a^2\/M^2\\rightarrow a^2$, $b\/M^2\\rightarrow b$, $r_+\/M\\rightarrow r_+$, i.e., putting $M=1$, we write the roots of $\\Delta = 0$ in the form\n\n\\begin{equation}\n r_+ = 1+\\sqrt{1-a^2-b},\\quad\\textrm{(outer horizon)}\\label{horeq2}\n\\end{equation}\nand\n\\begin{equation}\n r_- = 1-\\sqrt{1-a^2-b},\\quad\\textrm{(inner horizon)}.\\label{horeq3}\n\\end{equation}\nThe metric given by the line element (\\ref{eq1}) determines the geometry of a rotating black hole in the braneworld universe if\n\n\\begin{equation}\n \t1\\ge a^2+b.\\label{horeq1}\n\\end{equation}\nThe strong inequality refers to the case of two distinct horizons $r_+$ and $r_-$.
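\n\nAs a quick numerical illustration (with the parameter values $a=0.9$ and $b=-1.0$ that are also used for the figures in the following sections), Eqs. (\\ref{horeq2}) and (\\ref{horeq3}) give\n\\begin{displaymath}\n r_{\\pm} = 1 \\pm \\sqrt{1 - 0.81 + 1.0} \\doteq 1 \\pm 1.09, \\qquad r_+ \\doteq 2.09,\\quad r_- \\doteq -0.09 .\n\\end{displaymath}\nA negative tidal charge thus pushes the outer horizon outward with respect to the Kerr black hole of the same spin, while the inner horizon is formally located at negative $r$, in accord with the behaviour discussed below.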
For extreme black holes ($1=a^2+b$) the horizons coincide, $r_+ = r_- = 1$.\n\n\\begin{figure}[!th]\n\n\\includegraphics[width=10cm]{fig1b}\n\n\\caption{\\label{fig_1}The inner horizon radius $r_-$ as a function of the tidal charge parameter $b$ for three representative values of the rotational parameter, $a^2=0.5$, $a^2=1.0$ and $a^2=1.5$.}\n\\end{figure}\nIt is clear that for $b\\ge 0$ the loci of the inner horizon $r_-$ are always\npositive. But for $b<0$, the loci of the inner horizon can also be at\nnegative $r$, as illustrated in Figure \\ref{fig_1}. \n\\par\nNotice that $a^2>1$ is not allowed for standard black holes or for $b>0$ \\cite{MTW}, but such a possibility appears for $b<0$. The rotational parameter of extreme black holes is given by $a^2=1-b$.\nThe case of $1<a^2+b$ corresponds to braneworld Kerr naked singularities.\n\nThe reality conditions $(\\mathrm{d} r\/\\mathrm{d} w')^2 \\ge 0$ and $(\\mathrm{d}\\theta\/\\mathrm{d} w')^2 \\ge 0$ lead to the restrictions on the impact parameter $\\mathcal{L}$\n\n\\begin{equation}\n\t\\mathcal{L}_{min} \\leq \\mathcal{L} \\leq \\mathcal{L}_{max},\\label{eq12}\n\\end{equation}\nwhere\n\n\\begin{equation}\n\t\\mathcal{L}_{max} \\equiv \\frac{(a\\lambda -2r +b)^2}{\\Delta}+ r^2+2r-b,\\label{eq13}\n\\end{equation}\nand\n\\begin{equation}\n\t\\mathcal{L}_{min}\\equiv\\left\\{ \\begin{array}{lcr} \n\t\t\t\t\\lambda^2 & \\textrm{for} & |\\lambda|\\geq a,\\\\\n\t\t\t\t2a|\\lambda|-a^2 & \\textrm{for} & |\\lambda|\\leq a. \n\t\t\t \\end{array}\\right.\\label{eq14}\n\\end{equation}\nThe upper (lower) constraint, $\\mathcal{L}_{max}$ ($\\mathcal{L}_{min}$), comes from the radial-motion (latitudinal-motion) reality condition. The properties of the photon motion are determined by the behaviour of the surface $\\mathcal{L}_{max}(r;\\lambda,a,b)$, as given by (\\ref{eq13}). The extrema of the surface $\\mathcal{L}_{max}$ (giving spherical photon orbits) are determined by\n\n\\begin{eqnarray}\n\t\\lambda=\\lambda_+ &\\equiv& \\frac{r^2+a^2}{a},\\label{eq15}\\\\\n\t\\lambda=\\lambda_- &\\equiv& \\frac{r^2-b r - a^2 - r\\Delta}{a(r-1)}.\\label{eq16}\n\\end{eqnarray}\nThe values of $\\mathcal{L}_{max}$ at these extreme points are given by\n\n\\begin{eqnarray}\n\t\\mathcal{L}_{max}(\\lambda_{+})\\equiv\\mathcal{L}_+ &=& 2r^2+a^2,\\label{eq17}\\\\\n\t\\mathcal{L}_{max}(\\lambda_{-})\\equiv\\mathcal{L}_- &=&\\frac{2r(r^3-3r+4b)+a^2(r+1)^2}{(r-1)^2}\\label{eq18}.\n\\end{eqnarray}\nThe character of the extrema follows from the sign of $\\partial^2\\mathcal{L}_{max}\/\\partial r^2$. One finds that\n\n\\begin{eqnarray}\n\\frac{\\partial^2 \\mathcal{L}_{max}}{\\partial r^2} &=& \\frac{8r^2}{\\Delta},\\quad\\textrm{for}\\quad \\lambda = \\lambda_+,\\label{eq19}\\\\\n\\frac{\\partial^2 \\mathcal{L}_{max}}{\\partial r^2} &=&\\frac{8r^2}{\\Delta} - \\frac{8r}{(r-1)^2},\\quad\\textrm{for}\\quad \\lambda=\\lambda_-.\\label{eq20}\n\\end{eqnarray}\nClearly, there are only minima of $\\mathcal{L}_{max}$ along $\\lambda=\\lambda_{+}$, corresponding to unstable\nspherical orbits.\n\\begin{figure}[ht]\n\n\\begin{tabular}{cc}\n \\includegraphics[width=6cm]{fig2a}&\\includegraphics[width=6cm]{fig2b} \n\\end{tabular}\n\n\\caption{\\label{fig2_a_b}Left: classification of the Kerr spacetimes in the braneworld universe according to the values of $a^2+b$, $b$ and $n_{ext}$ (the number of local extrema of the curves $\\tilde\\lambda_\\pm$, which is also the number of circular photon orbits in the equatorial plane).
\nThe classification regions are: I) for $a^2+b\\leq 1$ and $n_{ext}=2$, II) for $a^2+b\\leq 1$ and $n_{ext}=4$, III) for $a^2+b>1$ and $b<1$ and $n_{ext}=2$, IV) for $a^2+b>1$ and $b>1$ and $n_{ext}=2$, V) for $a^2+b>1$ and $n_{ext}=0$, VI) for $a^2+b>1$ and $b<1$ and $n_{ext}=4$, VII) for $a^2+b>1$ and $b>1$ and $n_{ext}=4$. \nRight: zoom of the area in the dashed rectangle of the left plot, to cover regions VI and VII.} \n\\end{figure}\n\n\nFurther, we have to determine where the restrictions given by the latitudinal motion $\\mathcal{L}_{min}$ meet the restrictions on the radial motion $\\mathcal{L}_{max}$. We find that $\\mathcal{L}_{max}=\\lambda^2$ (for $|\\lambda|\\ge a$) is fulfilled where\n\n\\begin{equation}\n \\lambda=\\tilde\\lambda_\\pm\\equiv\\frac{a(b-2r\\pm r^2\\sqrt{\\Delta})}{r^2-2r+b},\\label{eq23}\n\\end{equation}\nwhile $\\mathcal{L}_{max}= 2a|\\lambda| - a^2$ (for $|\\lambda|\\leq a$) is fulfilled where $\\lambda=\\bar\\lambda$. The braneworld Kerr spacetimes can then be classified according to the values of $a^2+b^{\\phantom{i}<}_{\\phantom{i}>} 1$, $b^{\\phantom{i}<}_{\\phantom{i}>} 1$ and $n_{ext}$.\nThe classification is represented in Figure \\ref{fig2_a_b}. There are two different classes of the black-hole spacetimes, differing by the presence of the photon circular orbits under the inner horizon. However, in the astrophysically relevant region outside the outer horizon, both classes are of the same character, having two unstable equatorial photon circular orbits, one corotating (at $r_{ph1}$) and the other counter-rotating (at $r_{ph2}>r_{ph1}$). The tidal charge $b$ introduces no qualitatively new feature into the behaviour of photon motion in the Kerr spacetimes, but the quantitative impact of $b<0$ with\nhigh magnitude is quite relevant, as shown in the next sections. All the braneworld Kerr black holes with tidal charge $b<0$ belong to the class II discussed in the case of standard Kerr-Newman spacetimes \\cite{Stu:1981b:}. We illustrate in Figures \\ref{fig3}-\\ref{fig5} the functions $\\lambda_\\pm$, $\\tilde\\lambda_\\pm$ and $\\bar\\lambda$ for such a black hole spacetime with parameters $a=0.9$ and $b=-1.0$. In this case, typical for braneworld Kerr black holes with $b<0$, there exist ten significant values of $\\lambda$, as given in Figures \\ref{fig3} - \\ref{fig5}.\n\n\\begin{figure}[!ht]\n\n \\includegraphics[width=12.0cm]{fig3}\n\n\\caption{\\label{fig3}The graphs of the $\\lambda_\\pm$, $\\tilde\\lambda_\\pm$ and $\\bar\\lambda$ functions are plotted for representative values of the parameters $a=0.9$ and $b=-1.0$. The two dashed rectangle areas labeled with numbers $1$ and $2$ are zoomed in the following figures. The horizontal gray dashed lines represent special values of the impact parameter $\\lambda$, denoted according to the text as $\\lambda_A$...$\\lambda_J$.}\n\\end{figure}\nFor each interval of $\\lambda$ as determined by the sequence of $\\lambda_A$ - $\\lambda_J$ introduced in Figure \\ref{fig3}, there exists a characteristic type of behaviour of the restricting ``radial'' function $\\mathcal{L}_{max}$ and its relation to the ``latitudinal'' restricting function $\\mathcal{L}_{min}$. These types of behaviour can be found in \\cite{Stu:1981b:} and will not be repeated here.\n\n\\begin{figure}[ht]\n\n\\begin{tabular}{cc}\n \\includegraphics[width=6cm]{fig4a}&\\includegraphics[width=6cm]{fig4b}\n\\end{tabular}\n\n\\caption{\\label{fig4}The left figure is the zoom of the dashed area labelled $1$ in the previous figure. The right figure is the zoom of the dashed area labelled $2$ in the previous figure.
The dashed rectangle area here is zoomed in the next figure.}\n\\end{figure}\n\n\\begin{figure}[ht]\n\n \\includegraphics[width=6.2cm]{fig5}\n\n\\caption{\\label{fig5}The zoom of the dashed rectangle area in previous figure.}\n\\end{figure}\n\nThe allowed values of the impact parameter $\\mathcal{L}$ lie between the limiting functions $\\mathcal{L}_{min}$ and $\\mathcal{L}_{max}$. If the minimum $\\mathcal{L}_{max}^{min}\\equiv\\mathcal{L}_{max}(r_{min},\\lambda_0)$ of the limiting function $\\mathcal{L}_{max}$ is less than the value of the limiting function $\\mathcal{L}_{min}$, an incoming photon ($k^r < 0$) travelling from infinity will return back for all values of $\\mathcal{L}_0\\in[\\mathcal{L}_{min};\\mathcal{L}_{max}]$. If $\\mathcal{L}_{max}^{min}>\\mathcal{L}_{min}$, \nthe incoming photon ($k^r < 0$) travelling from infinity returns back if its impact parameter $\\mathcal{L}_0$ \nsatisfies the condition $\\mathcal{L}_{0}\\ge\\mathcal{L}_{max}^{min}$ and is captured by the black hole \nif $\\mathcal{L}_0<\\mathcal{L}^{min}_{max}$. \nThe minimum $\\mathcal{L}_{max}^{min}$ determines (with the particular value of $\\lambda$) a photon spherical orbit, \ni.e., a sphere where photons move with $r=const$ but with varying latitude $\\theta$ (and, of course, varying $\\varphi$). \nWhen the condition $\\mathcal{L}_0 = \\mathcal{L}_{min}$ is satisfied simultaneously, the spherical photon orbit is transformed \nto an equatorial photon circular orbit. Photons with $\\mathcal{L}_0=\\mathcal{L}_{max}^{min}$ coming from distant regions or \nregions close to the black hole horizon will wind up around the photon sphere. \n \n\\clearpage\n\n\n\n\\section{\\label{sec:LEC}Light escape cones}\nThe optical phenomena related to accretion processes in the field of rotating black holes could be efficiently studied by using the notion of light escape cones of local observers (sources) that determine which portion of radiation emitted by a source could escape to infinity and, complementary, which portion is trapped by the black hole \\cite{SSJ:RAGTime:2005:Proceedings}. Here we focus our attention to four families of observers (sources) that are of direct physical relevance.\n\n\n\\subsection{Local frames of stationary and free-falling observers}\nWe consider three families of stationary frames, namely $LNRF$ (Locally Nonrotatig Frame), $SF$ (Static Frame) and $GF_\\pm$(Circular Geodesic Frame) and one non-stationary frame, namely $RFF$ (Radially Falling Frame). \nThe $LNRF$ are of highest physical importance since the physical phenomena take the simplest form when expressed in such frames, because the rotational spacetime effects are maximally suppressed there \\cite{Bardeen:1973:,MTW}. The $GF_\\pm$ are directly related to Keplerian accretion discs in the equatorial plane of the spacetime, both corotating and counterrotating, while $RFF$ are related to free-falling spherical accretion. The $SF$ are fixed relative to distant observers. The $GF_\\pm$ and $RFF$ are geodetical frames, while $SF$ and $LNRF$ are generally accelerated frames.\n\nThe radial and latitudinal 1-forms of the three stationary frame tetrads are common for all three stationary cases and read\n\n\\begin{eqnarray}\n\t\\omega^{(r)}&=&\\left\\{0,\\sqrt{\\Sigma\/\\Delta},0,0 \\right\\},\\label{LC9}\\\\\n\t\\omega^{(\\theta)}&=&\\left\\{0,0,\\sqrt{\\Sigma},0 \\right\\}.\\label{LC10}\n\\end{eqnarray}\n$LNRF$ correspond to observers with $\\Phi=0$ (zero angular momentum observers). 
Their time and azimuthal 1-forms read

\begin{eqnarray}
	\omega^{(t)}&=&\left\{\sqrt{\frac{\Delta\Sigma}{A}},0,0,0 \right\},\label{LC11}\\
	\omega^{(\varphi)}&=&\left\{-\Omega_{LNRF}\sqrt{\frac{A}{\Sigma}}\sin\theta,0,0,\sqrt{\frac{A}{\Sigma}}\sin\theta\right\},\label{LC12}
\end{eqnarray}
where

\begin{equation}
	\Omega_{LNRF}=\frac{a(2r-b)}{A}\label{LC13}
\end{equation}
is the angular velocity of the $LNRF$ as seen by observers at infinity.
\par
The tetrad of $SF$, corresponding to observers with $\Omega=0$, i.e., static relative to observers at infinity, is given by the formulae

\begin{eqnarray}
	\omega^{(t)}&=&\left\{ \sqrt{1-\frac{2r-b}{\Sigma}},0,0,\frac{a(2r-b)\sin^2\theta}{\sqrt{\Sigma^2-(2r-b)\Sigma}} \right\},\\
	\omega^{(\varphi)}&=&\left\{ 0,0,0,\sqrt{\frac{\Delta\Sigma}{\Sigma-(2r-b)}}\sin\theta \right\}.
\end{eqnarray}

The $GF_\pm$ observers move along the $\varphi$-direction in the equatorial plane with velocity $V_{GF\pm}$ ($+$ corotating, $-$ counterrotating) relative to the $LNRF$ and with angular velocity $\Omega$ relative to the static observers at infinity given by \cite{Stu-Kot:2008}
\begin{equation}
\Omega_\pm=\pm\frac{\sqrt{r-b}}{r^2 \pm a\sqrt{r-b}}. \label{ang_vel_gf}
\end{equation}

The velocity $V_{GF\pm}$ is given by

\begin{equation}
	V_{GF\pm}=\pm\frac{(r^2+a^2)Y\mp a(2r-b)}{\sqrt{\Delta}(r^2\pm aY)},\label{VGF}
\end{equation}
where $Y=\sqrt{r-b}$. The standard Lorentz transformation of the $LNRF$ tetrad gives the tetrad of $GF_\pm$ in the form
\begin{eqnarray}
	\omega^{(t)}_\pm&=&\left\{ \frac{r^2-2r+b\pm a Y}{Z_\pm},0,0,\mp\frac{(r^2+a^2)Y\mp a(2r-b)}{Z_\pm} \right\},\\
\omega^{(\varphi)}_\pm&=&\left\{\mp \frac{\sqrt{\Delta}Y}{Z_\pm},0,0,\frac{\sqrt{\Delta}(r^2\pm a Y)}{Z_\pm} \right\},
\end{eqnarray}
where

\begin{equation}
	Z_\pm = r\sqrt{r^2-3r+2b\pm2aY}.
\end{equation}
Note that the $GF_\pm$ family is restricted to the equatorial plane, while the $LNRF$ are defined at any $\theta$.

The $RFF$ observers have velocity

\begin{equation}
	V_{RFF}=\{V^{(r)},\,V^{(\theta)},\,V^{(\varphi)}\}
\end{equation}
as measured in the $LNRF$. The radially free-falling (or free-escaping) observers starting (finishing) at infinity move with $\theta = const$. Using the results of \cite{Stu-Bic-Bal:1999:}, we find the velocity components of the free-falling frames relative to the $LNRF$

\begin{eqnarray}
	V^{(r)}&=&\pm\sqrt{1-\frac{\Sigma\Delta}{A}},\\
	V^{(\theta)}&=&0,\\
	V^{(\varphi)}&=& 0.
\end{eqnarray}
Clearly, the free-falling (free-escaping) observers move only radially in the $LNRF$, in analogy to particles radially moving in the static frames of the Schwarzschild spacetimes.
For the radially free-falling sources, the tetrad components $\omega^{(\theta)}$ and $\omega^{(\varphi)}$ coincide with those of the $LNRF$ tetrad, while $\omega^{(t)}$ and $\omega^{(r)}$ are transformed.
The local Lorentz transformation of the $LNRF$ to the $RFF_\pm$ tetrad yields

\begin{eqnarray}
\omega_\pm^{(t)}&=&\left\{ \gamma\sqrt{\frac{\Delta\Sigma}{A}}, \mp\gamma\sqrt{\frac{\Sigma}{\Delta}}V,0,0 \right\},\\
\omega_\pm^{(r)}&=&\left\{\mp\gamma\sqrt{\frac{\Delta\Sigma}{A}}V,\sqrt{\frac{\Sigma}{\Delta}}\gamma,0,0\right\},\\
\omega_\pm^{(\theta)}&=&\{0,0,\sqrt{\Sigma},0\},\\
\omega_\pm^{(\varphi)}&=&\left\{-\Omega_{LNRF}\sqrt{\frac{A}{\Sigma}}\sin\theta,0,0,\sqrt{\frac{A}{\Sigma}}\sin\theta \right\}.
\end{eqnarray}


\subsection{Construction of escape cones}

\begin{figure}[ht]
	\includegraphics[width=10cm]{fig8}
\caption{\label{fig8}Definition of the directional angles $\alpha_0$, $\beta_0$ and $\gamma_0$ in a local frame. Vectors $\vec{e}_r$, $\vec{e}_\theta$, $\vec{e}_\varphi$ are the basic tetrad vectors. The position of the observer (source) is given by the coordinates $(r_0,\theta_0)$. Vector $\vec{k}$ represents a photon as observed by the observer in the given tetrad and vector $\vec{k}^\prime$ is its projection into the plane ($\vec{e}_\theta$, $\vec{e}_\varphi$).}
\end{figure}

For each direction of emission in the local frame of a source, there is a corresponding pair of values of the impact parameters $\lambda$ and $\mathcal{L}$ which can be related to the directional cosines of the photon trajectory in the local frame at the position of the source. Of course, the analysis of the turning points of the radial motion of photons, presented in the previous section, is crucial in determining the local escape cones, as the boundary of the escape cone is given by directional angles related to spherical photon orbits.

The projection of a photon 4-momentum $\vec{k}$ onto the local tetrad of an observer is given by the formulae

\begin{eqnarray}
k^{(t)}&=&-k_{(t)}=1,\label{LC1}\\
k^{(r)}&=&k_{(r)}=\cos\alpha_0,\label{LC2}\\
k^{(\theta)}&=&k_{(\theta)}=\sin\alpha_0\cos\beta_0,\label{LC3}\\
k^{(\varphi)}&=&k_{(\varphi)}=\sin\alpha_0\sin\beta_0,\label{LC4}
\end{eqnarray}
where $\alpha_0$, $\beta_0$ are the directional angles of the photon in the local
frame (see Figure \ref{fig8}) and $\cos\gamma_0=\sin\alpha_0\sin\beta_0$.
In terms of the local tetrad components of the photon 4-momentum and the related directional angles, the conserved quantities, namely, the azimuthal momentum $\Phi$, the energy $E$ and $K$, read

\begin{eqnarray}
	\Phi&=&k_\varphi=-\omega^{(t)}_{\phantom{(t)}\varphi}k^{(t)} + \omega^{(r)}_{\phantom{(r)}\varphi}k^{(r)}+\omega^{(\theta)}_{\phantom{(\theta)}\varphi}k^{(\theta)}+\omega^{(\varphi)}_{\phantom{(\varphi)}\varphi}k^{(\varphi)},\label{LC6}\\
	E&=&-k_t=\omega^{(t)}_{\phantom{(t)}t}k^{(t)} - \omega^{(r)}_{\phantom{(r)}t}k^{(r)}-\omega^{(\theta)}_{\phantom{(\theta)}t}k^{(\theta)}-\omega^{(\varphi)}_{\phantom{(\varphi)}t}k^{(\varphi)},\label{LC7}\\
 K&=&\frac{1}{\Delta}\left\{ [E(r^2+a^2)-a\Phi]^2-(\Sigma k^r)^2\right\}.\label{LC8}
\end{eqnarray}
The impact parameters $\lambda$ and $\mathcal{L}$ defined by relations (\ref{eq9}) and (\ref{eq10}) are thus fully determined by any double, $D$, of angles from the set $M=[\alpha_0,\beta_0,\gamma_0]$.

Having defined the source frame, we can construct light escape cones assuming fixed coordinates of the source $r_0$, $\theta_0$.
Their construction proceedes in the following steps:\n\n\\begin{itemize}\n\\item for given $D$, say $D=[\\alpha_0,\\beta_0]$, we calculate $\\lambda=\\lambda(\\alpha_0,\\beta_0)$,\n\\item $\\lambda$ determines the behaviour of $\\mathcal{L}_{max}=\\mathcal{L}_{max}(r;\\lambda)$,\n\\item from the analysis presented in the previous section we calculate minimum of $\\mathcal{L}_{max}$, which reads $\\mathcal{L}_{max}^{min}=\\mathcal{L}_{max}(r_{min};\\lambda)$,\n\\item we search for such a double $D$ which satisfies equation $\\mathcal{L}_0(\\alpha_0,\\beta_0)=\\mathcal{L}_{max}(r_{min};\\lambda)$.\n\\end{itemize}\nHere, we present in detail the construction of light escape cones in particular case of the $LNRF$. The procedure is analogous for the other stationary frames and simply modified for the free-falling frames, being radius dependent.\n\n\\begin{figure}[ht]\n\\begin{tabular}{ll}\n\\includegraphics[width=6cm]{fig9a} & \\includegraphics[width=6cm]{fig9b}\n\\end{tabular}\n\\caption{\\label{fig9_a_b}Left. The functions $\\mathcal{L}_{max}$ and $\\mathcal{L}_{min}=\\lambda_0^2$ are plotted together with representative constant functions $\\mathcal{L}_1$ and $\\mathcal{L}_2$ to demonstrate the construction of the photon escape cone. Right. The intersections of $\\mathcal{L}_{max}(\\gamma_0)$ with $\\lambda^2(\\gamma_0)$ give the interval of relevant values of $\\gamma_0\\in[\\gamma_{min};\\gamma_{max}]$.}\n\\end{figure}\n\n\n\n\\par\nThe impact parameter $\\lambda$ expressed in terms of the angle $\\gamma_0$, related to the $LNRF$, reads\n\n\\begin{equation}\n\t\\lambda_0=\\frac{1}{\\Omega_{LNRF0}+\\frac{\\Sigma_0\\sqrt{\\Delta_0}}{A_0\\sin\\theta_0\\cos\\gamma_0}},\n\\end{equation}\nwhere index '$0$' refers to the frame with coordinates $[r_0,\\theta_0]$. The minimum of $\\mathcal{L}_{max}$ is located at\n\n\\begin{equation}\n\tr_{min}=\\left\\{ \\begin{array}{lcr}\n\t\t\t\\sqrt{a\\lambda - a^2} & \\textrm{for} & \\lambda\\geq\\lambda_G = a\\\\\n\t\t\t1-\\frac{k_1}{k_2}+\\frac{k_2}{3} & \\textrm{for} & \\lambda<\\lambda_G = a\n\t\t\t\\end{array}\\right.\\label{eq_rmin}\n\\end{equation} \nwhere \n\n\\begin{eqnarray}\n\tk_1&=&a^2+2b+a\\lambda-3,\\\\\n\tk_2&=&\\left\\{ 27(1-a^2-b)+2\\sqrt{3}\\sqrt{27(1-a^2-b)^2+k_1^3}\\right\\}^{1\/3}.\n\\end{eqnarray}\nThe relevant values of $\\mathcal{L}$ lie between $\\mathcal{L}_{max}$ and\n$\\mathcal{L}_{min}$ determined by Eqs (\\ref{eq13}) and (\\ref{eq14}). The\nintersections of functions $\\mathcal{L}_{max}=\\mathcal{L}_{max}(\\gamma_0)$ and\n$\\mathcal{L}_{min}(\\gamma_0)$ give the relevant interval of angles\n$\\gamma\\in[\\gamma_{min},\\gamma_{max}]$ (see Figure \\ref{fig9_a_b}). For each $\\gamma$ from $[\\gamma_{min},\\gamma_{max}]$ we calculate minimal value of the photon impact parameter $\\mathcal{L}$ for which the photon reaches the turning point $r_{min}$ and escapes to infinity. This minimal value is the minimum of $\\mathcal{L}_{max}$ which is located at $r_{min}$, eg. 
$\\mathcal{L}_{max}=\\mathcal{L}_{max}(r_{min};\\lambda_0(\\gamma_0),a,b)$, where $r_{min}$ is given by (\\ref{eq_rmin}).\nNow we can calculate the value of $\\alpha_0$ using equation\n\n\\begin{equation}\n\t\\cos\\alpha_0=\\frac{k^{(r)}}{k^{(t)}}=\\frac{\\omega^{(r)}_{LNRF\\mu}k^\\mu}{\\omega^{(t)}_{LNRF\\mu}k^\\mu}.\n\\end{equation} \nWe arrive to the formula\n\n\\begin{equation}\n\t\\cos\\alpha_0=\\pm\\sqrt{A_0}\\frac{\\sqrt{(r_0^2+a^2-a\\lambda_0)^2-\\Delta_0(\\mathcal{L}_{max}^{min}-2a\\lambda_0+a^2)}}{-a(a\\sin^2\\theta_0-\\lambda_0)\\Delta_0+(r_0^2+a^2)(r_0^2+a^2-a\\lambda_0)},\n\\end{equation}\nwhere $A_0=A(r_0,\\theta_0)$, $\\Delta_0=\\Delta(r_0)$ and $\\mathcal{L}_{max}^{min}=\\mathcal{L}_{max}(r_{min};\\lambda_0,a,b)$. The angle $\\beta_0$ can be calculated from the formula (\\ref{LC4}).\nIn this way we obtain angles from the arc $\\beta_0\\in\\langle -\\pi\/2; \\pi\/2\\rangle$. The remaining arc $\\beta_0\\in\\langle \\pi\/2; 3\\pi\/2\\rangle$ can be obtained by turning the arc $\\beta_0\\in\\langle -\\pi\/2; \\pi\/2\\rangle$ around the symmetry axis determined by angles $\\beta_0=-\\pi\/2$ and $\\beta_0=\\pi\/2$. This procedure can be done because photons released under angles $\\beta_0$ and $\\pi-\\beta_0$ have the same constants of motion. \nClearly, for sources under the radius corresponding to the corotating\nequatorial photon circular orbit, only outward directed photons with no\nturning point of the $r$-motion can escape. With radius of the source\napproaching the event horizon ($r_0\\rightarrow r_+$), the escape cone shrinks\nto infinitesimal extension, except the case of extreme black hole \\cite{Bardeen:1973:}. For the other frames considered here, the procedure of\nthe related light escape cone construction can be directly repeated, but with\nthe relevant tetrad 1-form components being used in the procedure.\n\nIn order to reflect properly the effect of the tidal charge $b$ on the escape cone structure, we shall give the cones for black hole sequences of two kind: first we keep the spin $a$ fixed and change $b$, second we keep fixed \"distance\" to the extreme black hole states, i.e., $a^2+b$ is fixed, and both $a$ and $b$ are changed. The positive tidal charges have tendency to slightly increase the asymmetry of the cones as compared with $b=0$ case, keeping its character similar to the case of Kerr black holes (see next section). Therefore, we focus our attention to the influence of negative tidal charges. \n\n\\begin{figure}[ht]\n \\begin{tabular}{ccc}\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_a0n998_b0_re6_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a0n998_bm1_re6_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a9981_bm3_re6M_th0_ok}\\\\\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_b0_r020_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_bm1_r020_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a9981_bm3_re20M_th0_ok}\n \\end{tabular}\n\\caption{Light escape cones as seen by $LNRF$ in the vicinity of the braneworld kerr black hole. \nTop set of images is plotted for radial coordinate of emitter $r_e=6M$ and bottom set for $r_e=20M$.\nThe rotational parameter $a=0.9981$ is fixed and the representative values of the braneworld parameter $b$ are $0$ (left), $-1$ (middle) and $-3$ (right). The shaded area represents photons captured by black hole. 
}\\label{LNRF_fixed_a_on_b}\n\\end{figure}\n\n\n\n\\begin{figure}[ht]\n \\begin{tabular}{ccc}\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_a21_b0_re6M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a22_bm1_re6M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a24_bm3_re6M_th0_ok}\\\\\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_a21_b0_re20M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a22_bm1_re20M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a24_bm3_re20M_th0_ok}\n \\end{tabular}\n\\caption{Light escape cones as seen by $LNRF$ in the vicinity of the extreme braneworld kerr black hole. Top set of images is plotted for radial coordinate of emitter $r_e=6M$ and bottom set for $r_e=20M$. The representative rotational and braneworld parameters [$a^2$,$b$] are [$1.0$,$0.0$](left), [$2.0$,$-1.0$](middle) and [$4.0$,$-3.0$](right). The shaded area represents photons captured by black hole. }\\label{LNRF_extreme_on_b}\n\\end{figure}\n\n\nBehaviour of the $LNRF$ escape cones in dependence on the braneworld parameter\n$b$ (and the spin $a$) is represented in Figures \\ref{LNRF_fixed_a_on_b} and \\ref{LNRF_extreme_on_b}.\nThe complementary trapped cones, corresponding to photons captured by the black hole, are shaded. \n\nAt a fixed radius expressed in units of $M$ the extension of the trapped cone grows with descending of $b$ to higher negative values and fixed spin $a$ and mass $M$, demonstrating thus the growing gravitational pull of the black hole due to growing magnitude of the negative braneworld parameter. The same statement holds also in the case of extreme Kerr black holes, when $a$ grows and $b$ descends, while $M$ is fixed. Clearly, the positive braneworld parameters have tendency to increase the asymmetry of the cones, while the negative ones symmetrize the escape cones with growing of $|b|$. On the other hand, the asymmetry of the escape cone grows with descending of $b$ for extreme black holes (Figure \\ref{LNRF_extreme_on_b}).\n\n\n\t\\begin{figure}[ht]\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_b0_re1n24M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm1_re3n91M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm3_re6n27M_ok}\\\\\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_b0_re10M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm1_re10M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm3_re10M_ok}\\\\\n\t\t\t\\includegraphics[width=4cm]{esape_cones_gf_a0n9981_b0_re10rh_ok}&\n\t\t\t\\includegraphics[width=4cm]{esape_cones_gf_a0n9981_bm1_re10rh_ok}&\n\t\t\t\\includegraphics[width=4cm]{esape_cones_gf_a0n9981_bm3_re10rh_ok}\n\t\t\\end{tabular}\n\t\t\\caption{Escape cones of GF+ observers. Top images are plotted for observer (emitter) at $r=r_{ms}$, middle images $r=10M$ and bottom images for \n\t\t$r=10\\cdot r_h$. The value of $a=0.9981$ is kept fixed. The representative values of $b$ are (from left to right) $0.0$, $-1.0$ and $-3$. 
}\label{GF_escape_cones}
	\end{figure}	

	\begin{table}[ht]
		\tbl{Table of the relevant values of $r_{ms}$ and $r_{h}$ used in the plots of Figs \ref{GF_escape_cones} and \ref{GF_escape_cones_extreme}.}
		{\begin{tabular}{@{}cccc@{}} 
		\toprule
		$(a, b)$ & (0.9981,0.0) & (0.9981,-1.0) & (0.9981,-3.0)\\ 
		\colrule
		$r_{ms}$ & 1.24M & 3.91M & 6.27M\\
		\colrule
		$r_h$ & 1.062M & 2.002M & 2.73M\\
		\botrule
		\end{tabular}\label{tabulka1}}
	\end{table}

\begin{figure}[ht]
		\begin{tabular}{ccc}
			\includegraphics[width=4cm]{escape_cones_gf_a20n9999_b0_re1n062M_ok}&
			\includegraphics[width=4cm]{escape_cones_gf_a21n9999_bm1_re1n06M_ok}&
			\includegraphics[width=4cm]{escape_cones_gf_a23n9999_bm3_re1n06M_ok}\\
			\includegraphics[width=4cm]{escape_cones_gf_a20n9999_b0_re10M_ok}&
			\includegraphics[width=4cm]{escape_cones_gf_a21n9999_bm1_re10M_ok}&
			\includegraphics[width=4cm]{escape_cones_gf_a23n9999_bm3_re10M_ok}
		\end{tabular}
		\caption{Escape cones of $GF_+$ observers. The top images are plotted for an observer (emitter) at $r=r_{ms}$ and the bottom images for $r=10M$. The value of $a^2+b=0.9999$ is kept fixed. The representative values of $(a^2; b)$ are (from left to right) $(0.9999;0.0)$, $(1.9999;-1.0)$ and $(3.9999;-3)$. }\label{GF_escape_cones_extreme}
	\end{figure}	

 Further, we represent the influence of the braneworld parameter on the escape cones for the circular (corotating) geodesic frames in Figure \ref{GF_escape_cones}. Assuming astrophysically relevant sources in Keplerian accretion discs, their orbits must be located above the marginally stable orbit $r_{ms}$, determined implicitly by the condition \cite{Ali-Gum:2005:,Stu-Kot:2008}

\begin{equation}
 	a=a_{ms}(r;b)\equiv\frac{4(r-b)^{3/2}\mp r \sqrt{3r^2-2r(1+2b)+3b}}{3r-4b}.
\end{equation}
Therefore, we construct the escape cones for observers at $r=r_{ms}(a,b)$ and at fixed radii. In the sequence of black holes with fixed spin $a=0.9981$ (Figure \ref{GF_escape_cones}) we also include a subsequence of escape cones constructed at the same relative distance from the black hole horizon, in order to better illustrate the role of the tidal charge $b$. In the sequence of near-extreme black holes with $a^2+b=0.9999$ (Figure \ref{GF_escape_cones_extreme}) the third sequence is not necessary, as the black hole horizon is fixed at $r_h=1.01M$. Figures \ref{GF_escape_cones} and \ref{GF_escape_cones_extreme} demonstrate that the trapped cone expands as the tidal charge descends to lower negative values, both for black holes with fixed spin $a$ and for near-extreme black holes. On the other hand, considering the cones at $r_{ms}$, we can conclude that the descending tidal charge ($b<0$) symmetrizes their shape for fixed $a$, but makes them strongly asymmetric for near-extreme black holes, shrinking them strongly in the direction of the black hole rotation.

Finally, we demonstrate the relevance of the tidal charge $b$ in the character of the escape cones of the $RFF_-$ (comparing them with those related to the $LNRF$) in Figure \ref{fig13a_f}. We construct the escape cones for two typical values of the tidal charge ($b=0$, $b=-3$) in a sequence of radii where the free-falling source is radiating, demonstrating thus the combined growing influence of the black hole gravitational pull on the photon motion and the velocity of the free-falling source.
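The values of $r_{ms}$ quoted in Table \ref{tabulka1} follow from this condition. As a simple illustration of how they can be obtained, the following minimal Python sketch (our own illustration, not the code used for the figures) inverts $a=a_{ms}(r;b)$ on the corotating branch by bisection, assuming $b\le 0$ and geometric units $G=c=M=1$:

\begin{verbatim}
# Minimal sketch: invert a = a_ms(r; b) for the corotating branch (upper
# sign) to obtain r_ms, in geometric units G = c = M = 1.
# Assumes a black-hole spacetime with b <= 0, where a_ms(r; b) decreases
# monotonically with r on the bracket used below.
from math import sqrt

def a_ms(r, b):
    # corotating branch of the marginally stable orbit condition
    num = 4.0 * (r - b) ** 1.5 \
        - r * sqrt(3.0 * r * r - 2.0 * r * (1.0 + 2.0 * b) + 3.0 * b)
    return num / (3.0 * r - 4.0 * b)

def r_ms(a, b, r_lo=1.001, r_hi=20.0, tol=1e-10):
    # bisection on a_ms(r; b) = a; the bracket covers the cases used here
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if a_ms(r_mid, b) > a:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)

for b in (0.0, -1.0, -3.0):
    print(b, r_ms(0.9981, b))  # compare with Table 1: 1.24M, 3.91M, 6.27M
\end{verbatim}

For $a=0.9981$ and $b=0,-1,-3$ the sketch returns radii close to the values listed in Table \ref{tabulka1}.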
In order to illustrate the phenomena in a clear way, we compare the $RFF_-$ escape cones to the corresponding $LNRF$ escape cones. Clearly, the tidal charge descending to higher negative values makes stronger squeezing of the free-falling cones relative to the $LNRF$ escape cones at any fixed radius. Notice that both the $RFF_-$ and $LNRF$ cones are shifted due to the black hole rotational dragging. \nWe again observe the tendency of negative brany parameters to symmetrize and squeeze the escape cones. At a fixed $r$, the escape cones become smaller for growing $|b|$ due to stronger gravity. For completeness we present sequence of both the $RFF_-$ and $LNRF$ escape cones at the three fixed radii for an extreme black hole with $b=-3$ and $a^2=4$. We observe that both the $RFF_-$ and $LNRF$ cones are strongly shifted in the sense of the black hole rotation in vicinity of black hole horizon due to growing influence of the spin. The symmetrizing effect of descending values of negative tidal charge is canceled by strong influence of the rotational effects due to growing black hole spin.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n \\begin{tabular}{ccc}\n \\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_b0_re5}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_bm3_re5}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a2_bm3_re5}\\\\\n \\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_b0_re10}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_bm3_re10}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a2_bm3_re10}\\\\\n \\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_b0_re15}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_bm3_re15}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a2_bm3_re15}\n \\end{tabular}\n\\caption{\\label{fig13a_f}Comparison of the effect of the tidal charge $b$ on the shape of\n light escape cones of locally nonrotating (dashed curves) frames and free falling (solid curves)\n frames. In the left column light escape cones are plotted for the tidal\n charge parameter $b=0$ and in the middle one the light escape cones are\n plotted for $b=-3$. The spin $a=0.9981$ is kept fixed in both columns. The right column gives the sequence of the escape cones for an extreme black hole with [$a^2=4;b=-3$]. Emitting sources in all plots are moving in the equatorial plane. The radial distances of emitter are $r_e=5M$ (top row), $r_e=10M$ (middle row) and $r_e=15M$ (bottom row).}\n\\end{figure}\n\n\n\n\\clearpage\n\n\n\\section{\\label{sec:Silhuette}Silhuette of braneworld Kerr black hole}\n\nIn principle, it is of astrophysical importance to consider a black hole in front of a source of illumination whose angular size is large compared with the angular size of the black hole \\cite{Bardeen:1973:}. A distant observer will see a silhuette of the black hole, i.e., a black hole in the larger bright source. The rim of the black hole silhuette corresponds to photon trajectories spiralling around the black hole many times before they reach the observer. Of course, the shape of the silhuette enables, in principle, determination of the black hole parameters. 
But we have to be aware of the strong dependency of the silhuette shape on the observer viewing angle; clearly, the shape will be circular for observers on the black hole rotation axis, and its deformation grows with observer approaching the equatorial plane.\n\nAssuming that distant observers measure photon directions relative to the symmetry center of the gravitational field, the component of the angular displacement perpendicular to the symmetry axis is given by $-p^{(\\varphi)}\/p^{(t)}$ (for black hole rotating anticlockwise relative to distant observers), while for angular displacement parallel to the axis it is given by $p^{(\\theta)}\/p^{(t)}$. These angles are proportional to $1\/r_0$, therefore, it is convenient to use the impact parameters in the form independent of $r_0$ \\cite{Bardeen:1973:}\n\n\\begin{equation}\n \\tilde{\\alpha}=-r_0\\frac{p^{(\\varphi)}}{p^{(t)}}=-\\frac{\\lambda}{\\sin\\theta_0},\\label{silalpha}\n\\end{equation}\n\nand\n\n\\begin{eqnarray}\n \\tilde{\\beta}&=&r_0\\frac{p^{(\\theta)}}{p^{(t)}}=\\left[q+a^2\\cos^2\\theta_0-\\lambda^2\\cot^2\\theta_0\\right]^{1\/2}\\nonumber\\\\\n&&=\\left[\\mathcal{L}+a^2\\cos^2\\theta-\\frac{\\lambda^2}{\\sin^2\\theta_0}\\right]^{1\/2}.\\label{silbeta}\n\\end{eqnarray}\nPhoton trajectories reaching the observer are represented by points in the $(\\tilde{\\alpha}-\\tilde{\\beta})$ plane representing a small portion of the celestial sphere of the observer.\n\nThe shape of the black hole silhuette is the boundary of the no-turning-point region, i.e., it is the curve $\\mathcal{L}=\\mathcal{L}^{min}_{max}(\\lambda)$ expressed in the $(\\tilde{\\alpha}-\\tilde{\\beta})$ plane of the impact parameters. For observers in the equatorial plane $(\\theta_0 = \\pi\/2)$, $\\tilde{\\alpha}=-\\lambda$, $\\tilde{\\beta}=(\\mathcal{L}-\\lambda^2)^{1\/2}=q^{1\/2}$.\n\n\n\\begin{figure}[ht]\n\n\\begin{tabular}{cc}\n \\includegraphics[width=5.5cm]{silhuettes_a0n6_b}&\\includegraphics[width=5.5cm]{silhuettes_extreme_b}\n\\end{tabular}\n\n\\caption{\\label{fig15}Left figure. The $(\\bar\\alpha_0,\\bar\\beta_0)$ plots of the silhuettes of braneworld Kerr black hole on a bright background for rotational parameter $a^2=0.6$ and four representative values of tidal charge parameter $b=-3.0$, $b=-0.4$, $b=0.0$ and $b=0.4$. The observer is located at $r_0=10^4 M$ and $\\theta_0=90^\\circ$.\nRight figure. The silhuettes of extreme black holes for three representative values of braneworld parameter $b=0$ (solid), $b=-1$ (dashed) and $b=-3$ (dotted). Static observer is in equatorial plane at radial distance from the centre $r_0=10^4 M$.}\n\\end{figure}\n\n\n\n\n\n\n\nWe consider that the black hole is observed by static distant observers. Therefore, we shall use the static frames introduced above. The silhuette of the black hole is quite naturally related to their trapped (escape) light cones.\n\nThe marginal values of impact parameters $\\lambda_0$ and $\\mathcal{L}_0$(resp $q_0$) are obtained from the light escape cone. Using the stationarity of the braneworld Kerr spacetime we ``shoot out`` virtual photons from observer (static frame at very large distance $r_0$) and we are looking for the light escape cone of this virtual source (using the results of the previous section). The trapped light cone of this virtual source is constructed from the light escape cone of the virtual source by transformations of directional angle $\\alpha_0$ to $\\bar{\\alpha}_0=\\pi - \\alpha_0$ and directional angle $\\beta_0$ to $\\bar{\\beta}_0=\\beta_0$. 
In this way we get marginal directions for received photons from bright background behind the black hole. Then we can use the formulas (\\ref{LC6}), (\\ref{LC7}) and (\\ref{LC8}) to calculate the marginal values of $\\lambda_0$ and $q_0$($\\mathcal{L}_0$) in order to obtain the silhuette of the braneworld Kerr black hole in the plane $(\\tilde{\\alpha}-\\tilde{\\beta})$, i.e., the set of doubles $(\\tilde{\\alpha}_0,\\tilde{\\beta}_0)$ from equations (\\ref{silalpha}) and (\\ref{silbeta}). Here we plotted the silhuette directly from the trapped light cone $(\\bar{\\alpha}_0,\\bar{\\beta}_0)$ on the observer's sky $(\\bar{\\alpha}_0\\sin\\bar{\\beta}_0,\\bar{\\alpha}_0\\cos\\bar{\\beta}_0)$. Note that the angle $\\bar\\alpha_0$ is the radial coordinate and the angle $\\bar\\beta_0$ is the polar coordinate in the polar graph of the silhuette. \n\n\n\\begin{figure}[ht]\n\\begin{tabular}{cc}\n\t\\includegraphics[width=6cm]{silhuettes_a0n8_b02_th}&\\includegraphics[width=6cm]{silhuettes_a0n8_b0_th}\\\\\n\\includegraphics[width=6cm]{silhuettes_a0n8_bm1_th}&\\includegraphics[width=6cm]{silhuettes_a0n8_bm3_th}\n\\end{tabular}\n\\caption{\\label{fig18_a_d}The silhuettes of rotating braneworld black hole on a bright background. Each image contains three black hole shapes for three representative values of observer's inclination angle $\\theta_0=\\{0^\\circ(solid),45^\\circ(dashed),90^\\circ(dotted)\\}$, observer's radial coordinate $r_0=10^4 M$ and the rotational parameter $a^2=0.8$. Top left image: $b=0.2$. Top right image: $b=0.0$. Bottom left image: $b=-1.0$. Bottom right image: $b=-3.0$.}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{tabular}{cc}\n\t\\includegraphics[width=6cm]{silhuettes_a1_b0_th_extreme}&\\includegraphics[width=6cm]{silhuettes_a2_bm3_th_extreme}\n\\end{tabular}\n\\caption{The silhuettes of extreme rotating braneworld black holes on a bright background. Each image contains three black hole shapes for three representative values of observer's inclination angle $\\theta_0=\\{0^\\circ(solid),45^\\circ(dashed),90^\\circ(dotted)\\}$, observer's radial coordinate $r_0=10^4 M$. Silhuettes on the left figure are plotted for extreme black holes with $a^2=1$ and $b=0$ and on the right side for $a^2=4$ and $b=-3$.}\\label{fig19_a_b}\n\\end{figure}\n\n\n\\par \n\tWe shall give the silhuette of the black hole for observers located at fixed radius $r_0=10^4$M that corresponds to the angular size of $\\alpha\\sim 1.4$arcsec; for higher distances the angular size falls accordingly to the $1\/r_0$ dependence. \n\nFirst, we give an illustrative picture of the tidal charge influence on the silhuette properties for maximal inclination angle $\\theta_0=90^\\circ$ when the black hole rotational effects are strongest (Figure \\ref{fig15}). We present a sequence of silhuettes for fixed black hole spin and varying $b$ (left) and for extreme black holes with $a^2+b=1$ and both $a$, $b$ varying (right). We clearly see that the positive tidal charge squeezes magnitude of the silhuette making its shape more asymmetric, while negative tidal charge enlarges silhuette's diameter symmetrizing its shape when $a$ is fixed. 
For extreme black holes the silhuette asymmetry is kept but its extension grows with $b$ descending to higher negative values.\n\n\tSecond, there is a crucial effect of the viewing angle $\\theta_0$ onto the shape of the black hole silhuette, demonstrated in Figure \\ref{fig18_a_d} for representative values of $b$ and fixed spin $a$, and in Figure \\ref{fig19_a_b} for extreme black holes with parameters [$a^2=1$;$b=0$] and [$a^2=4$;$b=-3$].\n\n\n\nThe rotational effect on the shape of the silhuette grows with inclination angle growing and becomes strongest when $\\theta_0=\\pi\/2$; then the suppressing effect of the braneworld parameter is given in the most explicit form as demonstrated in Figure \\ref{fig15}. \n\n\nThe negative values of the braneworld parameter have the tendency to make the silhuette of a Kerr black hole (with $a^2$ fixed and for $r_0$, $\\theta_0$ fixed) spherical, suppressing thus the rotational effects. However the symmetrizing effect of the tidal charge could be masked by symmetrizing effect of the viewing angle $\\theta_0$. Therefore, it is very important for black hole parameter estimates to have observational limits on the value of $\\theta_0$.\n\n\\begin{figure}[ht]\n\t\\includegraphics[width=8cm]{fig14} \n\\caption{\\label{fig14}We define shift $s$ and ellipticity $\\epsilon=x\/y$ as parameters enabling us to characterize the magnitude of distorsion of Kerr black hole silhuette in braneworld universe.}\n\\end{figure}\n\n\n\nIn order to characterize the influence of the tidal charge on the silhuette of a Kerr black hole we define two quantities in principle measurable by distant observers. The \\emph{shift} $s$ of the silhuette \n\\begin{equation}\n \ts=\\tilde\\alpha(\\beta_m)\\sin(\\beta_m - \\pi),\\label{eqA}\n\\end{equation}\nand its \\emph{ellipticity} $\\epsilon$\n\\begin{equation}\n \t\\epsilon=\\frac{\\tilde\\alpha(\\beta=90^\\circ)+\\tilde\\alpha(\\beta=270^\\circ)}{2\\tilde\\alpha(\\beta_m)\\cos(\\beta_m - \\pi)},\\label{eqB}\n\\end{equation}\nwhere $\\beta_m$ is defined by $\\tilde\\alpha(\\beta_m)\\sin(\\beta_m - \\pi)\\ge \\tilde\\alpha(\\beta)\\sin(\\beta - \\pi),\\quad \\forall \\beta\\in[\\pi\/2,3\/2\\pi]$ i.e., it defines maximal extension of the silhuette in the $x$-direction. The definition of \\emph{shift} $s$ and \\emph{elipticity} $\\epsilon$ is illustrated in Figure \\ref{fig14}.\n\nWe calculated shift $s$ and ellipticity $\\epsilon$ as functions of tidal\nparameter $b$ for the Kerr black hole with rotational parameter $a^2=0.9995$\n(see Figure \\ref{fig16_a_b}).\nClearly, these are quantities that could be measured and used for a black hole parameters estimates, if observational techniques could be developed to the level enabling the silhuette detailed measuring. We shall discuss such a possibility for the case of the supermassive black hole predicted in the Galaxy Centre (Sgr $A^*$).\n \n\n\n\n\\begin{figure}[ht]\n\\begin{tabular}{cc}\n\t\\includegraphics[width=6cm]{shift_th89n9_th45_a0n9995_r10p4} &\\includegraphics[width=6cm]{ellipticity_th89n9_th45_a0n9995_r10p4} \\\\\n\t\\includegraphics[width=6cm]{ellipticity_extreme_th090_r0104}&\\includegraphics[width=6cm]{shift_extreme_th090_r0104}\t\n\\end{tabular}\n\\caption{\\label{fig16_a_b\n Top row. Left figure: the shift $s=s(b)$ as a function of braneworld parameter $b$. Right figure: the ellipticity $\\epsilon=\\epsilon(b)$ as a function of $b$. There are two curves on each image, one for observer inclination angle $\\theta_0=45^\\circ$ and second for $\\theta_0=89.9^\\circ$. 
The rotational parameter of black hole is fixed to value $a=0.9995$ and the radial coordinate of observer if $r_0=10^4 M$.\nBottom row. The ellipticity $\\epsilon$ (left) and shift $s$ (right) of the extreme black hole silhuette as functions of braneworld parameter $b$. Observer's coordinates are $\\theta_0=\\pi\/2$ and $r_0=10^4 M$. }\n\\end{figure}\n\n\n\n\n\\clearpage\n\\section{\\label{sec:DirAndIndirImages}Direct and indirect images of radiating disc}\n\nModelling of spectral line profiles of a thin radiating ring rotating in the equatorial plane of a braneworld Kerr black hole or light curve of an isotropically emitting point source orbiting such a black hole will give us information about the influence of the braneworld parameter $b$ on the optical phenomena in the strong field regime \\cite{SS:b:RAGTime:2007:Proceedings}. Here we restrict our attention to images of radiating discs. We can then, at least in principle, obtain estimates on the astrophysically acceptable values of the braneworld parameter $b$. \n\n\\subsection{Images of isoradial geodesics}\n\t\nCalculating images of an accretion disc (ring) in the equatorial plane of a braneworld Kerr black hole is the first step to calculate the optical phenomena. Generally one could obtain a direct and an indirect image (see Figures \\ref{fig19} and \\ref{fig20}), but in special cases the situation can be much more complicated due to complex character of the latitudinal and azimuthal photon motion. Here we focus our attention to the direct and indirect images of isoradial geodesics.\n\nIn order to find all relevant positions of points forming the rotating ring on observer's sky, we have to find photon trajectories between the ring particles and the observer, i.e., we seek for such doubles of local observational angles $[\\alpha_0,\\beta_0]$ that satisfy the condition\n\n\n\n\\begin{figure}[ht]\n\n \\includegraphics[width=10cm]{fig19}\n\n\\caption{\\label{fig19}\\emph{Direct} image of the rotating ring in the equatorial plane at $r_e=6M$ around braneworld Kerr black hole with rotational parameter $a^2=0.5$ for four representative values of tidal charge parameter $b=-3.0$, $b=-0.4$, $b=0.0$ and $b=0.4$. The observer is located at $r_0=10^4 M$ and $\\theta_0=85^\\circ$. }\n\\end{figure}\n\n\\begin{figure}[ht]\n\n \\includegraphics[width=8cm]{fig20}\n\n\\caption{\\label{fig20}\\emph{Indirect} image of the rotating ring in the equatorial plane at $r_e=6M$ around braneworld Kerr black hole with rotational parameter $a^2=0.5$ for four representative values of tidal charge parameter $b=-3.0$, $b=-0.4$, $b=0.0$ and $b=0.4$. The observer is located at $r_0=10^4 M$ and $\\theta_0=85^\\circ$. }\\label{indirect_a_05_th85_on_b}\n\\end{figure}\n\n\n\\begin{equation}\n \tI_U(\\alpha_0,\\beta_0;n_u,u_{sgn}) - I_M(\\alpha_0,\\beta_0;n,p,s)=0.\\label{bvp}\n\\end{equation}\nHere we introduced the modified radial coordinate $u=1\/r$ and cosine of latitudinal coordinate $\\mu=\\cos\\theta$ \\cite{Rau-Bla:1994:}. In the condition (\\ref{bvp}) $n_u$ is the number of turning points in $u$ coordinate, $n$ is the number of turning points passed in $\\mu$ coordinate, $p=mod(n,2)$, $s=(1-\\mu_{sgn})\/2$. 
In terms of $u$ and $\mu$ we define the functions $I_U$ and $I_M$ by

\begin{equation}
	I_U(\alpha_0,\beta_0;n_u,u_{sgn})\equiv\left\{\begin{array}{lcr}
					-u_{sgn}\left(\int^{u_0}_{u_t} +\int^{u_e}_{u_t}\right) & \textrm{for} & n_u=1\\
					u_{sgn}\int^{u_e}_{u_0} & \textrm{for} & n_u=0
					\end{array}\right.
\end{equation}
and
\begin{eqnarray}
	I_M(\alpha_0,\beta_0;n,p,s)&\equiv&\mu_{sgn}\left[\int^{\mu_+}_{\mu_0} + (-1)^{n+1}\int^{\mu_+}_{\mu_e}+\right.\\ \nonumber
&+&\left.(-1)^s[(1-p)n+p[(1-s)(n-1)+s(n+1)]]\int^{\mu_+}_{\mu_-} \right]
\end{eqnarray}
with
\begin{eqnarray}
	\int^{u_2}_{u_1}&\equiv&\int^{u_2}_{u_1}\frac{\mathrm{d} u}{\sqrt{U(u)}},\label{u_int}\\
	U(u)&=&1+(a^2-\lambda^2-q)u^2+2[(\lambda-a)^2+q]u^3 - \nonumber\\
	&-&[q(a^2+b)+b(a-\lambda)^2]u^4
\end{eqnarray}
and

\begin{eqnarray}
	\int^{\mu_2}_{\mu_1}&\equiv&\int^{\mu_2}_{\mu_1}\frac{\mathrm{d} \mu}{\sqrt{M(\mu)}},\label{mu_int}\\
	M(\mu)&=&q+(a^2-\lambda^2-q)\mu^2-a^2\mu^4.
\end{eqnarray}

\subsection{Integration of photon trajectories}

We express the integrals (\ref{u_int}) and (\ref{mu_int}) in the form of the standard elliptic integrals of the first kind. Rauch and Blandford presented the tables of reductions of the $u$-integrals and $\mu$-integrals for the case of photons in the Kerr geometry \cite{Rau-Bla:1994:}. Here we extend those reductions to the case of a nonzero braneworld parameter $b$. Because the $\mu$-integral does not depend on the braneworld parameter $b$, the transformations are the same as in the case of the Kerr metric \cite{Rau-Bla:1994:}, but we include them for completeness.

There are two cases we distinguish in the latitudinal integral (see Table \ref{tableEIM}). In the first case there is one positive root, $M_+>0$, and one negative root, $M_-<0$, of $M(m^2)$; this implies that there are two turning points located symmetrically about the equatorial plane, given by $\pm\sqrt{M_+}$ (so-called orbital motion \cite{Bic-Stu:1976:,Fel-Cal:1972:}). In the second case there are two positive roots, $0<M_-\le M_+$, and the latitudinal motion does not reach the equatorial plane (so-called vortical motion). In the radial integral we distinguish five cases according to the distribution of the roots $\beta_i$ of $U(u)=0$ (see Table \ref{tableEI}):

\begin{itemize}
\item
The \textbf{case I}: four real roots of $U(u)=0$: $\beta_1>\beta_2>\beta_3>0$ and $\beta_4<0$. The value of modified constant of motion $\tilde{q}>0$.
\item
The \textbf{case II}: four real roots as in the case I but their values form the following order: $\beta_1>\beta_2>0$ and $\beta_4<\beta_3<0$. The value of modified constant of motion $\tilde{q}<0$. 
\item
The \textbf{case III}: two real and two complex roots of $U(u)=0$: $\beta_1$ being a complex root, $\beta_2=\bar{\beta_1}$ and $\beta_4<\beta_3<0$. The value of modified constant of motion $\tilde{q}<0$.
\item
The \textbf{case IV}: only complex roots: $\beta_2=\bar{\beta_1}$ and $\beta_4=\bar{\beta_3}$. The value of modified constant of motion $\tilde{q}<0$. 
\item
The \textbf{case V}: two real and two complex roots of $U(u)=0$: $\beta_1>0$, $\beta_4<0$, $\beta_2$ being a complex root and $\beta_3=\bar{\beta_2}$. 
\n\\end{itemize}\n\n\\begin{table}[!ht]\n\\tbl{The reductions of $\\int^m_{m_1}\\mathrm{d} m'\/\\sqrt{M(m')}=I_M$} \n{\\begin{tabular}{@{}lllll@{}}\\toprule\n \tCase & $\\tan\\Psi$ & $m$ & $c_1$ & $m_1$\\\\ \\colrule\n\t\\\\\n\t$M_-<0$ & $\\sqrt{\\frac{M_+}{m^2}-1}$ & $\\frac{M_+}{M_+-M_-}$ & $\\frac{1}{\\sqrt{a^2(M_+-M_-)}}$ & $\\sqrt{M_+}$\\\\\n\t\\\\\n\t$M_->0$ & $\\sqrt{\\frac{M_+-m^2}{m^2-M_-}}$ & $\\frac{M_+-M_-}{M_+}$ &\n $\\frac{1}{a^2}$ & $\\sqrt{M_+}$\\\\ \\botrule\n \\end{tabular}\\label{tableEIM}}\n\\end{table} \n\n\\begin{table}[!ht]\n\\tbl{The reductions of $\\int^u_{u_1}\\mathrm{d} u'\/\\sqrt{U(u')}=I_U$}\n{\\begin{tabular}{@{}lllll@{}}\\toprule\n\n \tCase & $\\tan\\Psi$ & $m$ & $c_1$ & $u_1$\\\\ \\colrule\n\t\n\tI & $\\sqrt{\\frac{(\\beta_1-\\beta_3)(u-\\beta_4)}{(\\beta_1-\\beta_4)(\\beta_3-u)}}$ & $\\frac{(\\beta_1-\\beta_2)(\\beta_3-\\beta_4)}{(\\beta_1-\\beta_3)(\\beta_2-\\beta_4)}$ & $\\frac{2}{\\sqrt{\\tilde{q}(b1-b3)(b2-b4)}}$ & $\\beta_4$\\\\\n\t\\\\\n\tII & $\\sqrt{\\frac{(\\beta_1-\\beta_2)(u-\\beta_3)}{(\\beta_1-\\beta_3)(\\beta_2-u)}}$ & $\\frac{(\\beta_2-\\beta_3)(\\beta_1-\\beta_4)}{(\\beta_1-\\beta_2)(\\beta_4-\\beta_3)}$ & $\\frac{2}{\\sqrt{-\\tilde{q}(b1-b2)(b3-b4)]}}$ & $\\beta_3$\\\\\n\t\\\\\n\tIII & $\\frac{2c_2(u)}{|1-c^2_2(u)|}$ & $\\frac{4c_4 c_5 - (\\beta_3 - \\beta_4)^2 - c_4 c_5}{4c_4 c_5}$ & $\\frac{1}{\\sqrt{-\\tilde{q}c_4 c_5}}$ & $\\beta_3$\\\\\n\t\\\\\n\tIV & $\\frac{u-c_3}{\\Im(\\beta_1)(1+c_2^2)+c_2(u-c_3)}$ & $1-\\left(\\frac{c_4-c_5}{c_4+c_5}\\right)^2$ & $\\frac{2}{(c_4+c_5)\\sqrt{-\\tilde{q}}}$ & $c_3$\\\\\n\t\\\\\n\tV & $\\frac{2c_2(u)}{|1-c^2_2(u)|}$ & $1-\\frac{(c_4+c_5)^2-(\\beta_1 -\n \\beta_4)^2}{4c_4 c_5}$ & $\\frac{1}{\\sqrt{\\tilde{q}c_4 c_5}}$ &\n $\\beta_4$\\\\ \\botrule\n \\end{tabular}\\label{tableEI}}\n\\end{table}\n\n\\begin{table}[!th]\n\\tbl{Definitions for Table \\ref{tableEI}.}\n{\\begin{tabular}{@{}lll@{}}\\toprule\n \n \tCase & $^1 c_2$ & $^1 c_3$\\\\ \\colrule\n\t\n\tIII & $\\left[\\frac{c5(u-\\beta_3)}{c_4(u-\\beta_4)}\\right]^{1\/2} $ & -\\\\\n\t\\\\\n\tIV & $\\left\\{\\frac{4[\\Im(\\beta_1)]^2-(c_4-c_5)^2}{(c_4+c_5)^2-4[\\Im(\\beta_1)]^2}\\right\\}^{1\/2}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\Re(\\beta_1)+c_2\\Im(\\beta_1)$}$\\\\\n\t\\\\\n\tV & $\\left[\\frac{c4(u-\\beta_4)}{c_5(\\beta_1-u)}\\right]^{1\/2} $& -\\\\ \\botrule\n\\end{tabular}\\label{tableEI2}}\n\\end{table}\n\n\n\\begin{table}[!th]\n\\tbl{Definitions for Table \\ref{tableEI} and Table \\ref{tableEI2}.}\n {\\begin{tabular}{@{}lll@{}}\\toprule\n \n \tCase & $^1 c_4$ & $^1 c_5$\\\\ \\colrule\n\t\n\tIII & $\\mbox{\\fontsize{8}{10}\\selectfont$\\left\\{\\left[\\Re(\\beta_1)-\\beta_3\\right]^2+[\\Im(\\beta_1)]^2\\right\\}^{1\/2}$}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_1)-\\beta_4\\right]^2+[\\Im(\\beta_1)]^2\\right\\}^{1\/2}$}$\\\\\n\t\\\\\n\tIV & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_1)-\\Re(\\beta_3)\\right]^2+[\\Im(\\beta_1)+\\Im(\\beta_3)]^2\\right\\}^{1\/2}$}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_1)-\\Re(\\beta_3)\\right]^2+[\\Im(\\beta_1)-\\Im(\\beta_3)]^2\\right\\}^{1\/2}$}$\\\\\n\t\\\\\n\tV & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_2)-\\beta_1\\right]^2+[\\Im(\\beta_2)]^2\\right\\}^{1\/2}$}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_2)-\\beta_4\\right]^2+[\\Im(\\beta_2)]^2\\right\\}^{1\/2}$}$\\\\ \\botrule\n\\end{tabular}}\n\\begin{tabular}{c}\n\t$^1$\\textit{The symbols $\\Re(x)$ and $\\Im(x)$ 
refer to real and imaginary part of $x$ here.}\n\\end{tabular}\n\\end{table}\n\n\n\\par\n\nUsing presented transformations we can write the integrals (\\ref{u_int}) and (\\ref{mu_int}) in the form\n\n\\begin{equation}\n \\int^{u}_{u_1}\\frac{1}{\\sqrt{U(\\tilde{u})}}\\mathrm{d} \\tilde{u} = c_1\\mathcal{F}(\\Psi;m)\\label{ellint}\n\\end{equation}\nand \n\\begin{equation}\n \\int^{\\mu}_{\\mu_1}\\frac{1}{\\sqrt{M(\\tilde{\\mu})}}\\mathrm{d} \\tilde{\\mu} = c_1\\mathcal{F}(\\Psi;m)\\label{ellintM}\n\\end{equation}\nwhere $\\mathcal{F}$ is the elliptic integral of the first kind and $u_1$(resp $\\mu_1$) depends on the case of root distribution of quartic equation $U(u)=0$ (resp. $M(\\mu)=0$) as given in Table \\ref{tableEI} (resp \\ref{tableEIM}). If, in the cases III and V, the value of $1-c_2^2(u)<0$, we have to take instead of (\\ref{ellint}) the form\n\n\\begin{equation}\n \\int^{u}_{u_1}\\frac{1}{\\sqrt{U(\\tilde{u})}}\\mathrm{d} \\tilde{u} = c_1(2\\mathcal{K}(m)-\\mathcal{F}(\\Psi;m)),\\label{ellint1}\n\\end{equation}\nwhere $\\mathcal{K}$ is the complete elliptic integral of the first kind. In the case that sign$(\\mu1\\cdot\\mu)<0$ we have to take instead of (\\ref{ellintM}) the form\n\\begin{equation}\n \\int^{\\mu}_{\\mu_1}\\frac{1}{\\sqrt{M(\\tilde{\\mu})}}\\mathrm{d} \\tilde{\\mu} = c_1(2\\mathcal{K}(m)-\\mathcal{F}(\\Psi;m)),\\label{ellintM1}\n\\end{equation}\nwhere $\\Psi$, $m$ and $c_1$ are taken from table \\ref{tableEIM}.\n We consider two basic possibilities of trajectories, namely those corresponding to direct and indirect images (Figures \\ref{fig19} and \\ref{fig20}).\n\n\\subsection{Disc images}\n\n\tIt is very important to demostrate the influence of the braneworld parameter on the shape of images of rings in the equatorial plane representing parts of Keplerian accretion discs. Of course, as well known from the Kerr (and even Schwarzchild) black holes, the images strongly depend on the latitude of the observer. We calculate the direct and indirect images of flat discs and combined, full image of the disc for two representative values of viewing angle $\\theta_0$ and appropriatelly chosen extension of radiating disc area.\n\n\n\tWe include the effect of frequency shift into the calculated images of part of the Keplerian discs assumed to be radiating at a given fixed frequency. The frequency shift $g$ is determined by the ratio of observed ($E_0$) to emitted ($E_e$) photon energy\n\\begin{equation}\n g=\\frac{E_0}{E_e}=\\frac{k_{0\\mu} u_0^\\mu}{k_{e\\mu} u_e^\\mu},\n\\end{equation}\nwhere $u_0^\\mu$($u_e^\\mu$) are components of the observer (emitter) 4-velocity and $k_{0\\mu}(k_{e\\mu})$ are components of the photon 4-momentum taken at the moment of emission (observation). For distant observers $u^\\mu_0=(1,0,0,0)$. The emitter follows an equatorial circular geodesics at $r=r_e$, $\\theta_e=\\pi\/2$. 
Therefore, $u_e^\\mu=(u^t_e,0,0,u_e^\\varphi)$, with components given by\n\n\\begin{eqnarray}\n u_e^t&=&\\left[1-\\frac{2}{r_e}(1-a\\Omega)^2-(r_e^2+a^2)\\Omega^2+\\frac{b}{r_e^2}(1-2a\\Omega)\\right]^{-1\/2},\\\\ u_e^\\varphi&=&\\Omega u_e^t,\n\\end{eqnarray}\nwhere $\\Omega=\\mathrm{d}\\varphi\/\\mathrm{d} t$ is the Keplerian angular velocity of the emitter related to distant observers, given by equation (\\ref{ang_vel_gf}).\n\nThe frequency shift including all relativistic effects is then given by\n\n\\begin{equation}\n g=\\frac{\\left[1-\\frac{2}{r_e}(1-a\\Omega)^2-(r_e^2+a^2)\\Omega^2+\\frac{b}{r_e^2}(1-2a\\Omega)\\right]^{1\/2}}{1-\\lambda\\Omega}\n\\end{equation}\nwhere $\\lambda\\equiv-k_\\varphi\/k_t$ is the impact parameter of the photon\nbeing a motion constant for an individual photon radiated at a specific\nposition of the radiating disc; notice that $g$ is independent of the second\nphoton motion constant (impact parameter) $q$. Of course, depending on the\nposition of the emitter along the circular orbit, the impact parameters\n$\\lambda$, $q$ of photons reaching a fixed distant observer will vary\nperiodically (see eg., \\cite{Bao-Stu:1992:}). For each position of the emitter\nthe impact parameters are determined by the procedure of integration of photon\ntrajectories. \n\nThe influence of the frequency shift in the disc images is demonstrated in Figures \\ref{fig24_a_i} and \\ref{fig26_a_i}. The role of the braneworld parameter is illustrated both for small ($\\theta_0=30^\\circ$) and high ($\\theta_0=80^\\circ$) inclination angles. We consider two cases of the radiating disc extension: first one with fixed inner and outer radii, independent of the black hole parameters, and the second one when the inner radius is identified with the marginally stable orbits, depending on the black hole parameters. \n\n\n\n\n\\begin{figure}[ht]\n\n \\begin{tabular}{ccc}\n \\includegraphics[width=3.6cm]{fig23d}\n& \\includegraphics[width=3.6cm]{fig23e}\n&\\includegraphics[width=3.6cm]{fig23f}\\\\\n \\includegraphics[width=3.6cm]{fig23g}\n& \\includegraphics[width=3.6cm]{fig23h}\n&\\includegraphics[width=3.6cm]{fig23i}\\\\\n \\includegraphics[width=3.6cm]{fig24d}\n& \\includegraphics[width=3.6cm]{fig24e}\n&\\includegraphics[width=3.6cm]{fig24f}\\\\\n \\includegraphics[width=3.6cm]{fig24g}\n& \\includegraphics[width=3.6cm]{fig24h}\n&\\includegraphics[width=3.6cm]{fig24i}\n \\end{tabular}\n\n\\caption{\\label{fig24_a_i}Radiating Keplerian disc images with fixed inner and outer radii. The modified frequency shift $\\bar{g}=(g-g_{min})\/(g_{max}-g_{min})$, with $g_{min}=0.4$ and $g_{max}=1.5$, of the radiation emitted from the thin disk with inner radius $r_{in}=7 M$ and outer radius $r_{out}=15 M$, encoded into colors is plotted for representative values of tidal charge parameter $b=-3.0$, $0.0$ and inclination of observer $\\theta_0=30^\\circ$, $80^\\circ$. In the left column direct images are ploted, the indirect images are ploted in the central column and the composition of direct and indirect images is plotted in the right column. The first two rows of images are plotted for the observer inclination $\\theta_0=30^\\circ$ and the second two rows of images are plotted for the observer inclination $\\theta_0=80^\\circ$. 
Top row images are plotted for $b=0.0$, the second row images are plotted for $b=-3.0$, the third row images are plotted for $b=0.0$ and bottom row images are plotted for $b=-3.0$.}\n\\end{figure}\n\nIn order to map the frequency shift $g$ into color palete we define modified frequency shift $\\bar{g}=(g-g_{min})\/(g_{max}-g_{min})$ where $g_{min}$ ($g_{max}$) is the minimal (maximal) value of frequency shift, which is fixed in a particular set of images.\n\nWe can see from Figs. \\ref{fig24_a_i} and \\ref{fig26_a_i} that the negative tidal charge has the tendency to enlarge and symmetrize the disc images.\n\n\n\\begin{figure}[ht]\n\\begin{tabular}{ccc}\n \\includegraphics[width=3.6cm]{fig25a}\n& \\includegraphics[width=3.6cm]{fig25b}\n&\\includegraphics[width=3.6cm]{fig25c}\\\\\n \\includegraphics[width=3.6cm]{fig25d}\n& \\includegraphics[width=3.6cm]{fig25e}\n&\\includegraphics[width=3.6cm]{fig25f}\\\\\n \\includegraphics[width=3.6cm]{fig26a}\n& \\includegraphics[width=3.6cm]{fig26b}\n& \\includegraphics[width=3.6cm]{fig26c}\\\\\n \\includegraphics[width=3.6cm]{fig26d}\n& \\includegraphics[width=3.6cm]{fig26e}\n& \\includegraphics[width=3.6cm]{fig26f}\n \\end{tabular}\n\n\n\\caption{\\label{fig26_a_i} Radiating Keplerian disc images with $r_{in}=r_{ms}$. The modified frequency shift $\\bar{g}=(g-g_{min})\/(g_{max}-g_{min})$, with $g_{min}=0.2$ and $g_{max}=1.8$, of the radiation emitted from the thin disk with inner radius $r_{in}=r_{ms}$ (with $r_{ms}(b=0;a=0.9981)=1.3$ and $r_{ms}(b=-3;a=0.9981)=6.3$) and outer radius $r_{out}=10$, encoded into colors is plotted for representative values of tidal charge parameter $b=-3.0$, $0.0$ and inclination of observer $\\theta_0=30^\\circ$, $80^\\circ$. In the left column direct images are ploted, the indirect images are ploted in the central column and the composition of direct and indirect images is plotted in the right column. The first two rows of images are plotted for the observer inclination $\\theta_0=30^\\circ$ and the second two rows of images are plotted for the observer inclination $\\theta_0=80^\\circ$. Top row images are plotted for $b=0.0$, the second row images are plotted for $b=-3.0$, the third row images are plotted for $b=0.0$ and bottom row images are plotted for $b=-3.0$. \n}\n\\end{figure}\n\n\n\n\\clearpage\n\\newpage\n \n\\section{Time delay}\n\nFor optical effects in vicinity of a black hole, the time delay in case of systems varying with time and observed along two different directions due to the light deflection in strong gravity can be important. The cordinate time that elapses from the instant of photon emission, $t_e$, to the instant of its reception, $t_o$, is integrated from the Carter equations and reads\n\n\\begin{eqnarray}\n t_o&=&t_e+\\mu_{sgn}\\int_{\\mu_e}^{\\mu_o}{a^2\\mu^2\\frac{\\mathrm{d}\\mu}{\\sqrt{M}}}\\nonumber\\\\\n\t&&+u_{sgn}\\int^{u_o}_{u_e}{\\frac{2a(a-\\lambda)u^3+a^2u^2+1+ab(\\lambda-a)u^4}{(u\/u_{+}-1)(u\/u_{-}-1)\\sqrt{U}}}\\mathrm{d} u\n\\end{eqnarray}\nIn order to succesfully integrate this formula, one must map all the turning points in $\\mu$ and $u$ motion to correctly set up the signs $u_{sgn}$ and $\\mu_{sgn}$.\n\nSuppose that the two light beams, direct and indirect, are emitted at the same coordinate time $t_e$. They generally reach the observer at different coordinate times $t_o^{\\mathrm{dir}}$($t_o^{\\mathrm{indir}}$ resp.). 
By time delay we define here the difference $\\Delta t\\equiv t_o^{\\mathrm{indir}}-t_o^{\\mathrm{dir}}$.\n\n\\begin{figure}[!ht]\n \\includegraphics[width=10cm]{fig27}\n\\caption{\\label{fig27}The illustration of the impact of tidal charge parameter on the time delay $\\Delta t$ in case of direct and indirect photons emitted from emitter $E$ at coordinate time $t_e$ and azimuthal position $\\varphi_e=\\pi$. They are received at observer $O$ at coordinate times $t_o^{\\mathrm{dir}}$($t_o^{\\mathrm{indir}}$ resp.). The emittor is on circular geodesic in equatorial plane of braneworld Kerr black hole at radial coordinate $r=r_e$. The observer is far from the center of the black hole at $r=r_o$. Its inclination is $\\theta=\\theta_o$.}\n\\end{figure}\n\n\nTo demonstrate the impact of the tidal charge $b$ on the time delay we\nconsider the following situation (see Figure \\ref{fig27}). Let the isotropicaly radiating monochromatic source orbits in the equatorial plane of the braneworld Kerr black hole at radial distance $r_e$. It can be switched on and off. When it reaches the azimuthal coordinate $\\varphi=\\pi$ it is switched on and we compare the coordinate times $t^{\\mathrm{dir}}_o$ and $t^{\\mathrm{indir}}_o$ of reception of the photons from the direct and indirect images of the source. \n\n\\begin{figure}[!ht]\n \\begin{tabular}{cc}\n \\includegraphics[width=6.2cm]{fig28a}&\\includegraphics[width=6.2cm]{fig28b}\n \\end{tabular}\n \n\\caption{\\label{fig28_a_b}The difference (``Time Delay``), $\\Delta t = t^{\\mathrm{indir}}_o -t^{\\mathrm{dir}}_o$, between coordinate times of reception of direct and indirect geodesics of photons emmited at the same coordinate time $t_e$ from the azimuthal coordinate $\\varphi=\\pi$ is plotted as a function of tidal charge $b$. Left figure: the inclination of the observer is $\\theta_0=20^\\circ$. Right figure: the inclination of the observer is $\\theta_0=80^\\circ$. }\n\n\\end{figure}\n\nThe results are demonstrated in the Figure \\ref{fig28_a_b}. We can directly see that time delay $\\Delta t$ between times of reception of the direct and indirect photons emitted at the same instant from the azimuthal position $\\varphi=\\pi$ increases as the value of the tidal charge parameter $b$ goes to higher negative values. When $b$ is fixed, the time delay $\\Delta t$ increases as the value of the inclination decreases. The same effects appear for other positions of the radiating spot ($\\varphi\\not= \\pi$). We can see that the time delay $\\Delta t$ depends strongly on the viewing angle $\\theta_0$. Therefore, it is extremely important to have a system with precisely determined viewing angle.\n\n\\section{Optical phenomena related to Sgr $A^*$}\nThere is an enormously growing evidence that the center of our Galaxy harbors a supermassive black hole whose position could be almost surely identified with the extremely compact radio source Sgr $A^*$. The chain of arguments seems to be very convincing; stars orbiting an unseen mass concentration on elliptical orbits with a common focal position, the unseen mass centered on Sgr $A^*$ that seems to be motionless at the dynamical center of the Galaxy, extremely compact emission of the center \\cite{Reid:2008:}. Recent measurements of Ghez and collaborators \\cite{Ghez-etal:2008:} from the W.M. Keck 10 - meter telescopes of a fully unconstrained Keplerian orbit of the short period star SO-2 provide the distance $R_0=8.0\\pm 0.6$ kpc and black hole mass $M=(4.1\\pm0.6)\\times 10^6 M_\\odot$. 
If the black hole is assumed to be at rest with respect to the Milky Way Galaxy (i.e., has no massive companion to induce its motion), as argued by Reid \cite{Reid:2008:}, the fit can be further constrained to $R_0=8.4\pm 0.4$ kpc and $M=(4.5\pm 0.4)\times 10^6M_\odot$ \cite{Ghez-etal:2008:}.

Such a close and huge supermassive black hole is clearly a very convenient object, probably the best one, for testing a wide variety of optical phenomena in strong gravity in its vicinity. The time delay of events happening behind the black hole and observed along two different directions could in principle be easily measured. We can even expect the possibility of black hole silhuette measurements. In this way the influence of the tidal charge could be properly tested and its value estimated, because for the Galaxy supermassive black hole we can determine relatively precisely the inclination angle of the observer (Solar system), although it is of course very close to $\theta_0\simeq 90^\circ$.

For non-rotating, Schwarzschild black holes, the silhuette diameter is given by the impact parameter of the photon circular orbit

\begin{equation}
	D=2\lambda_{ph}= 6\sqrt{3} M.
\end{equation}
Using the Sgr $A^*$ mass estimate $M\sim 4.5\times 10^6M_\odot$, we find $D\simeq 55\mu$arcsec, while interferometer fringes were reported at the wavelength of $1.3$ mm with a fringe spacing of $0.00005$ arcsec, comparable with the expected value of $D$. Shorter wavelengths should enable detailed measurements of the black hole silhuette and relatively precise estimates of the black hole parameters due to very precise knowledge of the inclination angle. The angle can be given by the measurement of the Solar system position relative to the Galaxy plane, $z_\odot\sim 14\,$pc \cite{Yoshi:2007:}. Then $\theta_0\sim 89.9^\circ$ or, more precisely, $\theta_0$ lies between the values of $89.8772^\circ$ ($z_\odot=18\,$pc) and $89.9318^\circ$ ($z_\odot=10\,$pc). Of course, considering the silhuette shape, it is quite enough to take $\theta_0=90^\circ$.

\begin{figure}[h]
	\begin{center}
	\includegraphics[width=6.1cm]{diameter_of_silhuette_schw}
	\end{center}
	\caption{Diameter $D$ as a function of the braneworld parameter $b$ is plotted for a Schwarzschild black hole of mass $M=4.5\times 10^6M_\odot$. The observer is at $r_0=8.4$ kpc, lying in the equatorial plane.}\label{diameter_of_silhuette_schw}
\end{figure}

In the case of spherically symmetric black holes, the influence of the tidal charge parameter $b$ on the silhuette diameter can be given by the simple formula for the impact parameter of the photon circular orbits that reads \cite{Stu-Hle:2002:}

\begin{equation}
	\lambda_{ph}(b)=\frac{r_{ph}^2}{\sqrt{r_{ph}-b}}M,
\end{equation}
where

\begin{equation}
	r_{ph}(b)=\frac{3}{2}\left(1+\sqrt{1-\frac{8b}{9}}\right).
\end{equation}
The resulting dependence of the diameter $D(b)$ is illustrated in Figure \ref{diameter_of_silhuette_schw}. The diameter grows slowly with the descending of $b$; notice that its magnitude is twice the pure Schwarzschild value for $b=-12.8428$. Of course, for rotating black holes the silhuette is maximally deformed due to the influence of rotation, since the viewing angle is $\theta_0\sim 90^\circ$, and it is given by the calculations and results presented above. Testing of the combined spin and tidal charge influence would be possible with the measurement precision improved by one order of magnitude relative to the currently expected state mentioned above.
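The dependence $D(b)$ can be evaluated directly from the last two formulas. The following minimal Python sketch (our own illustration rather than the code used for the figures; the solar gravitational radius and the kiloparsec conversion are standard constants) gives the angular diameter in microarcseconds for the Sgr $A^*$ parameters adopted above:

\begin{verbatim}
# Minimal sketch: angular diameter D(b) of the silhuette of a non-rotating
# braneworld black hole, using the formulas for r_ph(b) and lambda_ph(b).
# Assumed parameters: M = 4.5e6 solar masses, observer at r_0 = 8.4 kpc.
from math import sqrt, pi

GM_SUN_OVER_C2_KM = 1.4766   # gravitational radius of the Sun [km]
KPC_IN_KM = 3.0857e16        # one kiloparsec [km]
RAD_TO_MUAS = 180.0 / pi * 3600.0 * 1.0e6

def diameter_muas(b, mass_msun=4.5e6, r0_kpc=8.4):
    r_ph = 1.5 * (1.0 + sqrt(1.0 - 8.0 * b / 9.0))  # photon circular orbit [M]
    lam_ph = r_ph ** 2 / sqrt(r_ph - b)             # its impact parameter [M]
    m_in_km = mass_msun * GM_SUN_OVER_C2_KM         # black hole mass scale [km]
    return 2.0 * lam_ph * m_in_km / (r0_kpc * KPC_IN_KM) * RAD_TO_MUAS

for b in (0.4, 0.0, -1.0, -3.0):
    print(b, round(diameter_muas(b), 1))  # roughly 51, 55, 63 and 74 muas
\end{verbatim}

For $b=0$ the sketch reproduces the value $D\simeq 55\,\mu$arcsec quoted above, while for, e.g., $b=-3$ the formulas give a diameter of roughly $74\,\mu$arcsec, illustrating the slow growth of $D$ with descending $b$.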
Clearly, we can expect that the observational accuracy in the near future will be high enough to measure the Sgr $A^*$ black hole silhouette, implying relevant estimates of the black hole parameters.\n\n\\begin{figure}[ht]\n\t\\begin{center}\n\t\t\\includegraphics[width=6.1cm]{td_tab4}\n\t\\end{center}\n\t\\caption{\\label{td_table4}Comparison of the time delay effect as a function of the braneworld parameter $b$ between two rotating black holes with rotational parameters $a=0.5$ and $a=0.998$. For each $b$ the emitter is radiating from the marginally stable orbit. The relevant values of the radii $r_{ms}$ of the marginally stable orbits are arranged in Table \\ref{tabulka2}. }\n\\end{figure}\n\n\\begin{table}[ht]\n\t\t\\tbl{Table of relevant values of $r_{ms}$ used in the plots in Fig. \\ref{td_table4}}\n\t\t{\\begin{tabular}{@{}ccccccc@{}} \n\t\t\\toprule\n\t\t$b$ & 0.0 & -0.5 & -1.0 & -2.0 & -3.0 & -10.0\\\\ \n\t\t\\colrule\n\t\t$r_{ms}(a=0.5)$ & 4.24M & 5.05M & 5.73M & 6.88M & 7.85M & 12.88M\\\\\n\t\t\\colrule\n\t\t$r_{ms}(a=0.998)$ & 1.24M & 3.03M & 3.91M & 5.22M & 6.28M & 11.44M\\\\\n\t\t\\botrule\n\t\t\\end{tabular}\\label{tabulka2}}\n\\end{table}\n\nConsidering the time delay effects, the exact value of $\\theta_0$ is crucial since it plays a fundamental role in determining the time delay effect, whose scale is given by the value of $t\\sim 1\\,$s. We illustrate the influence of the tidal charge on the time delay effects at the astrophysically important radii corresponding to marginally stable circular geodesics, i.e. in the strong gravity regime, for two representative fixed values of the black hole spin (see Figure \\ref{td_table4} and Table \\ref{tabulka2}). We can expect the regions close to $r_{ms}$ to be important for the relevant optical effects due to the idea of low angular momentum accretion in Sgr $A^*$ advocated by B. Czerny \\cite{Cze-etal:2007:}. Clearly, we can see in Figure \\ref{td_table4} that the time delay effects could be well measurable and the tidal charge influence could be well tested, if the black hole spin is properly estimated.\n\n\\section{\\label{sec:Conclusions}Conclusions}\nOne of the most promising ways of estimating the influence of hypothetical hidden external dimensions, considered in the framework of the braneworld model with an infinite external dimension as developed in \\cite{Ran-Sun:1999:}, seems to be the investigation of the optical phenomena caused by black hole backgrounds. It is so because black holes represent the only case when the non-local influence of the bulk space on the braneworld spacetime structure can be fully described by a single braneworld parameter called the tidal charge, the sign of which can be both positive and negative, with the second possibility being the more realistic one \\cite{Ali-Gum:2005:,Dad-etal:2000:}.\n\nHere, we focused our attention on developing a theoretical background for treating the optical phenomena in the vicinity of braneworld rotating black holes and on presenting the general tendencies of the tidal charge effect in some basic optical phenomena. \n\nWe have shown qualitatively how the braneworld tidal charge affects the basic optical phenomena, especially the black-hole silhouette, the accretion disc image with the frequency shift of the area of the disc radiating at a specific frequency, and the time delay between the direct and indirect images of the hot spot orbiting the black hole. 
We have shown that these phenomena could be measured and used to put limits on the tidal charge in the case of the Galactic Center Sgr $A^*$ supermassive black hole.\n\nWe generalized the approaches based on the transfer-function method as introduced and developed in the Schwarzschild and Kerr backgrounds \\cite{fab-rees-ste-whi:1989:,Mat-Fab-Ros:1993:,Bao-Stu:1992:,Stu-Bao:1992:,Laor:1991:,Dov-Karas-Mas-Mar:2005:,Fan-Cal-Fel-Cad:1997:,Rau-Bla:1994:}, where the equations of photon motion are solved in terms of elliptic integrals (see \\cite{Rau-Bla:1994:,Kra:2005:,Kra:2007:}). For the purposes of the present work, the transfer-function method seems to be the most efficient. Nevertheless, we prepared the ray-tracing method too, since that could be useful in treating other optical phenomena.\n\n Generally, a rising negative value of the tidal charge strengthens the black hole field and suppresses the rotational phenomena, when the black-hole rotation parameter is fixed. The magnitude of the optical phenomena grows with decreasing negatively-valued tidal charge, but the rotation-induced asymmetry of phenomena like the black-hole silhouette, or the accretion disc image, decreases. The black-hole silhouette is characterized by two parameters, namely the shift of the center and the ellipticity, which could be in principle measurable in the Galactic Center black-hole system Sgr $A^*$, after the expected development of observational techniques that at present enable measurement of the black hole diameter, but not details of the shape. \nThe Galaxy center (Sgr $A^*$) also seems to be a promising candidate for testing the time delay effects, both for phenomena related to the accretion disc and flares observed there, and for some expected lensing phenomena connected to the observed stars orbiting the Sgr $A^*$ central black hole. \n\nWe have found that observable phenomena could be expected for the time-delay effects. Of special interest is the comparison of time delays generated for sources in the vicinity of the Sgr $A^*$ black hole (both stars and disc hot spots) and those related to weak lensing of some distant sources \\cite{Zak:2003:,Sereno:2006}. \n\nSimilarly, keeping the rotational parameter fixed, the negative tidal charge has a tendency to make the isoradial curve images (both direct and indirect) larger and less deformed, while the influence of the positive tidal charge is of the opposite character. On the other hand, for a fixed rotational parameter of the black hole and a disc radiating in the innermost part above the innermost stable orbit at $r=r_{ms}$, the negative tidal charge restricts the radiating ring image simply because the radius $r_{ms}$ grows with decreasing value of the braneworld parameter $b$. Suppression of the relativistic effects can also be measurable in the spectral line profiles generated by the inner hot part of the disc radiating at a special X-ray line \\cite{SS:b:RAGTime:2007:Proceedings}. \n\nThe optical tests have to be confronted with the data obtained from quasiperiodic oscillations observed in some black-hole systems (microquasars \\cite{Rem-McCli:2006:ARASTRA:}). The orbital resonance model gives good estimates of the black-hole parameters \\cite{Tor-Abr-Klu-Stu:2005:,Tor:2005a:,Tor:2005b:}; this model has been recently generalized to the case of braneworld Kerr black holes \\cite{Stu-Kot:2008}. It is shown that, in the case of the microquasar GRS 1915+105 and the Galactic Center Sgr $A^*$, black holes with a negative braneworld parameter $b$ are allowed by the observational data \\cite{Stu-Kot:2008}. 
Detailed modelling of optical phenomena connected to the oscillating discs or orbiting (oscillating) hot spots, and of the related resonant phenomena between the oscillation modes, could be very promising in putting limits on the allowed values of the tidal charge of the black hole. We plan to elaborate such modelling in the future. \n\n\n\\section*{Acknowledgements}\nResearch supported by the Czech grant MSM 4781305903 and LC 06014. One of the authors (Zden\\v{e}k Stuchl\\'{i}k) would like to express his gratitude to the Czech Committee for Collaboration with CERN for support.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn Nielsen theory, the purpose is to find a homotopy-invariant lower bound for the number of fixed points of a map. Similarly, Nielsen periodic point theory tries to find a homotopy-invariant lower bound for the number of periodic points. Both of these theories work for all continuous maps on connected, compact absolute neighborhood retracts, but in this paper we will mainly focus on manifolds.\n\n\\medskip\n\nSo, let $f:X\\to X$ be a continuous map on a manifold. We call $x\\in X$ a periodic point if $f^n(x)=x$ for a certain integer $n>0$. The smallest $n$ for which this holds will be called the period of $x$. In Nielsen theory, we partition the fixed point set into fixed point classes and subsequently count the number of fixed point classes that cannot disappear by using a homotopy. The resulting number is the Nielsen number $N(f)$. We can do a similar thing for $f^n$ and try to approximate the number of $n$-periodic points of $f$ by the number $N(f^n)$. However, this lower bound is not sharp in general. By studying the relations between fixed point classes for different iterates of $f$ more closely, it is possible to define a better lower bound, namely the full Nielsen-Jiang periodic number $NF_n(f)$. This number has already been studied extensively for maps on tori and nilmanifolds and in this paper we will extend some of these results to infra-nilmanifolds.\n\n\\medskip\n\nIn the first two sections, we will give an introduction to the theory of infra-nilmanifolds and to Nielsen periodic point theory. Subsequently, we will prove that infra-nilmanifolds are essentially reducible to the GCD (Theorem \\ref{thmessredgcd}) and essentially toral (Theorem \\ref{thmesstor}). These structural properties also hold for maps on nilmanifolds and tori and are a necessary first step in order to be able to compute $NF_n(f)$ on infra-nilmanifolds. We will also prove that for all $n$, $NF_n(f)=N(f^n)$ for a large class of maps on infra-nilmanifolds, namely the class of semi-hyperbolic maps (Theorem \\ref{thmNF=Nf)}).\n\n\\medskip\n\nBecause maps on tori and nilmanifolds are weakly Jiang, the computation of $NF_n(f)$ turns out to be very doable on these manifolds. On infra-nilmanifolds, however, this is not the case. In the penultimate section, we will therefore develop a method that makes the computation of $NF_n(f)$ easier. Sometimes, though, the computation can still be quite hard, but this might be inherent to the problem: by applying our method to several examples, it becomes apparent that the expression for $NF_n(f)$ can be very complex.\n\n\\medskip\n\nIn the last section, we will look specifically at affine maps on infra-nilmanifolds. In general, these maps behave better than arbitrary continuous maps. 
In Theorem \\ref{theorem uiteindelijk boost inessentieel}, we prove that, under very mild conditions, these affine maps can only be Wecken (which means that $\\#\\Fix(f)=N(f)$) at every level if and only if they are semi-hyperbolic. This allows us to determine exactly for which maps $NF_n(f)=N(f^n)$, for all $n$ (Corollary \\ref{cor NF=Nf}). \n\\section{Infra-nilmanifolds}\nLet $G$ be a connected, simply connected, nilpotent Lie group. The group of affine transformations on $G$, $\\Aff(G)= G\\semi \\Aut(G)$, admits a natural left action on $G$:\n\\[ \\forall (g,\\alpha)\\in \\Aff(G),\\, \\forall h \\in G: \\;\\;^{(g,\\alpha)}h= g \\alpha(h).\\]Define $p:\\Aff(G)=G\\semi \\Aut(G) \\to \\Aut(G)$ as the natural projection onto the second factor of the semi-direct product. \n\n\\begin{definition} A subgroup $\\Gamma \\subseteq \\Aff(G)$ is called \\textbf{almost-crystallographic} if and only if $p(\\Gamma)$ is finite and $\\Gamma\\cap G$ is a uniform and discrete subgroup of $G$. The finite group $F=p(\\Gamma)$ is called the holonomy group of $\\Gamma$.\n\\end{definition}\n\nWith these properties, the natural action of such a group $\\Gamma$ on $G$ becomes properly discontinuous and cocompact. Moreover, when $\\Gamma$ is torsion-free, this action is free, which makes the resulting quotient space $\\Gamma\\backslash G$ a compact manifold. This idea leads to the following definition. \n\n\\begin{definition}\nA torsion-free almost-crystallographic group $\\Gamma\\subseteq \\Aff(G) $ is called an \\textbf{almost-Bieberbach group}, and the corresponding manifold $\\Gamma\\backslash G$ is called an \\textbf{infra-nilmanifold} (modeled on $G$). \n\\end{definition} \n\nWhen the holonomy group is trivial, $\\Gamma$ can be considered to be a lattice in $G$ and the corresponding manifold $\\Gamma\\backslash G$ is a nilmanifold. When $G$ is abelian, i.e. $G$ is isomorphic to $\\R^n$, $\\Gamma$ will be called a Bieberbach group and $\\Gamma\\backslash G$ a compact flat manifold. When $G$ is abelian and the holonomy group of $\\Gamma$ is trivial, then $\\Gamma\\backslash G$ is a torus. Hence, infra-nilmanifolds are a natural generalization of nilmanifolds and tori.\n\n\\medskip\n\nNow, define the semigroup $\\aff(G)=G\\semi \\Endo(G)$. Note that $\\aff(G)$ acts on $G$ in a similar way as $\\Aff(G)$:\\[ (d,D): \\; G \\rightarrow G:\\; h \\mapsto d D(h).\\]The elements of this semigroup will be called affine maps, since $\\aff(G)$ is merely a generalization of the semigroup of affine maps $\\aff(\\R^n)$ to the nilpotent case. One of the main advantages of working with infra-nilmanifolds, is the fact that every continuous map lies in the same homotopy class as a map induced by such an affine map with similar properties. These maps are often easier to handle and are therefore ideal to use in proving several theorems. This strategy will be often used throughout this paper, for example in the last section of this paper.\n\n\\begin{theorem}[K.B.\\ Lee \\cite{lee95-2}]\n\\label{leemaps} Let $G$ be a connected and simply connected nilpotent Lie group and suppose that $\\Gamma, \\Gamma'\\subseteq \\Aff(G)$ are two almost-crystallographic groups modeled on $G$. 
\nThen for any homomorphism $\\varphi: \\Gamma\\rightarrow \\Gamma'$ there \nexists an element $ (d, D)\\in \\aff(G)$ such that \n\\[ \\forall \\gamma \\in \\Gamma: \\; \\varphi(\\gamma) (d,D) = (d,D) \\gamma.\\] \n\\end{theorem}\n\nWe can consider the equality $ \\varphi(\\gamma) (d,D) = (d,D) \\gamma$ in $\\aff(G)$, since $\\Aff(G)$ is a subgroup of $\\aff(G)$. With this equality in mind, when $\\Gamma$ and $\\Gamma'$ are torsion-free, it is easy to see that the affine map $(d,D)$ induces a well-defined map between infra-nilmanifolds:\\[\\overline{(d,D)}: \\Gamma \\backslash G \\rightarrow \\Gamma' \\backslash G: \\; \\Gamma h \\rightarrow \\Gamma' d D(h),\\]\nwhich exactly induces the morphism $\\varphi$ on the level of the fundamental groups.\n\n\\medskip\n\nOn the other hand, if we choose an arbitrary map $f:\\Gamma\\backslash G\\ra \\Gamma'\\backslash G$ between two infra-nilmanifolds and choose a lifting $\\tilde{f}:G \\to G$ of $f$, then there exists a morphism $\\tilde{f}_\\ast:\\Gamma\\to \\Gamma'$ such that $\\tilde{f}_\\ast(\\gamma) \\circ \\tilde{f} = \\tilde{f}\\circ \\gamma$, for all $\\gamma\\in \\Gamma$. By Theorem~\\ref{leemaps}, an affine map $(d,D)\\in \\aff(G)$ exists which also satisfies $\\tilde{f}_\\ast(\\gamma) \\circ (d,D)= (d,D)\\circ \\gamma$ for all $\\gamma\\in \\Gamma$. Therefore, the induced map $\\overline{(d,D)} $ and $f$ are homotopic. Hence, whenever we are studying homotopy-invariant properties for maps on infra-nilmanifolds, we are free to replace an arbitrary map $f$ by its affine counterpart.\n\n\\medskip\n\nThe map $(d,D)$ will be called an \\textbf{affine homotopy lift} of $f$, while we will denote the map $\\overline{(d,D)}$ as an \\textbf{affine map on an infra-nilmanifold}.\n\n\\medskip\n\nIt might be noteworthy to mention that $(d,D)$ is not unique in the sense that it depends on the choice of lifting $\\tilde{f}:G\\to G$. For example, from \\cite{lee95-2} we know that $D$ is only determined up to an inner automorphism of $G$. \n\n\\medskip \nIn \\cite{ll09-1}, J.B. Lee and K.B. Lee gave a formula to compute Lefschetz and Nielsen numbers on infra-nilmanifolds. Pick an infra-nilmanifold $\\Gamma\\backslash G$, determined by the almost-Bieberbach group $\\Gamma\\subseteq \\Aff(G)$ and let $F\\subseteq \\Aut(G)$ denote the holonomy group of $\\Gamma$. We will write $\\lie$ for the Lie algebra of $G$. Because $G$ is a nilpotent, connected and simply connected Lie group, the map $\\exp:\\lie\\to G$ will be a diffeomorphism. Therefore, $\\Endo(G)$ and $\\Endo(\\lie)$ are isomorphic and for every endomorphism $A\\in \\Endo(G)$, we have a unique $A_\\ast\\in \\Endo(\\lie)$, which is determined by the relation $A \\circ \\exp= \\exp \\circ A_\\ast$. This $A_\\ast$ will be called the differential of $A$. Of course, $A$ is invertible if and only if $A_\\ast$ is invertible.\n\\begin{theorem}[J.B.\\ Lee and K.B.\\ Lee \\cite{ll09-1}] \\label{LeeForm}Let $\\Gamma\\subseteq \\Aff(G)$ be an almost-Bieberbach group with holonomy group $F\\subseteq \\Aut(G)$. Let $M=\\Gamma\\backslash G$ be the associated infra-nilmanifold. If \n $f:M\\ra M$ is a map with affine homotopy lift $(d,D)$, then \n\\[L(f)=\\frac{1}{\\# F}\\sum_{A \\in F}\\det(I-A_\\ast D_\\ast)\\]\nand\n\\[N(f)=\\frac{1}{\\# F}\\sum_{A \\in F}|\\det(I-A_\\ast D_\\ast)|.\\]\n\\end{theorem}\n\nWe will now list a couple of properties that we will need in this paper.\n\n\\medskip\n\nThe following lemma can be found in \\cite{ddm05-1}. 
We have adapted the formulation very slightly, but in essence it is the same lemma and it can be proved in a similar way.\n\n\\begin{lemma}\\label{Lemma Bram}\nSuppose that $F\\subset \\GL_n(\\C)$ is a finite group, $D\\in \\C^{n\\times n}$ and for all $A\\in F$, there exists a $B\\in F$, such that $DA=BD$. Take an arbitrary element $A_1\\in F$ and build the sequence $(A_j)_{j\\in \\N_0}$, such that $DA_i=A_{i+1}D$, for all $i$. Then,\n\\begin{enumerate}\n\\item $\\forall j\\in \\N_0: \\det(I-A_1D)=\\det(I-A_jD).$\n\\item $\\exists l, j\\in \\N_0: (A_jD)^l=D^l.$\n\\end{enumerate}\n\\end{lemma}\n\nIn \\cite{dp11-1}, we can find the following theorems.\n\n\\begin{theorem}\\label{thmRInf}\nLet $\\Gamma\\subseteq \\Aff(G)$ be an almost-Bieberbach group with holonomy group $F\\subseteq \\Aut(G)$. Let $M=\\Gamma\\backslash G$ be the associated infra-nilmanifold. If \n $f:M\\to M$ is a map with affine homotopy lift $(d,D)$, then\n\\[\n R(f)=\\infty \\iff \\exists A \\in F \\text{ such that } \\det(I - A_\\ast D_\\ast)=0.\n\\]\n\\end{theorem}\n\n\\begin{theorem}\\label{thmN=R}\nLet $f$ be a map on an infra-nilmanifold, such that $R(f)<\\infty$, then $N(f)=R(f)$.\n\\end{theorem}\n\nWe will also mention the following definition.\n\n\\begin{definition}\nLet $M$ be an infra-nilmanifold and $f:M\\to M$ be a continuous map, with $(d,D)$ as an affine homotopy lift. We say that $f$ is a \\textbf{hyperbolic} map if $D_\\ast$ has no eigenvalues of modulus $1$. We say that $f$ is \\textbf{semi-hyperbolic} if $D_\\ast$ has no eigenvalues which are roots of unity.\n\\end{definition}\n\nThis class of (semi-)hyperbolic maps contains for example the class of expanding maps and the class of Anosov diffeomorphisms.\n\n\\section{Nielsen periodic point theory}\n\nIn this section, we will mostly follow the outline of \\cite{jm06-1}. Many of the difficult aspects of Nielsen periodic point theory will disappear when working on infra-nilmanifolds. Therefore, we will try to present all the necessary results in a swift way and skip most of the proofs and unnecessary details. More information about Nielsen periodic point theory in general can be found in \\cite{hk97-1},\\cite{heat99-1},\\cite{jm06-1} or \\cite{jian83-1}.\n\n\\medskip\n\nWhen $f^n(x)=x$, we call $x$ a periodic point. If $n$ is the smallest integer for which this holds, $x$ is a periodic point of pure period $n$. We can apply similar techniques as in Nielsen fixed point theory to achieve Nielsen periodic point theory. Just like Nielsen fixed point theory divides $\\Fix(f)$ into different fixed point classes, Nielsen periodic point theory divides $\\Fix(f^n)$ into different fixed point classes, for all $n>0$ and looks for relations between fixed point classes on different levels. This idea is covered in the following definition.\n\n\\begin{definition}\\label{defboost}\nLet $f:X \\to X$ be a self-map. If $\\F_k$ is a fixed point class of $f^k$, then $\\F_k$ will be contained in a fixed point class $\\F_{kn}$ of $(f^k)^n$, for all $n$. We say that $\\F_k$ \\textbf{boosts} to $\\F_{kn}$. On the other hand, we say that $\\F_{kn}$ \\textbf{reduces} to $\\F_k$.\n\\end{definition}\n\nThis idea of boosting a fixed point class also has a more algebraic interpretation. Fix a lifting $\\tilde{f}$ of $f$ to the universal covering $(\\tilde{X},p)$ of $X$. 
Then $\\tilde{f}$ induces a homomorphism $f_\\ast$ on the group of covering transformations by using the following relation:$$f_\\ast (\\alpha)\\circ \\tilde{f}=\\tilde{f}\\circ \\alpha.$$Let us denote the set of Reidemeister classes of $f$ by $\\mathcal{R}(f)$. Any element of this set will be denoted by the Reidemeister class $[\\alpha]$, where $\\alpha$ is the coordinate of a lifting $\\alpha\\circ \\tilde{f}$. Let $k,n$ be integers, such that $k|n$. We then define the following boosting function:\n$$\\gamma_{nk}: \\mathcal{R}(f^k)\\to \\mathcal{R}(f^n):[\\alpha]\\mapsto [\\alpha f_\\ast^k(\\alpha)f_\\ast^{2k}(\\alpha)\\dots f_\\ast^{n-k}(\\alpha)].$$The idea behind this boosting function, is the fact that $$(\\alpha\\circ \\tilde{f}^k)^\\frac{n}{k}=\\alpha f_\\ast^k(\\alpha)f_\\ast^{2k}(\\alpha)\\dots f_\\ast^{n-k}(\\alpha)\\circ \\tilde{f}^n.$$This equality immediately shows that if $\\gamma_{nk}([\\beta])=[\\alpha]$, the fixed point class $p\\Fix(\\beta\\circ \\tilde{f}^k)$ will be contained in the fixed point class $p\\Fix(\\alpha \\circ \\tilde{f}^n)$. Hence, our algebraic definition makes sense in retrospect to Definition \\ref{defboost}.\n\n\\medskip\n\nIn this paper, we will often make a slight abuse of notation. Whenever we use the expression $[\\alpha]_k$, we will simultaneously consider the Reidemeister class $[\\alpha]\\in \\mathcal{R}(f^k)$ and the fixed point class $p(\\Fix(\\alpha\\circ \\tilde{f}^k))$. We will also often switch between both of these interpretations, whenever necessary. Note that both interpretations are essentially the same due to the one-to-one correspondence between Reidemeister classes and fixed point classes. This also means that we make a distinction between empty fixed point classes that come from a different Reidemeister classes. In a certain sense, this approach coincides with the idea of \\textit{labeled fixed point classes}, in \\cite{jian83-1}.\n\n\\medskip \n\nNote that this description only depends on the homomorphism $f_*$. So, when $f$ and $g$ induce the same morphism on the group of covering transformations, then the structure of their periodic point classes will be the same. More specifically, this means that the whole description above (and everything that will follow) is homotopy-invariant. \n\n\\medskip \n\nWhen $k|m|n$, an easy computation shows that $\\gamma_{nm}\\gamma_{mk}=\\gamma_{nk}$. Also, $\\gamma_{nn}=\\Id_{\\mathcal{R}(f^n)}$.\n\n\\medskip\n\nActually, to give the precise definition of Nielsen periodic point theory, we need a little more than this definition in terms of classes, namely a definition in terms of orbits. Define the following map$$\\mathcal{R}_f:\\mathcal{R}(f^n)\\to \\mathcal{R}(f^n):[\\alpha]\\mapsto[f_\\ast(\\alpha)].$$One can easily see that this map is well-defined and that $\\mathcal{R}_f^n=\\Id_{\\mathcal{R}(f^n)}$. Furthermore, by using the commutativity property of the fixed point index on the maps $f$ and $f^{n-1}$, it is clear that this map preserves the index of the associated fixed point classes. By identifying $[\\alpha]$ with $[f_\\ast(\\alpha)]$ in $\\mathcal{R}(f^n)$, for all $\\alpha$, we find the quotient set $\\mathcal{OR}(f^n)$ of orbits of Reidemeister classes. Since the index is preserved in every orbit, it makes sense to talk about essential and inessential orbits. 
One can also notice that boosting functions make sense in terms of orbits, so we can talk about reducible and irreducible orbits, depending on whether they have a pre-image under a boosting function or not.\n\n\n\\begin{lemma} [\\cite{jm06-1}, 5.1.13]\\label{lemessirred}\nIf $\\mathcal{A}\\in \\mathcal{OR}(f^n)$ is essential and irreducible, then this orbit contains at least $n$ periodic points of period $n$.\n\\end{lemma}\n\nThis lemma gives us the idea for the following definition.\n\n\\begin{definition}\nWe define the \\textbf{prime Nielsen-Jiang periodic number} $NP_n(f)$ as $$n \\times \\textrm{(number of irreducible essential orbits in }\\mathcal{OR}(f^n)).$$\n\\end{definition}\n\nIf $P_n(f)$ is the set of periodic points of $f$ of pure period $n$, then Lemma \\ref{lemessirred} ensures us that $NP_n(f)$ is a homotopy-invariant lower bound of $\\# P_n(f)$.\n\n\\medskip\n\nWe would also like to find a similar lower bound for $\\# \\Fix(f^n)$. Pick an arbitrary $\\mathcal{A}\\in \\mathcal{OR}(f^n)$. We define the depth $d(\\mathcal{A})$ to be the least divisor $k$ of $n$, such that $\\mathcal{A}\\in \\textrm{Im}(\\gamma_{nk})$.\n\n\\begin{definition}\nLet $n$ be a fixed positive integer. A subset $\\mathcal{PS}\\subset \\bigcup_{k|n}\\mathcal{OR}(f^k)$ is called a \\textbf{preceding system} if every essential orbit $\\mathcal{A}$ in $\\bigcup_{k|n}\\mathcal{OR}(f^k)$ is preceded by an element of $\\mathcal{PS}$. Such a preceding system is called \\textbf{minimal} if the number $\\sum_{\\mathcal{A}\\in \\mathcal{PS}}d(\\mathcal{A})$ is minimal.\n\\end{definition}\n\n\\begin{definition}\nThe \\textbf{full Nielsen-Jiang periodic number} $NF_n(f)$ is defined as $\\sum_{\\mathcal{A}\\in \\mathcal{PS}}d(\\mathcal{A})$, where $\\mathcal{PS}$ is a minimal preceding system.\n\\end{definition}\n\n\\begin{theorem}[\\cite{jm06-1}, 5.1.18]\n$NF_n(f)$ is a homotopy-invariant lower bound for the number $\\# \\Fix(f^n)$.\n\\end{theorem}\nNote that every preceding system must contain every essential irreducible orbit of $\\mathcal{OR}(f^n)$. Since every of these orbits has a depth of $n$, we know that the following inequality holds:$$\\sum_{k|n}NP_k(f)\\leq NF_n(f).$$\n\nAn important definition that gives some structure to the boosting and reducing relations is the following.\n\n\\begin{definition}\nA self-map $f:X\\to X$ will be called \\textbf{essentially reducible} if, for all $n,k$, essential fixed point classes of $f^{kn}$ can only reduce to essential fixed point classes of $f^k$. A space $X$ is called essentially reducible if every self-map $f:X\\to X$ is essentially reducible.\n\\end{definition}\n\nIt can be shown that the fixed point classes for maps on infra-nilmanifolds always have this nice structure for their boosting and reducing relations.\n\n\\begin{theorem}[\\cite{lz07-1}]\\label{thmleezhao}\nInfra-nilmanifolds are essentially reducible.\n\\end{theorem}\n\nA nice consequence of being essentially reducible, is the following lemma.\n\n\\begin{lemma}[\\cite{jm06-1}, 5.1.22]\\label{lemNPNF}\nIf $f:X\\to X$ is essentially reducible, then it has a unique minimal preceding system, namely the set of all the essential irreducible orbits in $\\bigcup_{k|n}\\mathcal{OR}(f^k)$. 
As a consequence, the following equality holds:$$\\sum_{k|n}NP_k(f)=NF_n(f).$$\n\\end{lemma}\n\nOf course, by using the M\\\"{o}bius inversion formula, we can also write$$NP_n(f)=\\sum_{k|n}\\mu\\left(\\frac{n}{k}\\right)NF_k(f),$$where $\\mu$ denotes the M\\\"{o}bius function.\n\n\\medskip\n\nAs a generalization of being essentially reducible, we can define two other structures on the boosting and reducing relations.\n\n\\begin{definition}\nA map $f:X\\to X$ is called \\textbf{essentially reducible to the greatest common divisor} (GCD) if it is essentially reducible and if for every essential fixed point class $[\\alpha]_n$ that reduces to both $[\\beta]_k$ and $[\\gamma]_l$, there exists a fixed point class $[\\delta]_d$, with $d=\\gcd(k,l)$, such that $[\\alpha]_n$ reduces to $[\\delta]_d$. If this holds for every self-map on $X$, we will say that $X$ is essentially reducible to the GCD.\n\\end{definition}\n\nAn easy consequence of this definition is the following lemma.\n\n\\begin{lemma}[\\cite{jm06-1}, 5.1.26]\\label{lemessredgcd}\nIf $f:X\\to X$ is essentially reducible to the GCD, then every essential fixed point class $[\\alpha]_n$ in $\\mathcal{R}(f^n)$ is preceded by a unique irreducible essential fixed point class $[\\beta]_k$. Moreover, $d([\\alpha]_n)=k$.\n\\end{lemma}\n\nBy the length $l([\\alpha]_n)$, we mean the minimal number $l|n$, such that $\\mathcal{R}_f^l([\\alpha]_n)=[\\alpha]_n$. Alternatively, this is the number of fixed point classes in an orbit $\\mathcal{A}\\in \\mathcal{OR}(f^n)$.\n\n\\medskip\n\nIt is immediately clear that $d([\\alpha])\\geq l([\\alpha])$, because every class in an orbit that reduces to depth $d$ will be a fixed point of the map $\\mathcal{R}_f^d$.\n\n\\begin{definition}\nA map $f:X\\to X$ is called \\textbf{essentially toral} if it is essentially reducible and if the following two conditions are fulfilled:\\begin{enumerate}\n\\item For every essential fixed point class in $\\mathcal{R}(f^n)$, the length and depth coincide.\n\\item If $[\\alpha]_n$ is essential and $\\gamma_{nk}([\\beta]_k)=\\gamma_{nk}([\\gamma]_k)=[\\alpha]_n$, then $[\\beta]_k=[\\gamma]_k$.\n\\end{enumerate}\nIf this holds for every self-map on $X$, we will say that $X$ is essentially toral.\n\\end{definition}\n\nBecause lengths and depths coincide, the following lemma follows easily.\n\\begin{lemma}[\\cite{jm06-1}, 5.1.30]\\label{lemclasses}\nIf $f:X\\to X$ is essentially toral, then $NP_n(f)$ equals the number of irreducible essential fixed point classes in $\\mathcal{R}(f^n)$.\n\\end{lemma}\nThis lemma actually tells us that if we are working on an essentially toral space, we are free to replace the orbit theory by a theory in terms of classes. By combining Lemma \\ref{lemNPNF} and Lemma \\ref{lemclasses}, the following can also be easily deduced.\n\n\\begin{corollary}\\label{coresstor}\nIf $f:X\\to X$ is essentially toral, then $NF_n(f)$ equals the number of irreducible essential fixed point classes in $\\bigcup_{k|n}\\mathcal{R}(f^k)$.\n\\end{corollary}\n\nIn \\cite{hk97-1}, the following theorem is proved.\n\n\\begin{theorem}\nNilmanifolds are essentially reducible to the GCD and essentially toral.\n\\end{theorem}\n\nNote that they actually proved a more general version of this theorem, as they showed that the theorem above also holds for solvmanifolds.\n\n\\begin{definition}\nA map $f:X\\to X$ is called \\textbf{weakly Jiang} if $N(f)=0$ or $N(f)=R(f)$. 
This means that all fixed point classes are simultaneously essential or inessential.\n\\end{definition}\n\n\\begin{theorem}[\\cite{hk97-1}, Theorem 5.1]\\label{thmhk}\nSuppose that $X$ is essentially toral and essentially reducible to the GCD. If $f:X\\to X$ is a map such that $f^n$ is weakly Jiang and $N(f^n)\\neq 0$, then $$NF_n(f)=N(f^n)$$and the same formula holds for every divisor of $n$.\n\\end{theorem}\nThe idea behind this proof is very simple. Since every fixed point class at level $n$ is essential, by Lemma \\ref{lemessredgcd}, we know that every such class is preceded by a unique irreducible essential class. On the other hand, every irreducible essential fixed point class in $\\bigcup_{k|n}\\mathcal{R}(f^k)$ has to boost essentially, since there are simply no inessential fixed point classes to boost to at level $n$. Because of these observations, there exists a bijection between the essential fixed point classes at level $n$ and the irreducible essential fixed point classes in $\\bigcup_{k|n}\\mathcal{R}(f^k)$. Corollary~\\ref{coresstor} then proves the theorem.\n\n\\medskip\n\nIt is known that every map on a nilmanifold is weakly Jiang, due to the result of Anosov (\\cite{anos85-1}) or Fadell and Husseini (\\cite{fh86-1}). Consequently, Theorem \\ref{thmhk} holds for nilmanifolds. Unfortunately, not every map on an infra-nilmanifold is weakly Jiang. Even on the Klein bottle, the smallest example of an infra-nilmanifold which is not a nilmanifold, it is possible to find counterexamples.\n\n\\begin{example}\\label{Example not weakly Jiang}\nSuppose we have the following presentation of the Klein bottle group: $$<\\alpha,\\beta | \\alpha \\beta =\\beta^{-1} \\alpha>.$$Let $k\\neq 1$ be odd. Now, let $f_*:\\alpha\\mapsto \\alpha^k, \\textrm{ } \\beta\\mapsto \\beta^{-1}$ be the induced morphism for a map $f$ on the Klein bottle. One can check that this morphism indeed induces a map on the Klein bottle, for which it holds that $R(f)=\\infty$, while $N(f)\\neq 0$.\n\\end{example}\nAn algebraic argument for the fact that maps on nilmanifolds are weakly Jiang, while maps on infra-nilmanifolds are generally not, can be found by combining Theorem \\ref{LeeForm} with Theorem~\\ref{thmRInf} and Theorem \\ref{thmN=R}. When working on nilmanifolds, the formula in Theorem \\ref{LeeForm} reduces to a single determinant. By Theorem \\ref{thmRInf}, we know that this determinant will be equal to $0$ (and hence $N(f)=0$) if and only if $R(f)=\\infty$. By combining this fact with Theorem \\ref{thmN=R}, it follows that nilmanifolds are weakly Jiang. When working on infra-nilmanifolds, the sum generally consists of multiple determinants. Therefore, it is possible that some of these determinants are $0$ and some are not. If this is the case, a similar argument as before will show that the map is not weakly Jiang, as $R(f)=\\infty$, while $N(f)\\neq 0$.\n\\section{Structure on the periodic point classes of infra-nilmanifolds}\nIn this section, we will show that infra-nilmanifolds are both essentially reducible to the GCD and essentially toral. As a result of these structural properties, we will be able to show a theorem similar to Theorem \\ref{thmhk} for semi-hyperbolic maps on infra-nilmanifolds.\n\n\\medskip\n\nWe will prove both of these structural properties for affine maps on infra-nilmanifolds and because the theory described in the previous section is homotopy-invariant, this will be sufficient. As already mentioned before, affine maps are often much easier to deal with; a short illustrative special case is sketched below. 
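\n\nAs an illustration (a special case added here for clarity only, not taken from \\cite{fl13-1}), take $G=\\R^n$ and $\\Gamma=\\Z^n$, so that the infra-nilmanifold is the torus. An affine map $\\overline{(d,D)}$, with $D\\in \\Z^{n\\times n}$ so that it descends to the torus, has the liftings $x\\mapsto z+d+Dx$ with $z\\in \\Z^n$, and the fixed points of such a lifting solve the affine equation $$(I-D)x=z+d.$$If $\\det(I-D)\\neq 0$, every lifting has exactly one fixed point, so every fixed point class is a singleton and there are exactly $|\\det(I-D)|$ essential classes, in accordance with Theorem \\ref{LeeForm}; if $\\det(I-D)=0$, a lifting has either no fixed points or a whole affine subspace of them, producing empty or infinite classes. This is precisely the behaviour described in the proposition below.\n\n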
This fact is exemplified in the following proposition, which can be found in \\cite{fl13-1}.\n\n\\begin{proposition}\\label{propaff}\nIf $\\overline{(d,D)}:M\\to M$ is an affine map on an infra-nilmanifold, then every non-empty fixed point class is path-connected and\n\\begin{enumerate}\n\\item every essential fixed point class of $\\overline{(d,D)}$ consists of exactly one point.\n\\item every non-essential fixed point class of $\\overline{(d,D)}$ is empty or consists of infinitely many points.\n\\end{enumerate} \n\\end{proposition}\n\nNow we can prove the two main theorems of this section.\n\n\\begin{theorem}\\label{thmessredgcd}\nInfra-nilmanifolds are essentially reducible to the GCD.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{thmleezhao}, we already know that infra-nilmanifolds are essentially reducible.\n\\medskip\nIt is known from \\cite{ll06-1} that every almost-Bieberbach group $\\Gamma$ has a fully characteristic subgroup $\\Lambda$ of finite index, such that $\\Lambda\\subset G$. Therefore, every infra-nilmanifold of the form $\\Gamma\\backslash G$ is finitely covered by a nilmanifold $\\Lambda\\backslash G$, such that every continuous map $f:\\Gamma\\backslash G\\to \\Gamma\\backslash G$ can be lifted to a map $\\overline{f}:\\Lambda\\backslash G\\to \\Lambda\\backslash G$. All in all, we have the following commuting diagram, where $\\beta_n$ is a covering transformation and $\\overline{\\beta}_n$ is the natural projection of $\\beta_n$ into $\\Gamma\/ \\Lambda$.\n\\begin{displaymath}\n \\xymatrix{ G \\ar[dd]_p \\ar[dr]_{p'} \\ar[rrrrr]^{\\beta_n \\tilde{f}^n}& & & & & G \\ar[dd]^{p} \\ar[dl]^{p'} \\\\\n & \\Lambda\\backslash G \\ar[rrr]^{\\overline{\\beta}_n\\overline{f}^n} \\ar[dl]_{\\overline{p}} & & & \\Lambda\\backslash G \\ar[dr]^{\\overline{p}} & \\\\\n \\Gamma\\backslash G \\ar[rrrrr]^{f^n}& & & & & \\Gamma\\backslash G }.\n\\end{displaymath} \n\nSuppose that $g$ is the affine map on $\\Gamma\\backslash G$ that is induced by an affine homotopy lift $\\tilde{g}$ of $f$. Let $p(\\Fix(\\beta_n\\tilde{g}^n))$ be an essential fixed point class on level $n$, such that, for $r,s|n$, this fixed point class reduces to $p(\\Fix(\\beta_r\\tilde{g}^r))$ and $p(\\Fix(\\beta_s\\tilde{g}^s))$.\n\n\\medskip\n\nBecause of Proposition \\ref{propaff}, we know that there exists $x\\in \\Fix(g^n)$, such that all these fixed point classes are equal to the set $\\{x\\}$. The fixed point index is a local property and a covering map is a local homeomorphism, hence, the fixed point class $p'(\\Fix(\\beta_n\\tilde{g}^n))$ is also essential. By using Proposition \\ref{propaff} again, we know this fixed point class will consist of one point, namely a $\\overline{x}\\in \\overline{p}^{-1}(x)$. 
By a similar reasoning, there will exist $\\gamma_r,\\gamma_s \\in \\Gamma$ and accordingly, $\\overline{\\gamma_r},\\overline{\\gamma_s} \\in \\Gamma \/\\Lambda$ such that $$p'(\\Fix(\\beta_r\\tilde{g}^r))=\\{\\overline{\\gamma_r}\\cdot \\overline{ x}\\}\\textrm{ and }p'(\\Fix(\\beta_s\\tilde{g}^s))=\\{\\overline{\\gamma_s}\\cdot \\overline{ x}\\}.$$An easy calculation then shows that $$p'(\\Fix(\\gamma_r^{-1}\\beta_r g_\\ast^r(\\gamma_r)\\tilde{g}^r))=\\{\\overline{x}\\}\\textrm{ and }p'(\\Fix(\\gamma_s^{-1}\\beta_sg_\\ast^s(\\gamma_s)\\tilde{g}^s))=\\{\\overline{x}\\}.$$This actually means that if we choose good representatives in the Reidemeister classes of $[\\beta_r]_r$ and $[\\beta_s]_s$, $p'(\\Fix(\\beta_n\\tilde{g}^n))$ will reduce to both $p'(\\Fix(\\beta_r\\tilde{g}^r))$ and $p'(\\Fix(\\beta_s\\tilde{g}^s))$ on our nilmanifold $\\Lambda\\backslash G$. Since nilmanifolds are known to be essentially reducible to the GCD, there exists a $\\beta_d$, with $d=\\gcd(r,s)$, such that $p'(\\Fix(\\beta_r\\tilde{g}^r))$ and $p'(\\Fix(\\beta_s\\tilde{g}^s))$ both reduce to $p'(\\Fix(\\beta_d\\tilde{g}^d))$. By applying $\\overline{p}$ to this fixed point class, the statement is proved.\n\\end{proof}\n\n\\begin{theorem}\\label{thmesstor}\nInfra-nilmanifolds are essentially toral.\n\\end{theorem}\n\\begin{proof}\nAgain, we already know that infra-nilmanifolds are essentially reducible and again, by homotopy-invariance, it suffices to prove this theorem for affine maps $g$.\n\n\\medskip\n\nLet $[\\alpha]_n$ be an essential fixed point class of $g^n$. Since we already know that $d([\\alpha]_n)\\geq l([\\alpha]_n)$, we only need to prove that the strict inequality is impossible. So, suppose that $d=d([\\alpha]_n)> l([\\alpha]_n)=l$. Because of Proposition \\ref{propaff}, we know that there exists $x\\in \\Fix(g^n)$ such that $\\{x\\}$ is the fixed point class associated to $[\\alpha]_n$. Furthermore, $\\{g(x)\\}$ will be the fixed point class associated to $\\mathcal{R}_g([\\alpha]_n)$. By definition and because there is only one fixed point in each essential fixed point class, $g^l(x)=x$. Therefore, $[\\alpha]_n$ reduces to a fixed point class on level $l$, which is a contradiction to the fact that $d>l$. This proves the first condition.\n\n\\medskip\n\nIf $[\\beta]_k$ and $[\\gamma]_k$ are both boosted to $[\\alpha]_n$, then we know that they are both essential fixed point classes. Hence, they both have the set $\\{x\\}$ as associated fixed point class, which means that $[\\beta]_k=[\\gamma]_k$. This proves the second condition of essential torality.\n\\end{proof}\n\nNow we will use these newly obtained structural properties for infra-nilmanifolds to establish a few results concerning Nielsen periodic points. We start with the following definition.\n\n\\begin{definition}\nWe say that an essential fixed point $[\\alpha]_k$ is \\textbf{(in)essentially boosted to level $n$}, if $[\\alpha]_k$ is boosted to an (in)essential fixed point class $[\\beta]_{n}$. \n\\end{definition}\n\nLet us denote the set of all irreducible fixed point classes which are inessentially boosted to level $n$ for a continuous self-map $f$ by $IIB_n(f)$. 
Note that this is a subset of $\\bigcup_{k|n}\\mathcal{R}(f^k)$, since this set contains all fixed point classes on all levels that will boost to level $n$.\n\n\\begin{proposition}\\label{propIIB}\nWhenever a map $f$ is essentially reducible to the GCD and essentially toral, we have that\n$$NF_n(f)=N(f^n)+\\#IIB_n(f).$$\n\\end{proposition}\n\\begin{proof}\nWhen a map $f$ is essentially toral, we know by Corollary \\ref{coresstor} that $NF_n(f)$ equals the number of irreducible essential classes in $\\bigcup_{k|n}\\mathcal{R}(f^k)$. Now, pick an arbitrary irreducible essential class. We can distinguish two disjoint cases. \n\n\\medskip\n\nOn the one hand, suppose this class boosts essentially to level $n$. As $f$ is essentially reducible to the GCD, we can apply Lemma \\ref{lemessredgcd} and we know that every essential fixed point class reduces to a unique irreducible essential fixed point class. This means that there is a bijection between the irreducible essential classes that are essentially boosted to level $n$ and the essential fixed point classes of $\\mathcal{R}(f^n)$.\n\n\\medskip\n\nIf, on the other hand, our class boosts inessentially to level $n$, it belongs to $IIB_n(f)$. Since both cases are disjoint, the equality follows.\n\\end{proof}\n\nIt is quite easy to see that this proposition is a generalization of Theorem \\ref{thmhk}. In fact the proof is a slightly adapted version where we take inessential boosting into account.\n\n\\begin{theorem}\\label{thmNF=Nf)}\nWhen $f$ is a semi-hyperbolic map on an infra-nilmanifold, then for all $n>0$ $$NF_n(f)=N(f^n).$$\n\\end{theorem}\n\\begin{proof}\nSuppose that $(d,D)$ is an affine homotopy lift of $f$. By combining Theorem \\ref{thmRInf} and Theorem \\ref{thmN=R} we know that every fixed point class on level $n$ is essential if and only if for all $A\\in F$ (where $F$ is the holonomy group of our infra-nilmanifold),$$\\det(I-A_\\ast D_\\ast^n)\\neq 0.$$By Lemma \\ref{Lemma Bram}, we know that there exists $B \\in F$ and an integer $l$, such that $$(B_\\ast D_\\ast^n)^l=D_\\ast^{ln} \\textrm{ and } \\det(I-A_\\ast D_\\ast^n)=\\det(I-B_\\ast D_\\ast^n).$$Note that $\\det(I-B_\\ast D_\\ast^n)=0$ implies that $B_\\ast D_\\ast^n$ has an eigenvalue $1$, but this would mean that $D_\\ast^{ln}$ had an eigenvalue $1$, which is in contradiction with the fact that $f$ is semi-hyperbolic. Therefore, we know that every fixed point class on level $n$ is essential, which implies that $IIB_n(f)$ is the empty set. The theorem then follows from Proposition \\ref{propIIB}.\n\\end{proof}\n\nNote that the proof of this theorem actually also proves the following proposition, since we proved that every fixed point class on every level is essential.\n\n\\begin{proposition}\nWhen $f$ is a semi-hyperbolic map on an infra-nilmanifold, then for all $n>0$, $f^n$ is a weakly Jiang map.\n\\end{proposition}\n\nWith this proposition in mind, one can easily see that Theorem \\ref{thmNF=Nf)} is a special case of Theorem \\ref{thmhk}.\n\n\\medskip\n\nLater on, in the last section, we will show, under mild conditions, that semi-hyperbolic maps are the only maps for which a non-trivial equality $NF_n(f)=N(f^n)$ holds.\n\n\\medskip\n\nTheorem \\ref{thmNF=Nf)} actually has a nice corollary in the area of dynamical zeta functions. By $N_f(z)$, we mean the Nielsen zeta function, as defined in \\cite{fels00-2} ,\\cite{fels88-1} or \\cite{fp85-1}. 
In \\cite{fels00-2}, the following definition of the \\textbf{minimal dynamical zeta function} can be found: $$NF_f(z)=\\exp\\left(\\sum_{k=1}^\\infty \\frac{NF_k(f)z^k}{k}\\right).$$\n\n\nWe now have the following corollary.\n\n\\begin{corollary}\nLet $f$ be a semi-hyperbolic map on an infra-nilmanifold, then $N_f(z)=~NF_f(z)$.\n\\end{corollary}\n\nBy using the main result of \\cite{dd13-2}, which states that Nielsen zeta functions are rational for self-maps on infra-nilmanifolds, we can also conclude the following.\n\n\\begin{corollary}\nLet $f$ be a semi-hyperbolic map on an infra-nilmanifold, then $NF_f(z)$ is a rational function.\n\\end{corollary}\n\n\\section{A method for computing $NF_n(f)$}\nIn theory, we are now capable to compute $NF_n(f)$, due to Proposition \\ref{propIIB}. By using the standard formula for Nielsen numbers for maps on infra-nilmanifolds (Theorem \\ref{LeeForm}), the computation of $N(f^n)$ becomes very simple and therefore, the only thing left to check is how many fixed point classes lie in $IIB_n(f)$. \\medskip\n\nIn some cases, for example for semi-hyperbolic maps, the computation of $\\# IIB_n(f)$ becomes trivial. However, in a more general setting, this number can be a very tedious thing to compute. In this section, we will try to develop a method to make this computation a bit easier.\n\n\\subsection{$\\sim_f$-equivalence classes}\nWe start this section with the following definition.\n\\begin{definition}\nLet $f: \\Gamma\\backslash G \\to \\Gamma\\backslash G$ be a continuous map, such that $F$ is the holonomy group of $\\Gamma$. We will say that $A, B \\in F$ are \\textbf{$f$-conjugated}, if there exist $a,b\\in G$ and $\\gamma\\in \\Gamma$ such that $(a,A)$ and $(b, B)$ are elements of $\\Gamma$ and $$\\gamma\\circ (a,A) \\circ f_*(\\gamma^{-1})=(b,B).$$We will write $A\\sim_f B.$\n\\end{definition}\n\nAn alternative for this definition is given in the following lemma. In general, the definition will be more useful when one quickly wants to find elements that are $f$-conjugated. The lemma below is often more useful when it comes to finding properties of the $\\sim_f$-relation. \n\\begin{lemma}\\label{lemma alternatief definitie}\nLet $f: \\Gamma\\backslash G \\to \\Gamma\\backslash G$ be a continuous map, such that $F$ is the holonomy group of $\\Gamma$. Then $A\\sim_f B$ if and only if for all $(a,A)\\in \\Gamma$, there exist $(b,B), \\gamma \\in \\Gamma$, such that$$\\gamma\\circ (a,A) \\circ f_*(\\gamma^{-1})=(b,B).$$\n\\end{lemma} \n\\begin{proof}\nOne direction is obvious. For the other direction, pick an arbitrary $(a,A)\\in \\Gamma$ and suppose that $A\\sim_f B$. This means that there exist $a_0, b_0 \\in G$ and $\\gamma_0 \\in \\Gamma$, such that $(a_0,A),(b_0,B) \\in \\Gamma$ and $$\\gamma_0\\circ (a_0,A) \\circ f_*(\\gamma_0^{-1})=(b_0,B).$$Then $$\\gamma_0\\circ (a,A) \\circ f_*(\\gamma_0^{-1})=\\left(\\gamma_0\\circ (a_0,A) \\circ f_*(\\gamma_0^{-1})\\right) \\circ \\left(f_*(\\gamma_0) \\circ (A^{-1}(a_0^{-1}a), \\Id) \\circ f_*(\\gamma_0^{-1})\\right).$$As $\\Gamma \\cap G$ is a normal divisor of $\\Gamma$, there exists a $(c,\\Id)\\in \\Gamma\\cap G$, such that $$\\gamma_0\\circ (a,A) \\circ f_*(\\gamma_0^{-1})=(b_0,B)\\circ (c,\\Id).$$\n\\end{proof}\n\nA simple consequence of the previous lemma is the following.\n\\begin{corollary}\n$\\sim_f$ is an equivalence relation.\n\\end{corollary}\n\\begin{proof}\nThe fact that this relation is reflexive and symmetric is easy to see, so the only thing left to prove is the transitivity. 
Suppose that $A\\sim_f B$ and $B\\sim _f C$. By definition, there exist $a, b \\in G$ and $\\gamma_1 \\in \\Gamma$, such that $(a,A), (b,B)\\in \\Gamma$ and $$\\gamma_1\\circ (a,A) \\circ f_*(\\gamma_1^{-1})=(b,B).$$By Lemma \\ref{lemma alternatief definitie}, we know there exist $\\gamma_2, (c,C) \\in \\Gamma$, such that $$\\gamma_2\\circ (b,B) \\circ f_*(\\gamma_2^{-1})=(c,C).$$By combining both equations, we see$$(\\gamma_2 \\circ \\gamma_1)\\circ (a,A)\\circ f_*((\\gamma_2\\circ \\gamma_1)^{-1})=(c,C),$$which means that $A\\sim_f C$.\n\\end{proof}\n\nThe fact that $\\sim_f$ is an equivalence relation implies that we can partition $F$ into \\textbf{$\\sim_f$-equivalence classes}. There is an even more convenient way to look at the $\\sim_f$-equivalence classes for which we only need to work in the holonomy group $F$ of our infra-nilmanifold. This will be the ideal tool to compute these classes in a more effective way.\n\n\\begin{definition}\nBy $f_\\#(\\Id)$, we mean the set of all $A\\in F$, such that there exist $(g,\\Id), (a,A)\\in \\Gamma$, for which $f_*(g,\\Id)=(a,A)$. Analogously, $f_\\#(C)$ is the set of all $A\\in F$, such that there exist $(c,C), (a,A)\\in \\Gamma$, for which $f_*(c,C)=(a,A)$.\n\\end{definition}\n\nNote that it is known that $\\Gamma\\cap G$ is finitely generated. With this in mind, we can deduce the following lemma. \n\n\\begin{lemma}\\label{lemma f-equivalentieklassen}\nPick an arbitrary $(c,C)\\in \\Gamma$. Again, $p:\\Aff(G)=G\\semi \\Aut(G) \\to \\Aut(G)$ denotes the natural projection onto the second factor of the semi-direct product. Suppose that $(g_i,\\Id)_{i=1}^n$ is a set of generators for $\\Gamma\\cap G$. Then we can describe $f_\\#(\\Id)$ and $f_\\#(C)$ as follows:\n\\begin{itemize}\n\\item $f_\\#(\\Id)=\\grp\\{p(f_*(g_i,\\Id))\\}.$\n\\item $f_\\#(C)=p(f_*(c,C))f_\\#(\\Id)=f_\\#(\\Id)p(f_*(c,C)).$\n\\end{itemize}\n\n\\end{lemma}\n\\begin{proof}\nIt is clear that $f_\\#(\\Id)$ contains all elements $p(f_*(g_i,\\Id))$ and it is also clear that $f_*(\\Id)$ is precisely the set $p(f_*(\\Gamma\\cap G))=p(f_*(\\grp \\{ (g_i, \\Id)\\ \\| \\ i=1\\dots n\\}))$. As $p\\circ f_*$ is a morphism, this will be equal to $\\grp\\{p(f_*( g_i, \\Id)) \\ \\| \\ i=1\\dots n\\}$, which proves the first statement.\n\n\\medskip\n\nTake an arbitrary element of the form $(c_1, C)\\in \\Gamma$. It is clear that $p(f_*(c_1,C))\\in f_\\#(C)$. Now, an easy computation shows that $$p(f_*(c_1,C))=p(f_*(c,C))p(f_*(c,C)^{-1})p(f_*(c_1,C))=p(f_*(c,C))p(f_*(C^{-1}(c^{-1}c_1),\\Id)).$$As $p(f_*(C^{-1}(c^{-1}c_1),\\Id))\\in f_\\#(\\Id)$, the first equality of second statement is proved. The second equality can be proved in a similar way, by multiplying with $p(f_*(c,C)^{-1})p(f_*(c,C))$ on the right.\n\\end{proof}\n\nAs a side remark, note that an easy consequence of this lemma is the fact that if $p\\circ f_*:\\Gamma\\to F$ is a surjective morphism, then $f_\\#(\\Id)$ will be a normal divisor of $F$.\n\n\\medskip \n\nBy using this lemma, we can derive an easier way of determining $\\sim_f$-equivalence classes.\n\n\\begin{proposition}\nSuppose $A,B$ are elements in $F$. Then, $A\\sim_f B$ if and only if there exists a $C\\in F$, such that $B\\in CAf_\\#(C)^{-1}$. Here $f_\\#(C)^{-1}$ denotes the set of all inverses of elements in $f_\\#(C)$, or equivalently $f_\\#(C^{-1})$.\n\\end{proposition}\n\\begin{proof}\nOne direction is obvious. For the other direction, suppose that there exists a $C\\in F$, such that $B\\in CAf_\\#(C)^{-1}$. 
Then, there exist $a,c\\in G$, such that $(c,C), (a,A)\\in \\Gamma$. By Lemma~\\ref{lemma f-equivalentieklassen}, any element in $f_\\#(C)^{-1}$ will come from an element of the form $f_*(c,C)^{-1}f_*(g,\\Id)$, with $(g,\\Id)\\in \\Gamma$. As a result, we find that there exists a $b\\in G$, such that $$(c,C)(a,A)f_*(c,C)^{-1}f_*(g,\\Id)=(b,B).$$Note that $(b,B)$ will also be an element of $\\Gamma$. By multiplying both sides on the left with $(g^{-1},\\Id)$ (which is also in $\\Gamma$), we get$$ (g^{-1},\\Id)(c,C)(a,A)f_*((g^{-1},\\Id)(c,C))^{-1}=(g^{-1},\\Id)(b,B)=(g^{-1}b,B).$$This proves that $A\\sim_f B$.\n\n\\medskip\n\nIn order to see that $f_\\#(C^{-1})=f_\\#(C)^{-1}$, note that Lemma \\ref{lemma f-equivalentieklassen} tells us that $f_\\#(\\Id)$ is a group and that $f_\\#(C)=p(f_*(c,C))f_\\#(\\Id)=f_\\#(\\Id)p(f_*(c,C))$, from which this fact follows immediately, as $$f_\\#(C)^{-1}=f_\\#(\\Id)^{-1}p(f_*(c,C))^{-1}=f_\\#(\\Id)p(f_*(c,C)^{-1})=f_\\#(C^{-1}).$$\n\\end{proof}\n\n\\begin{corollary}\nThe $\\sim_f$-equivalence class of $A$ equals the set $$\\bigcup_{C\\in F} CAf_\\#(C)^{-1}.$$\n\\end{corollary}\n\n\\begin{corollary}\\label{corollary D invertible}\nLet $(d,D)$ be an affine homotopy lift of a continuous map $f$ on an infra-nilmanifold $\\Gamma\\backslash G$. When $D$ is invertible in $\\Endo(G)$, for any $C\\in F$, $f_\\#(C)$ will be a singleton.\n\\end{corollary}\n\\begin{proof}\nTake an arbitrary element $(g,\\Id)$ of $\\Gamma\\cap G$. Then $f_*(g,\\Id)\\circ (d,D)=(d,D)\\circ (g,\\Id)$ and hence also, $p(f_*(g,\\Id))\\circ D=D$. As $D$ is invertible, $p(f_*(g,\\Id))=\\Id$. As $(g,\\Id)$ was chosen arbitrarily, this means $f_\\#(\\Id)=\\{\\Id\\}$. By Lemma \\ref{lemma f-equivalentieklassen}, $f_\\#(C)$ is also a singleton. \n\\end{proof}\n\n\\subsection{Properties of $\\sim_f$-equivalence classes}\nA first sign that shows that elements in the same $\\sim_f$-equivalence class are strongly connected, can be found in the following lemma.\n\n\\begin{lemma}\\label{lemdet}\nWhen $A\\sim_f B$ and $(d,D)$ is an affine homotopy lift of $f$, then $$\\det(I-A_\\ast D_\\ast)=\\det(I-B_\\ast D_\\ast).$$\n\\end{lemma}\n\\begin{proof}\nAs $A\\sim_f B$, there exists a $(c,C) \\in \\Gamma$, such that $$(c,C)\\circ(a,A)\\circ f_*(c, C)^{-1}=(b,B).$$Of course, we can compose both sides with $(d,D)$. As a result, we get the following equality$$(c,C)\\circ(a,A)\\circ (d,D)\\circ (c,C)^{-1}=(b,B)\\circ(d,D).$$Therefore, we have $CADC^{-1}=BD$ in $\\Aut(G)$. From this, the statement follows easily.\n\\end{proof}\n\nThe following theorem is in a certain sense the heart of our computational method. It splits the formula from Theorem \\ref{LeeForm} into several parts, one for each $\\sim_f$-equivalence class. The proof is heavily influenced by the proofs in \\cite{kll05-1} and \\cite{ll06-1}.\n\n\\begin{theorem}\\label{thmNAf}\nLet $f:\\Gamma\\backslash G \\to \\Gamma\\backslash G$ be a continuous map on an infra-nilmanifold, with affine homotopy lift $(d,D)$ and holonomy group $F$. Let $(G,p)$ be a universal covering of $\\Gamma\\backslash G$, such that $\\tilde{f}$ is a reference lifting of $f$. If we fix $A \\in F$, then, the number of essential fixed point classes that can be written as $p(\\Fix((a,A)\\circ \\tilde{f}))$, which we will denote by $N_A(f)$, equals $$\\frac{1}{\\#F}\\sum_{B\\sim_f A}|\\det(I-B_\\ast D_\\ast)|.$$\n\\end{theorem}\n\\begin{proof}\nFrom now on, $\\Lambda$ will be the fully characteristic subgroup of $\\Gamma$ described in \\cite{ll06-1}. 
Similarly to Theorem \\ref{thmessredgcd}, we have the following commuting diagram:\n\n\\begin{displaymath}\n \\xymatrix{ G \\ar[dd]_p \\ar[dr]_{p'} \\ar[rrrrr]^{(a,A) \\circ \\tilde{f}}& & & & & G \\ar[dd]^{p} \\ar[dl]^{p'} \\\\\n & \\Lambda\\backslash G \\ar[rrr]^{\\overline{(a,A)} \\circ \\overline{f}} \\ar[dl]_{\\overline{p}} & & & \\Lambda\\backslash G \\ar[dr]^{\\overline{p}} & \\\\\n \\Gamma\\backslash G \\ar[rrrrr]^{f}& & & & & \\Gamma\\backslash G }.\n\\end{displaymath} \n\nFor $\\alpha \\in \\Gamma$, we will denote the Reidemeister class of $\\alpha$ in $\\Gamma$ by $[\\alpha]_\\Gamma$. Now, define an equivalence relation $\\sim{_\\Lambda}$ on $\\Gamma$ as follows:$$\\alpha\\sim{_\\Lambda}\\beta \\text{ iff } \\exists \\lambda\\in \\Lambda: \\beta=\\lambda\\circ \\alpha\\circ f_*(\\lambda)^{-1}.$$In a similar way as before, $[\\alpha]_\\Lambda$ will denote the equivalence class with respect to $\\sim_\\Lambda$ that contains $\\alpha$. It is straightforward to prove that $\\beta\\in [\\alpha]_\\Lambda$ implies that $p'(\\Fix(\\beta\\circ \\tilde{f}))=p'(\\Fix(\\alpha\\circ \\tilde{f}))$. In a similar way, one can prove that $\\beta\\not \\in [\\alpha]_\\Lambda$ implies that $p'(\\Fix(\\beta\\circ \\tilde{f}))\\cap p'(\\Fix(\\alpha\\circ \\tilde{f}))=\\emptyset$. Note that this can also mean that $p'(\\Fix(\\beta\\circ \\tilde{f}))$ and $ p'(\\Fix(\\alpha\\circ \\tilde{f}))$ are fixed point classes for different maps on the nilmanifold $\\Lambda\\backslash G$. Now, by labeling the possibly empty fixed point sets, we can say that there is a one-to-one relation between the sets $[\\alpha]_\\Lambda$ and the fixed point classes $p'(\\Fix(\\alpha\\circ \\tilde{f}))$ of liftings of $f$ to $\\Lambda\\backslash G$. Let us denote the set of $\\sim_\\Lambda$-equivalence classes by $\\mathcal{R}_\\Lambda(f)$ and the set of Reidemeister classes of $f$ by $\\mathcal{R}(f)$. As $\\Lambda\\leq \\Gamma$, we know that the map $$\\Psi:\\mathcal{R}_\\Lambda(f)\\to \\mathcal{R}(f): [\\alpha]_\\Lambda\\mapsto [\\alpha]_\\Gamma$$is a well-defined function, which is clearly surjective. We also know that $p(\\Fix(\\alpha\\circ \\tilde{f}))$ is essential if and only if $p'(\\Fix(\\alpha\\circ \\tilde{f}))$ is essential, because the fixed point index is a local property and $\\overline{p}$ is a local homeomorphism. Hence, when $[\\alpha]_\\Gamma$ is (in)essential, then every element in $\\Psi^{-1}([\\alpha]_\\Gamma)$ corresponds to an (in)essential fixed point class of a lift of $f$ to $\\Lambda\\backslash G$. Because of this property and the fact that $\\Psi$ is surjective and well-defined, we know that \\begin{equation}\\label{ineq}\nN_A(f)\\leq \\sum_{\\begin{subarray}{c}\n\\overline{(b,B)}\\in \\Gamma \/ \\Lambda \\\\ B\\sim_f A\n\\end{subarray}}N(\\overline{(b,B)}\\circ \\overline{f}).\n\\end{equation}\n\n\\medskip\n\n\nSuppose that $\\mathbb{F}$ is a fixed point class of the desired form. Then $$\\mathbb{F}=p(\\Fix((b,B)\\circ \\tilde{f})=[(b,B)]_\\Gamma,$$with $B\\sim_f A$.\nWhen $\\F$ is an inessential fixed point class, $[(b,B)]_\\Lambda$ corresponds to an inessential fixed point class of the map $\\overline{(b,B)}\\circ \\overline{f}$ on the nilmanifold $\\Lambda\\backslash G$. Due to the main result from \\cite{anos85-1} or \\cite{fh86-1}, we now know that every fixed point class of $\\overline{(b,B)}\\circ \\overline{f}$ is inessential, so that $N(\\overline{(b,B)}\\circ \\overline{f})=0$. This also means that $\\det(I-B_\\ast D_\\ast) =0$. 
By Lemma \\ref{lemdet} we know that $\\det(I-C_\\ast D_\\ast)=0$ for all $C\\sim_f B$, or equivalently, for all $C\\sim_f A$. This also means that $N(\\overline{(c,C)}\\circ \\overline{f})=0$. By definition, $N_A(f)$ is a non-negative integer and hence, it follows by inequality (\\ref{ineq}) that $$N_A(f)=0=\\frac{1}{\\#F}\\sum_{B\\sim_f A}|\\det(I-B_\\ast D_\\ast)|.$$\n\nNow, suppose that $\\F=[(b,B)]_\\Gamma$ is an essential fixed point class. The fact that$$N_A(f)\\leq \\sum_{\\begin{subarray}{c}\n\\overline{(b,B)}\\in \\Gamma \/ \\Lambda \\\\ B\\sim_f A\n\\end{subarray}}N(\\overline{(b,B)}\\circ \\overline{f})$$is not necessarily an equality comes from the fact that $\\Psi$ is not injective. This is due to situations where$$[(b,B)]_\\Gamma=[(c,C)]_\\Gamma \\text{, while } [(b,B)]_\\Lambda\\neq [(c,C)]_\\Lambda.$$So, in order to find $N_A(f)$, we need to find the number of elements in $\\mathcal{R}_\\Lambda(f)$ which are mapped to the same element of $\\mathcal{R}(f)$ by $\\Psi$. First, we will show that this number has $\\left|\\Gamma\/ \\Lambda\\right|$ as an upper bound. Suppose that $\\overline{\\gamma}_1=\\overline{\\gamma}_2 \\in \\Gamma\/ \\Lambda$, then $\\gamma_2=\\lambda\\circ \\gamma_1$, for $\\lambda\\in \\Lambda$. If $$(c_1,C_1)=\\gamma_1 \\circ (b,B) \\circ f_*(\\gamma_1^{-1}) \\textrm{ and } (c_2,C_2)=\\gamma_2 \\circ (b,B) \\circ f_*(\\gamma_2^{-1}),$$then an easy computation shows that $$(c_2,C_2)=\\lambda\\circ(c_1,C_1)\\circ f_*(\\lambda^{-1}),$$which means that $[(c_1,C_1)]_\\Lambda= [(c_2,C_2)]_\\Lambda$.\n\n\\medskip\n\nNow we will show that this upper bound is always attained by showing that $[(c_1,C_1)]_\\Lambda= [(c_2,C_2)]_\\Lambda$ implies that $\\overline{\\gamma}_1=\\overline{\\gamma}_2$ in $\\Gamma \/ \\Lambda$. Let $(d,D)$ be an affine homotopy lift of $f$. Suppose there exist a $\\lambda \\in \\Lambda$, such that $$(c_1,C_1)=\\gamma_1\\circ (b,B)\\circ f_*(\\gamma_1^{-1})=\\lambda\\circ (\\gamma_2\\circ (b,B)\\circ f_*(\\gamma_2^{-1}))\\circ f_*(\\lambda^{-1})=\\lambda\\circ (c_2,C_2)\\circ f_*(\\lambda^{-1}).$$As an easy consequence, $$(b,B)=(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)\\circ (b,B)\\circ f_*(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}.$$Note that $p(\\Fix((b,B)\\circ \\tilde{f}))$ is an essential fixed point class and therefore $p(\\Fix((b,B)\\circ (d,D)))$ will also be an essential fixed point class. Hence, there exists an $x\\in G$ such that $(b,B)\\circ (d,D)(x)=x$. This also means that $$(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)\\circ (b,B)\\circ f_*(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}\\circ (d,D) (x)=x,$$which implies that $(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}\\cdot x$ is also in $p(\\Fix((b,B)\\circ (d,D)))$. By Proposition \\ref{propaff}, we know that such a fixed point class is a singleton and hence, $(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}\\cdot x=x$. By the free action of $\\Gamma$ on $G$, this implies that $\\lambda\\circ \\gamma_2=\\gamma_1$.\n\n\\medskip\n\nNow, we know that $\\Psi$ maps $\\left|\\Gamma\/ \\Lambda\\right|$ different elements of $\\mathcal{R}_\\Lambda(f)$ to the element $[(b,B)]_\\Gamma$. As $\\F$ was chosen arbitrarily, we know that this holds for every essential fixed point class $[(c,C)]_\\Gamma$ in $\\mathcal{R}(f)$, for which $C\\sim_f A$. 
So, this means that $$N_A(f)=\\frac{1}{[\\Gamma:\\Lambda]}\\sum_{\\begin{subarray}{c}\n\\overline{(b,B)}\\in \\Gamma \/ \\Lambda \\\\ B\\sim_f A\n\\end{subarray}}N(\\overline{(b,B)}\\circ \\overline{f}).$$In a similar way as in the proof of Theorem 3.4 in \\cite{ll06-1}, we can now derive that$$N_A(f)=\\frac{1}{\\#F}\\sum_{B\\sim_f A}|\\det(I-B_\\ast D_\\ast)|.$$\n\\end{proof}\n\n\n\\begin{remark}\\label{remark essential of inessential}\nDuring the proof of this theorem, we actually also proved that the fixed point class $p(\\Fix((a,A)\\circ \\tilde{f})) $ is essential if and only if $\\det(I-A_\\ast D_\\ast)\\neq 0$. This is due to the fact that $p(\\Fix((a,A)\\circ \\tilde{f}))$ can be lifted to a fixed point class $p'(\\Fix((a,A)\\circ \\tilde{f}))$ with the same index. As this is a fixed point class for a map on a nilmanifold, we can use the result from \\cite{anos85-1} or \\cite{fh86-1}, which tells us that $p'(\\Fix((a,A)\\circ \\tilde{f})) $ is essential if and only if $\\det(I-A_\\ast D_\\ast)\\neq 0$.\n\\end{remark}\n\nIt might be noteworthy to mention that a fixed point class $p(\\Fix((b,B)\\circ \\tilde{f}))$ can be written as $p(\\Fix((a,A)\\circ \\tilde{f}))$ if and only if $A\\sim_f B$. So, in a certain sense, it is justified to say that $N_A(f)$ is the number of essential fixed point classes \\textbf{above the $\\sim_f$-equivalence class of $A$}. We will denote this equivalence class by $[A]$. Also note that every fixed point class above $A$ is simultaneously essential or inessential. Hence, it makes sense to talk about the (in)essential $\\sim_f$-equivalence class $[A]$. We can also generalize these notions when considering $\\sim_{f^{k}}$-equivalence classes of $A$. For these classes, we will use the notation $[A]_k$.\n\n\\medskip\n\nSome easy corollaries of Theorem \\ref{thmNAf} are the following.\n\n\\begin{corollary}\n$$N_A(f)=\\frac{\\#\\{B\\in F\\|B\\sim_{f} A\\}}{\\# F}\\cdot |\\det(I-A_\\ast D_\\ast)|.$$\n\\end{corollary}\n\\begin{proof}\nThis follows easily by combining Lemma~\\ref{lemdet} and Theorem~\\ref{thmNAf}.\n\\end{proof}\n\n\\begin{corollary}\nIf all elements of $F$ are in the same $\\sim_f$-equivalence class, then $$N(f)=N_A(f)=|\\det(I-A_\\ast D_\\ast)|=|\\det(I- D_\\ast)|=|L(f)|.$$\n\\end{corollary}\n\nThis condition is for example satisfied when $f_*:\\Gamma\\to \\Gamma$ maps every element into $\\Gamma\\cap G$. If this is the case, then $f_*$ induces the trivial morphism on the holonomy group, from which it follows easily that all elements in $F$ are in the same $\\sim_f$-equivalence class.\n\n\\begin{corollary}\nIf every $\\sim_{f}$-equivalence class in $F$ consists of a single element, then for all $A\\in F$, $\\det(I-A_\\ast D_\\ast)$ will be divisible by $\\#F$.\n\\end{corollary}\n\nThis is for example the case when $f_*$ induces the identity morphism on $F$, while $F$ itself is an abelian group. For instance, in Example \\ref{example Z6}, in section \\ref{examples}.\n\n\\subsection{$\\sim_f$-equivalence classes on different levels}\n\n\\begin{definition}\\label{definition boosting equivalence classes}\nLet $A\\in F$ be an element of the holonomy group $F$ of $\\Gamma\\backslash G$. Let $k|n$. 
Then we define $\\gamma_{nk}(A)$ to be the following subset of $F$:$$\\{C\\in F \\| \\textrm{ there exists } (a,A)\\in \\Gamma, \\textrm{such that } (c,C)=\\gamma_{nk}(a,A)=(a,A)f_*^k(a,A)\\dots f_*^{n-k}(a,A)\\}.$$\n\\end{definition}\n\nWith this definition in mind, we can prove the following lemma.\n\n\\begin{lemma}\\label{lemma boosting funct on f-conjugacy classes}\nIf $A\\sim_{f^k} B$, then for any $ C \\in \\gamma_{nk}(A)$, there exists a $D\\in \\gamma_{nk}(B)$ such that $C \\sim_{f^n} D$.\n\\end{lemma}\n\\begin{proof}\nBy Lemma \\ref{lemma alternatief definitie}, for any $(a,A) \\in \\Gamma$, there exist $(b,B), \\gamma \\in \\Gamma$, such that $$\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1})=(b,B).$$Because we picked $(a,A)$ arbitrarily, $C$, with $(c,C)=\\gamma_{nk}(a,A)$, is also chosen arbitrarily in the set $\\gamma_{nk}(A)$. Now, the following $(d,D)$ fulfills the necessary conditions:$$(d,D)=(b,B)f_*^k(b,B)\\dots f_*^{n-k}(b,B).$$Indeed, by using the relation $\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1})=(b,B)$, we see$$(d,D)=(\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1}))f_*^k(\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1}))\\dots f_*^{n-k}(\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1})).$$Since $f_*$ is a morphism, a simple computation shows that $$(d,D)=\\gamma\\circ (c,C) \\circ f_*^n(\\gamma^{-1}).$$\n\\end{proof}\n\nWith Definition \\ref{definition boosting equivalence classes}, we actually try to define boosting functions in terms of $\\sim_f$-equivalence classes. This might not necessarily be well-defined, in the sense that it might happen that for $A \\sim_{f^k} B$, not every element in $\\gamma_{nk}(A)$ is automatically $\\sim_{f^n}$-conjugated with every element in $\\gamma_{nk}(B)$. Note that Lemma \\ref{lemma boosting funct on f-conjugacy classes} tells us that this would be the case if every element in $\\gamma_{nk}(A)$ is in the same $\\sim_{f^n}$-equivalence class. Because of Corollary \\ref{corollary D invertible}, we know that $\\gamma_{nk}(A)$ will be a singleton whenever $D$ is invertible, so in that case, these boosting functions are well-defined on the equivalence classes. Note that it might also not necessarily be true that a $\\gamma_{nk}([A]_k)$, by which we mean the set $$\\bigcup_{B\\sim_{f^{k}}A}\\gamma_{nk}(B),$$is a full $\\sim_{f^n}$-equivalence class.\n\\medskip\n\nAlthough boosting functions might generally not behave well on $\\sim_{f^k}$-equivalence classes, we can still use them as a tool for the computation of $NF_n(f)$. The reason for this is the fact that two fixed point classes above the same equivalence class $[A]$ will boost in the exact same way. So, if we know how one fixed point class behaves, every fixed point class above the same equivalence class will behave in the same way.\n\n\\medskip\n\nThe following lemma is actually all we need to prove the statement above.\n\\begin{lemma}\\label{lemma determinants for boosted}\nLet $(d,D)$ be an affine homotopy lift of $f$. 
If $B, C\\in \\gamma_{nk}(A)$, then $$\\det(I-B_\\ast D_\\ast^n)=\\det(I-C_\\ast D_\\ast^n).$$\n\\end{lemma}\n\\begin{proof}\nSince there exist $(a_1,A)$ and $(a_2,A)$, such that $\\gamma_{nk}(a_1,A)=(b,B)$ and $\\gamma_{nk}(a_2,A)=(c,C)$, for certain $b,c\\in G$, we know that $$((a_1,A)(d,D)^k)^{\\frac{n}{k}}=(b,B)(d,D)^n \\textrm{ and }((a_2,A)(d,D)^k)^{\\frac{n}{k}}=(c,C)(d,D)^n.$$So, by just looking at the rotational part, we see $$BD^n=(AD^k)^{\\frac{n}{k}}=CD^n.$$By taking the differential, we obtain the desired result.\n\\end{proof}\n\n\n\\begin{proposition}\\label{prop similar boosting}\nSuppose $p(\\Fix((a,A)\\circ \\tilde{f}^k))$ is a fixed point class at level $k$ of a continuous map $f$ on an infra-nilmanifold. If this fixed point class boosts (in)essentially to level $n$, then every fixed point class of the form $p(\\Fix((b,B)\\circ \\tilde{f}^k))$, with $B\\in [A]_k$ also boosts (in)essentially to level $n$.\n\\end{proposition}\n\\begin{proof}\nSuppose that $(d,D)$ is an affine homotopy lift of $f$. By Remark \\ref{remark essential of inessential}, we know that $p(\\Fix((a,A)\\circ \\tilde{f}^k))$ is an essential fixed point class if and only if $\\det(I-A_\\ast D_\\ast^k)\\neq 0$. Take an arbitrary fixed point class $p(\\Fix((b,B)\\circ \\tilde{f}^k))$, with $B\\in [A]_k$. Because of Lemma \\ref{lemdet}, we already know that $\\det(I-B_\\ast D_\\ast^k)\\neq 0$ and that $p(\\Fix((b,B)\\circ \\tilde{f}^k))$ is an essential fixed point class.\n\n\\medskip\n\nNow suppose that $\\gamma_{nk}(a,A)=(c,C)$ and $\\gamma_{nk}(b,B)=(e,E)$. This means that $C\\in \\gamma_{nk}(A)$ and $E\\in \\gamma_{nk}(B)$. By Lemma \\ref{lemma boosting funct on f-conjugacy classes}, we know that there exists $E_0\\in \\gamma_{nk}(B)$, such that $C\\in [E_0]_n$. Lemma \\ref{lemdet} now tells us that $$\\det(I-C_\\ast D_\\ast^n)=\\det(I-E_{0 \\ast} D_\\ast^n).$$By Lemma \\ref{lemma determinants for boosted} and by the fact that $E,E_0\\in \\gamma_{nk}(B)$, we also know that $$\\det(I-E_\\ast D_\\ast^n)=\\det(I-E_{0 \\ast} D_\\ast^n).$$Because $\\det(I-C_\\ast D_\\ast^n)=\\det(I-E_\\ast D_\\ast^n)$, we know that $[(a,A)]_k$ boosts essentially to level $n$ if and only if $[(b,B)]_k$ boosts essentially to level $n$. A similar thing applies for inessential boosting.\n\\end{proof}\n\n\\subsection{Examples}\\label{examples}\n\nSo, we know that every fixed point class above $[A]$ boosts in exactly the same way and we also know that the number of essential fixed point classes above $[A]$ equals $N_A(f)$. This is a tool we can use to compute $\\# IIB_n(f)$ in a more efficient way. We show how this can be done by looking at a few examples.\n\n\\begin{example}\nLet us first try to compute the Nielsen periodic numbers of the maps in Example~\\ref{Example not weakly Jiang}. We will use the matrix description from \\cite{duga14-1}.\nLet the Klein bottle group be generated by the following two affine transformations:\n$$\\alpha=(a,A)=\\left(\\begin{pmatrix}\n\\frac{1}{2}\\\\\n\\frac{1}{2}\\\\\n\\end{pmatrix}, \\begin{pmatrix}\n1&0\\\\\n0&-1\n\\end{pmatrix}\\right) \\textrm{ and } \\beta=(e_2, \\Id),$$where $e_2$ denotes the second element of the standard basis of $\\R^2$. Suppose that $k\\neq 1$ is odd and that $p\\in \\R$. 
Then, the map induced by $$(d,D)=\\left(\\begin{pmatrix}\np\\\\\n\\frac{1}{2}\\\\\n\\end{pmatrix}, \\begin{pmatrix}\nk&0\\\\\n0&-1\n\\end{pmatrix}\\right)$$will induce the same morphism described in Example \\ref{Example not weakly Jiang}.\n\n\\medskip\n\nIt is clear that $D$ commutes with both $A$ and $\\Id$, so $f_*$ induces the identity morphism on $F$ and therefore, every $\\sim_{f^n}$-equivalence class consists of precisely one element. Also, it easy to compute that $[A]_l$ is essential if and only if $l$ is even, while $[\\Id]_l$ is essential if and only if $l$ is odd. Now, suppose that $m=ql$, then a simple computation shows that $[\\Id]_l$ always boosts to $[\\Id]_m$. Also, $[A]_l$ will boost to $[A]_m$ if $q$ is odd and to $[\\Id]_m$ if $q$ is even. The reason for this, lies in the following computation:$$(AD^l)^q=A^qD^{ql}=A^qD^m.$$All together, we see that every even boost ($q$ is even) of an essential fixed point class is inessential while every odd boost is essential.\n\n\\medskip\n\nAs a consequence, we see that if $n$ is odd, $IIB_n(f)$ is the empty set and by Proposition \\ref{propIIB}, it follows that $$NF_n(f)=N(f^n).$$On the other hand, if $n$ is even, the only essential fixed point classes that boost inessentially to level $n$ pass through level $\\frac{n}{2}$. Every essential fixed point class at this level will boost inessentially to level $n$ and every element in $IIB_{\\frac{n}{2}}(f)$ will also boost inessentially to level $n$. Therefore: $$NF_n(f)=N(f^n)+NF_{\\frac{n}{2}}(f).$$\n\nFor the sake of completeness, the case where $k=1$ also gives us a map on the Klein bottle. In this case, an easy computation shows that $N(f^n)=0$ for every $n$. As a consequence, every fixed point class at every level is inessential and hence $NF_n(f)=0$, for every integer $n$.\n\\end{example}\n\nThe following example will illustrate the use of $\\sim_f$-equivalence classes a little more. \n\n\\begin{example}\\label{example Z3}\nLet $\\Gamma$ be the Bieberbach group with generators:$$(a,A)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n\\frac{1}{3}\\\\\n\\end{pmatrix}, \\begin{pmatrix}\n-1&1&0\\\\\n -1 & 0 & 0\\\\\n 0&0&1 \\\\\n\\end{pmatrix}\\right)\n\\textrm{, }(e_1, \\Id)\\textrm{ and }(e_2, \\Id).$$In \\cite{duga14-1} one can find that the affine map $$(d,D)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n0\\\\\n\\end{pmatrix}, \\begin{pmatrix}\n0&1&0\\\\\n 1 & 0 & 0\\\\\n 0&0&2 \\\\\n\\end{pmatrix}\\right)$$induces a continuous map on the flat manifold $\\Gamma\\backslash\\R^3 $.\n\n\\medskip\n\nAn easy computation shows that $DA=A^2D$ and that $f_*$ induces a morphism $\\overline{f}_*$ on the holonomy group $\\Z_3$, such that $\\overline{f}_*^2=\\Id$. So, whenever $k$ is even, every $\\sim_{f^k}$-equivalence class is a singleton.\n\\medskip\n\nTo determine $[\\Id]_k$, with $k$ odd, note that $\\overline{f}_*^k=\\overline{f}_*$. Also, $\\overline{f}_*(A)=A^2$ and $\\overline{f}_*(A^2)=A$. So, the following are certainly subsets of $[\\Id]_k$:\n$$A\\Id f_\\#(A)^{-1}=\\{A^2\\} \\textrm{ and } A^2\\Id f_\\#(A^2)^{-1}=\\{A\\}.$$Hence, $[\\Id]_k=F$ for all odd $k$. \n\n\\medskip\n\nAn easy computation shows that $[\\Id]_k$, with $k$ odd, is always inessential. As a consequence, for every odd $n$, $$NF_n(f)=N(f^n)=0.$$When $k$ is even, $[\\Id]_k$ is inessential, while $[A]_k$ and $[A^2]_k$ are essential. In this case, every element of $F$ commutes with $D^k$, as $\\overline{f}_*^2$ is the identity morphism. Hence, the class $[A^i]_k$ boosts to the class $[A^{ip}]_{pk}$. 
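The (in)essentiality claims above can be checked directly via Remark~\\ref{remark essential of inessential} (note that $A_\\ast=A$ and $D_\\ast=D$, since $G=\\R^3$). The upper left $2\\times 2$ block of $D^k$ is the identity for even $k$ and the coordinate swap for odd $k$, while its last diagonal entry is $2^k$; in both cases the upper left block of $I-D^k$ is singular, so $\\det(I-D^k)=0$ and $[\\Id]_k$ is inessential at every level. For even $k$, the upper left blocks of $AD^k$ and $A^2D^k$ are rotations of order $3$, which gives $$\\det(I-AD^k)=\\det(I-A^2D^k)=3(1-2^k)\\neq 0,$$so $[A]_k$ and $[A^2]_k$ are indeed essential.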
So, an essential class can only boost inessentially if $p\\equiv 0 \\mod 3$. \n\n\\medskip\n\nIn Figure \\ref{fig1}, a scheme can be found where all these boosting relations are shown up to level $6$. In this scheme, inessential and essential fixed point classes are denoted by a circle and a square respectively. Only the boosting from an essential to an inessential class are drawn, since these are the only ones that need to be considered for the computation of $NF_n(f)$.\n\n\\begin{figure}\n\n\\centering\n\\begin{tikzpicture}[line cap=round,line width = .5pt,line join=round, >=triangle 45, x=1.5cm,y=1.5cm]\n\n\\draw[color=black](0,1.2) circle (0.3);\n\\draw[color=black](1.2,1.2) circle (0.3);\n\\draw[color=black](.9,-.9) rectangle (1.5,-1.5);\n\\draw[color=black](.9,-.3) rectangle (1.5,.3);\n\\draw[color=black](2.4,1.2) circle (0.3);\n\\draw[color=black](3.6,1.2) circle (0.3);\n\\draw[color=black](3.3,-.9) rectangle (3.9,-1.5);\n\\draw[color=black](3.3,-.3) rectangle (3.9,.3);\n\\draw[color=black](4.8,1.2) circle (0.3);\n\\draw[color=black](6,1.2) circle (0.3);\n\\draw[color=black](5.7,-.9) rectangle (6.3,-1.5);\n\\draw[color=black](5.7,-.3) rectangle (6.3,.3);\n\n\\draw[color=black] (0,1.2) node {$[\\Id]_1$};\n\\draw[color=black] (1.2,1.2) node {$[\\Id]_2$};\n\\draw[color=black] (1.2,0) node {$[A]_2$};\n\\draw[color=black] (1.2,-1.2) node {$[A^2]_2$};\n\\draw[color=black] (2.4,1.2) node {$[\\Id]_3$};\n\\draw[color=black] (3.6,1.2) node {$[\\Id]_4$};\n\\draw[color=black] (3.6,0) node {$[A]_4$};\n\\draw[color=black] (3.6,-1.2) node {$[A^2]_4$};\n\\draw[color=black] (4.8,1.2) node {$[\\Id]_5$};\n\\draw[color=black] (6,1.2) node {$[\\Id]_6$};\n\\draw[color=black] (6,0) node {$[A]_6$};\n\\draw[color=black] (6,-1.2) node {$[A^2]_6$};\n\n\\draw[->,color=black] (1.55,0) -- (3.25,0);\n\\draw[->,color=black] (1.55,-1.2) -- (3.25,-1.2);\n\\draw[->,color=black] (1.55,.1) .. controls (3,.6) and (5,.6) .. (5.7,1.1);\n\\draw[->,color=black] (1.55,-1.1) .. controls (3.6,-.7) .. (5.75,1);\n\\end{tikzpicture}\n\n\\caption{A scheme of $\\sim_{f^k}$-equivalence classes at different levels for Example \\ref{example Z3}} \\label{fig1}\n\\end{figure}\n\n\\medskip\n\nSo, suppose that $n=3^pq$ is even, such that $\\gcd(3,q)=1$, then $$NF_n(f)=\\sum_{i=0}^{p} N(f^{3^iq}).$$\n\\end{example}\n\nBy the following example, we clarify the use of Theorem \\ref{thmNAf}.\n\n\\begin{example}\\label{example Z6}\nLet $\\Gamma$ be the Bieberbach group with generators:\n$$(a,A)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n\\frac{1}{6}\n\\end{pmatrix}, \\begin{pmatrix}\n1 & -1 &0\\\\\n1& 0 &0 \\\\\n0&0 & 1\n\\end{pmatrix}\\right), (e_1,\\Id)\\textrm{ and } (e_2,\\Id).$$\n\nIn \\cite{duga14-1}, one can find that the following affine map induces a map on the infra-nilmanifold $\\Gamma\\backslash \\R^3$:\n\n$$(d,D)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n0\n\\end{pmatrix}, \\begin{pmatrix}\n0 & 1 &0\\\\\n-1& 1 &0 \\\\\n0&0 &7\n\\end{pmatrix}\\right).$$\n\nNote that every element of $F$ commutes with $D$, so every $\\sim_f$-equivalence class consists of precisely one element. This also means that the class $[A^i]_k$ boosts to the class $[A^{ip}]_{pk}$. It is also quite easy to compute that $[A^p]_k$ is inessential if and only if $p\\equiv k \\mod 6$. 
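Indeed, write $R$ for the upper left $2\\times 2$ block of $A$, an order $6$ rotation, and note that the corresponding block of $D$ equals $R^{-1}$, while its last diagonal entry is $7$. Remark~\\ref{remark essential of inessential} then yields $$\\det(I-A^pD^k)=\\det(I-R^{p-k})(1-7^k)=\\left(2-2\\cos\\frac{(p-k)\\pi}{3}\\right)(1-7^k),$$which vanishes exactly when $p\\equiv k \\mod 6$.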
This boosting scheme can be found in Figure \\ref{fig2}.\n\n\\begin{figure}\n\n\\centering\n\\begin{tikzpicture}[line cap=round,line width = .5pt,line join=round, >=triangle 45, x=1.8cm,y=1.5cm]\n\n\\draw[color=black](0,2) circle (0.3);\n\\draw[color=black](-.3,.9) rectangle (.3,1.5);\n\\draw[color=black](-.3,.1) rectangle (.3,.7);\n\\draw[color=black](-.3,-.7) rectangle (.3,-.1);\n\\draw[color=black](-.3,-1.5) rectangle (.3,-.9);\n\\draw[color=black](-.3,-2.3) rectangle (.3,-1.7);\n\n\\draw[color=black](2.1,1.7) rectangle (2.7,2.3);\n\\draw[color=black](2.4,1.2) circle (0.3);\n\\draw[color=black](2.1,.1) rectangle (2.7,.7);\n\\draw[color=black](2.1,-.7) rectangle (2.7,-.1);\n\\draw[color=black](2.1,-1.5) rectangle (2.7,-.9);\n\\draw[color=black](2.1,-2.3) rectangle (2.7,-1.7);\n\n\\draw[color=black](4.5,1.7) rectangle (5.1,2.3);\n\\draw[color=black](4.5,.9) rectangle (5.1,1.5);\n\\draw[color=black](4.8,.4) circle (0.3);\n\\draw[color=black](4.5,-.7) rectangle (5.1,-.1);\n\\draw[color=black](4.5,-1.5) rectangle (5.1,-.9);\n\\draw[color=black](4.5,-2.3) rectangle (5.1,-1.7);\n\\draw[color=black] (0,2) node {$[A]_1$};\n\\draw[color=black] (0,1.2) node {$[A^2]_1$};\n\\draw[color=black] (0,.4) node {$[A^3]_1$};\n\\draw[color=black] (0,-.4) node {$[A^4]_1$};\n\\draw[color=black] (0,-1.2) node {$[A^5]_1$};\n\\draw[color=black] (0,-2) node {$[\\Id]_1$};\n\n\\draw[color=black] (2.4,2) node {$[A]_2$};\n\\draw[color=black] (2.4,1.2) node {$[A^2]_2$};\n\\draw[color=black] (2.4,.4) node {$[A^3]_2$};\n\\draw[color=black] (2.4,-.4) node {$[A^4]_2$};\n\\draw[color=black] (2.4,-1.2) node {$[A^5]_2$};\n\\draw[color=black] (2.4,-2) node {$[\\Id]_2$};\n\n\\draw[color=black] (4.8,2) node {$[A]_3$};\n\\draw[color=black] (4.8,1.2) node {$[A^2]_3$};\n\\draw[color=black] (4.8,.4) node {$[A^3]_3$};\n\\draw[color=black] (4.8,-.4) node {$[A^4]_3$};\n\\draw[color=black] (4.8,-1.2) node {$[A^5]_3$};\n\\draw[color=black] (4.8,-2) node {$[\\Id]_3$};\n\n\\draw[->,color=black] (.35,1.15) -- (2.05,-.3);\n\\draw[->,color=black] (.35,.35) -- (2.05,-1.9);\n\\draw[->,color=black] (.35,-.35) -- (2.05,1.2);\n\\draw[->,color=black] (.35,-1.1) -- (2.05,-.4);\n\\draw[->,color=black] (.35,-1.95) -- (2.05,-2);\n\n\n\\draw[->,color=black] (.35,1.2) .. controls (3,2) .. (4.45,-1.9);\n\\draw[->,color=black] (.35,.4) .. controls (3,1) .. (4.45,.5);\n\\draw[->,color=black] (.35,-.4) .. controls (2,-2.5) and (3,-1) .. (4.45,-2);\n\\draw[->,color=black] (.35,-1.2) .. controls (2,-.1) and (3.7,-1.8) .. (4.5,.2);\n\\draw[->,color=black] (.35,-2.05) .. controls (2.4,-2.5) .. (4.45,-2.05);\n\n\\end{tikzpicture}\n\n\\caption{A scheme of $\\sim_{f^k}$-equivalence classes at the lowest levels for Example \\ref{example Z6}} \\label{fig2}\n\\end{figure}\n\n\\medskip\n\nThe only classes at level $1$ that boost inessentially to level $2$ (to $[A^2]_2$), are the inessential class $[A]_1$ and the essential class $[A^4]_1$. It is therefore clear that $\\# IIB_2(f)=N_{A^4}(f)$. By Theorem \\ref{thmNAf} and Proposition \\ref{propIIB}:$$NF_2(f)=N(f^2)+\\frac{|\\det(I-A^4D)|}{6}.$$\nIn a similar way, one can see that the only classes that boost to $[A^3]_3$ are the inessential class $[A]_1$ and the essential classes $[A^3]_1$ and $[A^5]_1$. Hence,$$NF_3(f)=N(f^3)+ \\frac{|\\det(I-A^3D)|}{6}+ \\frac{|\\det(I-A^5D)|}{6}.$$\nComputing $NF_4(f)$ becomes a little more tricky, since fixed point classes at both level $1$ and $2$ can boost to inessential fixed point classes of level $4$. 
With an easy computation, we see that the classes that boost to $[A^4]_4$ are $[A]_1,[A^4]_1,[A^2]_2$ and $[A^5]_2$. Note that we already knew that $[A]_1$ is inessential and that the essential class $[A^4]_1$ boosts inessentially to $[A^2]_2$. Therefore, there are no essential classes at level $1$ that boosts essentially to level $2$ and inessentially to level $4$. This means that $$NF_4(f)=N(f^4)+ \\frac{|\\det(I-A^5D^2)|}{6}+\\frac{|\\det(I-A^4D)|}{6}.$$\nAs $[A^i]_k$ boosts to $[A^{ip}]_{pk}$, we know that this boosting relation is a bijection between the $\\sim_{f^k}$-equivalence classes and the $\\sim_{f^{pk}}$-equivalence classes if and only if $p$ is invertible modulo $6$. Now, suppose $n>0$ is an integer, such that $\\gcd(n,6)=1$. Note that every divisor of $n$ will also be relatively prime to $6$. Because there is only one inessential class at each level and because there is a bijection between the classes at different levels and because maps on infra-nilmanifolds are essentially reducible, every essential class that boosts to level $n$ will do so in an essential way. Therefore, if $\\gcd(n,6)=1$, $$NF_n(f)=N(f^n).$$Whenever $n$ has many prime factors $2$ and $3$, it will be much harder to compute $NF_n(f)$, because many inessential boosts occur and we have to keep track of all these boostings in order to not count some of them multiple times. As an example, let us compute $NF_6(f)$. Note that $[\\Id]_6$ is the only inessential class at level $6$. The classes that boost to $[\\Id]_6$ are $[\\Id]_3, [A^3]_3, [\\Id]_2, [A^2]_2,[A^4]_2$ and all classes at level $1$. The only essential classes at level $1$ that boost to essential classes at both level $2$ and level $3$, are $[\\Id]_1$ and $[A^2]_1$. Also, there are no essential classes at level $1$ that boost to inessential classes at both level $2$ and level $3$. Hence,\n\\begin{equation}\\nonumber\n\\begin{split}\nNF_6(f)=N(f^6)+\\frac{|\\det(I-D^3)|}{6}+\\frac{|\\det(I-D^2)|}{6}+\\frac{|\\det(I-A^4D^2)|}{6}\\\\\n-\\frac{|\\det(I-D)|}{6}-\\frac{|\\det(I-A^2D)|}{6}.\n\\end{split}\n\\end{equation}\nHere, these last two terms are precisely the number of essential fixed point classes at level $1$ that boost essentially to level $2$ and level $3$. As they are counted double, we have to subtract them once.\n\\end{example}\n\nAs one can see from this last example, it can be very hard to compute $NF_n(f)$. The tools in this section are useful, but they still require a lot of manual labor. Looking at these examples, it is not unthinkable that there might not exist a general formula for $NF_n(f)$.\n\n\\section{Some properties of affine maps on infra-nilmanifolds}\nIn the last section of this paper, we will look specifically at affine maps on infra-nilmanifolds in order to derive a nice property of these maps (Theorem \\ref{theorem uiteindelijk boost inessentieel}).\n\n\\begin{lemma}\\label{lemma alternatief lemma bram 3.1}\nIf $A,B \\in \\GL_n(\\C)$ and $D\\in \\C^{n\\times n}$, such that $DA=BD$, then, for all $n>0$ it holds that $$\\det(I-(AD)^n)=\\det(I-(BD)^n).$$\n\\end{lemma}\n\\begin{proof}\nUsing the multiplicative properties of the determinant, we find the following equalities:$$\\det(I-(AD)^n)=\\det(A^{-1})\\det(I-(AD)^n)\\det(A)=\\det(I-(DA)^n)=\\det(I-(BD)^n).$$\\end{proof}\n\nRemember that a continuous map $f$ will be called \\textbf{Wecken} if and only if $\\#\\Fix(f)=N(f)$. \n\n\\begin{theorem}\\label{theorem uiteindelijk boost inessentieel}\nLet $\\overline{(d,D)}$ be an affine map on an infra-nilmanifold. 
Suppose that there exists at least one $k$, for which $N(\\overline{(d,D)}^k)\\neq 0$. Then, $D_\\ast$ is semi-hyperbolic if and only if $\\overline{(d,D)^k}$ is a Wecken map for every $k$. \n\\end{theorem}\n\\begin{proof}\nFirst, suppose that $D_\\ast$ is semi-hyperbolic. Just like in the proof of Theorem \\ref{thmNF=Nf)}, we know that every fixed point class at every level is essential. By Proposition \\ref{propaff}, it follows that $\\overline{(d,D)^k}$ is a Wecken map.\n\n\\medskip\n\nOn the other hand, suppose that $D_\\ast$ is not semi-hyperbolic. This means that there exists an eigenvalue $\\lambda$ of $D_\\ast$, such that $\\lambda^d=1$. Now we will show that every essential fixed point class will eventually be boosted to an inessential fixed point class. Pick an essential fixed point class $[(a,A)]_k$. This is possible, because not all Nielsen numbers are $0$. Let $l$ be an arbitrary positive integer and set $m=kl$. Consider the fixed point class $\\gamma_{km}([(a,A)])$. This coincides with the set $p(\\Fix(((a,A)\\circ(d,D)^k)^l))$. Now suppose that $$(b,B)\\circ (d,D)^{m}=((a,A)\\circ(d,D)^k)^l.$$This means that $BD^{m}=(AD^k)^l$ and by Remark~\\ref{remark essential of inessential}, we now know that $\\gamma_{km}([(a,A)])$ is inessential if and only if $$\\det(I-(A_\\ast D_\\ast^k)^l)=\\det(I-B_\\ast D_\\ast^{m})=0.$$By combining Lemma~\\ref{lemma alternatief lemma bram 3.1} and Lemma \\ref{Lemma Bram}, we see there exists a $C\\in F$ and a positive integer $p$, such that $$\\det(I-(A_\\ast D_\\ast^k)^l)=\\det(I-(C_\\ast D_\\ast^k)^l)\\textrm{ and } (C_\\ast D_\\ast^k)^p=D_\\ast^{kp}.$$By taking $l=\\lcm(p,d)$, we know that $(C_\\ast D_\\ast^k)^l=D_\\ast^{kl}$. Also, $1$ is an eigenvalue of $D_\\ast^{kl}$. By combining all of the above, we see that $$\\det(I-B_\\ast D_\\ast^{m})=\\det(I-(A_\\ast D_\\ast^k)^l)=\\det(I-(C_\\ast D_\\ast^k)^l)=\\det(I-( D_\\ast^k)^l)=0.$$As there is certainly one essential fixed point class $[(a,A)]_k$, we know that it will boost to an inessential fixed point class $[(b,B)]_m$. This actually means that $[(a,A)]_k\\subset [(b,B)]_m$, which implies that the inessential fixed point class $[(b,B)]_m$ is non-empty, which implies that $\\overline{(d,D)}^m$ is not a Wecken map.\n\\end{proof}\n\n\\begin{corollary}\\label{corWecken}\nSuppose that there exists at least one $k$, such that $N(\\overline{(d,D)}^k)\\neq 0$. Whenever $\\Fix(\\overline{(d,D)}^k)$ is finite for every $k$, $\\overline{(d,D)}$ will be Wecken at every level and $D_\\ast$ is semi-hyperbolic.\n\\end{corollary}\n\\begin{proof}\nDue to Proposition~\\ref{propaff}, we know that every non-empty inessential fixed point class contains infinitely many fixed points.\n\\end{proof}\n\n\\begin{corollary}\\label{cor NF_n(f)>Nfn}\nSuppose that $f$ is a continuous map on an infra-nilmanifold that is not semi-hyperbolic. Suppose that $N(f^k)\\neq 0$ for at least one $k$. Then, at certain levels, there exist non-empty inessential fixed point classes. Also, there exist $n>0$, such that $$NF_n(f)>N(f^n).$$\n\\end{corollary}\n\\begin{proof}\nBy examining the proof of Theorem~\\ref{theorem uiteindelijk boost inessentieel}, we see that there will exist an essential fixed point class which boosts to an inessential fixed point class. Therefore, this inessential fixed point class will be non-empty. On the other hand, due to Proposition~\\ref{propIIB}, the second statement follows.\n\\end{proof}\n\n\\begin{corollary}\\label{cor NF=Nf}\nSuppose that $f$ is a continuous map on an infra-nilmanifold. 
Then, $NF_n(f)=N(f^n)$, for all $n$ if and only if $f$ is a semi-hyperbolic map or $N(f^n)=0$, for all $n$.\n\\end{corollary}\n\\begin{proof}\nWhen dealing with semi-hyperbolic maps, the statement follows from Theorem \\ref{thmNF=Nf)} and Corollary \\ref{cor NF_n(f)>Nfn}. Hence, the only thing left to prove is that $NF_n(f)=0$ if $N(f^n)=0$ for all $n$. This actually follows from Proposition \\ref{propIIB}. As all fixed point classes at all levels are inessential, we know that $\\#IIB_n(f)=0$. As we already knew that $N(f^n)=0$, it follows by Proposition \\ref{propIIB} that $NF_n(f)=0$.\n\\end{proof}\n\nAgain, we can translate some of these results into comparable results concerning dynamical zeta functions.\n\n\\begin{corollary}\nSuppose that $f$ is continuous map on an infra-nilmanifold. Then, $NF_f(z)=N_f(z)$ if and only if $f$ is a semi-hyperbolic map or $N(f^n)=0$, for all $n$.\n\\end{corollary}\n\\begin{proof}\nThis follows immediately from Corollary \\ref{cor NF=Nf}.\n\\end{proof}\n\nWe can actually say something more about another zeta function. In \\cite{fels88-1}, the following dynamical zeta function was defined: $$M_g(z)=\\exp\\left(\\sum_{k=1}^\\infty \\frac{\\#\\Fix(g^k)z^k}{k}\\right).$$When working with affine semi-hyperbolic maps on infra-nilmanifolds, we actually know how this zeta function looks, due to the following result.\n\n\\begin{corollary}\nSuppose that $g$ is an affine semi-hyperbolic map on an infra-nilmanifold, then $$M_g(z)=NF_g(z).$$ \n\\end{corollary}\n\\begin{proof}\nEvery such a map is a Wecken map on every level, due to Theorem \\ref{theorem uiteindelijk boost inessentieel}. From this it follows, for all $k>0$, that $$\\#\\Fix(g^k)=N(g^k)=NF_k(g).$$ \n\\end{proof}\n\nThis result partially answers a question asked in \\cite{jezi03-01} in the case of infra-nilmanifolds. Given a map $f$ on a manifold, the author of this paper asked if it would be possible to find a map $g$ homotopic to $f$, such that $$\\#\\Fix(g^k)=NF_k(g)=NF_k(f),$$for all $k$. This question is equivalent to asking whether there exists a map $g$, homotopic to $f$, such that $M_g(z)$ and $NF_g(z)$ coincide.\n\n\\section*{Acknowledgements}\n\n\nI would like to thank my advisor Karel Dekimpe for all useful comments on the first versions of this paper.\n\n\\medskip\n\nThis work was supported by the research fund KU Leuven.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Acknowledgement}\n\\section{Acknowledgement}\nWe thank the reviewers for their valuable comments. The work was supported by the National Key Research and Development Program of China (No. 2019YFB1704003), the National Nature Science Foundation of China (No. 62021002 and No. 71690231), NSF under grants III-1763325, III-1909323, III-2106758, SaTC-1930941, Tsinghua BNRist and Beijing Key Laboratory of Industrial Bigdata System and Application.\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.4]{figures\/example.pdf}\n \n \\caption{Example images and their corresponding scene graphs. Given the query, the original scene graph (left) is modified to be the target scene graph (right). }\n \\label{fig:example}\n \n\\end{figure}\nA scene graph is a structural representation that captures the semantics of visual scenes by encoding object instances, attributes of objects, and relationships between objects.\n~\\citep{Johnson2015ImageRU}. 
As shown in Figure~\\ref{fig:example}, the scene graph encodes objects (e.g.\\ ``\\textit{Boy}'', ``\\textit{Racket}''), attributes (e.g.\\ ``\\textit{Girl is standing}''), and relations (``\\textit{Boy holding racket}''). Scene graphs are able to capture the interactions between text and images by associating objects in the graph with regions of an image and modeling the relations between objects. Therefore, it has been used in the cross modality task such as image retrieval, image captioning, and visual question answering~\\citep{Schuster2015GeneratingSP, shi2019explainable, YangTZC19, Wang2020CrossmodalSG}. \n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.63]{figures\/framework.pdf}\n \n \\caption{Examples of basic operations INSERT and DELETE for scene graph modification. $Q$ denotes the textual query, $G_{S}$ denotes the source scene graph, $G_{T}$ denotes the target scene graph and $G_{I}$ is the extended graph. }\n \\label{fig:ise}\n \n\\end{figure*}\n\nRecently, modifying the scene graph based on the input becomes an emerging research direction as cross-modal systems may need to resort to an interactive process through multiple iterations~\\citep{Ramnath2019SceneGB,He2020SceneGM}. Take text-based image retrieval as an example, users start with a query describing the main objects or topics they are looking for, then modify the query to add more constraints or provide additional information based on previous search results. Instead of directly manipulating images, scene graphs can be used to convert the image-editing problem into a graph-editing problem, conditioned on the textual query. As shown in Figure~\\ref{fig:example}, given a retrieved image from the last turn, if the user wants to see a girl rather than a boy holding a racket, he will enter the query ``\\textit{I would like to see a girl holding racket}'' to the system. According to the query, the object ``\\textit{Boy}'' in the original scene graph will be substituted with the object ``\\textit{Girl}''. The target image can be retrieved given the updated scene graph. The key challenge in this process is how to modify the corresponding partial structure in the original scene graph based on understanding the natural language query. \n\n\n\n\n\n\n\n\nPrior effort framed this scene graph modification (SGM) task as conditional graph generation~\\citep{He2020SceneGM}, where the scene graph is generated from the scratch condition on the original graph and query~\\citep{You2018GraphRNNGR,Guo2019DenselyCG,Cai2020GraphTF}. However, rebuilding the entire scene graph may not be an optimal solution, as the model has to generate the partial structure of the original graph that should be unmodified. Moreover, nodes and edges of the scene graph are constructed separately in their proposed framework, which generates all the nodes first then attaches edges between generated nodes in the second pass. Such an approach may lead to the lack of the modeling capability of interactions between node prediction and edge prediction.\n\nInstead of rebuilding the whole scene graph, we introduce a novel formulation for SGM -- incremental structure expanding (ISE), which is able to build the target graph by gradually expanding the original structure. At each step, ISE generates the connecting edges between the existing nodes and the newly generated node, upon which the type of the new node is jointly decided. 
Based on the formalism, our proposed model is able to iterate between finding the relevant part in the query and reading the partially constructed scene graph, inferring more accurate and harmonious expansion decisions progressively. \nExperiments on three SGM benchmarks demonstrate the effectiveness of the proposed approach, which is able to outperform previous state-of-the-art models by large margins. To test the ability of a model under a complex scenario, we further construct a more challenging dataset from the remote sensing domain~\\citep{lu2017exploring}, which has much more modification operations based on the more complicated queries compared with the existing scene graph modification datasets. \nOur key contributions are summarized as follows: \n\n\\begin{itemize}\n \\item We propose a novel formulation for scene graph modification, allowing incremental expansion of the source scene graph rather than the regeneration of the target graph.\n \\item We further construct a challenging dataset that contains more complicated queries and larger scene graphs. Extensive experiments on four SGM datasets show the effectiveness of our proposed approach.\n \\item Experiments on four benchmarks demonstrate the effectiveness of our approach, which surpasses the previous state-of-the-art model by large margins. \n\\end{itemize}\n\n\n\n\n\n\n\n\n\\section{Incremental Structure Expanding}\n\\label{sec:ise}\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.67]{figures\/model.pdf}\n \\caption{Overview of the model architecture. }\n \\label{fig:model}\n \n\\end{figure*}\n\nIn scene graph modification, a node or multiple nodes can be inserted to, deleted from or replaced with other nodes in the scene graph. \\citet{He2020SceneGM} defined the scene graph modification task as a conditional graph generation problem. Formally, given the source scene graph $G_{S}$ and the natural language query $Q$, the target scene graph $G_{T}$ is generated by maximizing the conditional probability $p(G_{T}|G_{S}, Q)$. \n\nInstead of generating the entire target graph $G_{T}$, we frame the task as an incremental structure expanding, which extends the source scene graph $G_{S}$ one node at a time, as well as the edges associated with the node. Such a formulation does not require the model to rebuild the unmodified structure of the source scene graph.\n\nUnder this formulation, we first define two basic operations: INSERT and DELETE. Scene graph modification can be viewed as combining and applying these two operations multiple times. Formally, given the query $Q$, a sequence of $n$ operations ${a_{1}, a_{2}, ..., a_{n}}$ are selected from a set of graph modification operations $\\mathcal{A}$ = \\{INSERT, DELETE\\}. After applying the operations to the source scene graph $G_{s}$, the target scene graph $G_{t}$ is derived. Each operation is defined as: \n\n\\begin{itemize}\n \\item \\textbf{INSERT}: A new node $o$ is added to $G_{s}$, and edges are attached between $o$ and existing nodes in $G_{s}$. As shown in Figure~\\ref{fig:ise} (a), the node ``\\textit{Ball}'' is added to $G_{s}$ and an edge between ``\\textit{Ball}'' and ``\\textit{Holding}'' is attached, according to the query ``\\textit{holding a racket and a ball}''.\n \\item \\textbf{DELETE}: As shown in Figure~\\ref{fig:ise} (b). A node $o$ is removed from $G_{s}$, as well as its associated edges. 
As shown in Figure~\\ref{fig:ise} (b), the node ``\\textit{Blue}'' is removed from $G_{s}$ and the edge between ``\\textit{Racket}'' and ``\\textit{Blue}'' is removed either, according to the query ``\\textit{a racket not a blue}''.\n\\end{itemize}\n\n\n\n\nInspired by incremental parsing~\\citep{Nivre2004IncrementalityID,DyerBLMS15,Cai2020AMRPV,zhang2021crowdsourcing,zhang2022identifying}, we design a data structure called extended graph $G_{I}$, which can be used to model INSERT and DELETE under the graph expansion setting. As shown in Figure~\\ref{fig:ise} (a), the extended graph $G_{T}$ is identical with the target graph $G_{T}$ after applying INSERT. As for DELETE, we introduce a dummy node ``Delete'', which is attached to the node in the source graph $G_{S}$ that should be removed. For example, the dummy node ``Delete'' is attached to the node ``Blue'' in $G_{T}$. In the postprocessing stage, nodes attached with the dummy node ``Delete'' will be removed. Using this formulation, we are able to model scene graph modification by incrementally expanding the source graph $G_{S}$ to the extended graph $G_{I}$, which can be converted to the target graph $G_{T}$ without any losses. \n\nIf the modification requires multiple operations, there will exist multiple node orderings. Take node substitution as an example, replacing a node $o_{i}$ with $o_{j}$ in $G_{s}$ can be viewed as DELETE the node $o_{i}$ first, then INSERT the node $o_{j}$, or vice versa. In practice, we impose that the DELETE operation always comes before INSERT, then the breadth-first search is used to define a deterministic node ordering. \n\n\n\\section{Model Architecture}\n\\label{sec:model}\n\n\n\nIn this section, we will present the model based on the incremental structure expanding formulation. Figure~\\ref{fig:model} gives an overview of the proposed model, which consists of five components including query encoder, graph encoder, feature fusion, edge decoder and node decoder. \n\n\n\\paragraph{Query Encoder} This module is used to encode the query $Q$ by generating the representation of each token of it.\n\n\\paragraph{Graph Encoder} This module is used to encode the graph by generating the representation of each node of it. Note that the representations of the graph are constructed incrementally during the expanding progresses based on the updated graph of the last time step. The graph is the source graph $G_{S}$ at the first timestep.\n\n\\paragraph{Feature Fusion} this module aims to combine the representations from query and graph encoder, then served as a writable memory, which is updated based on the information from edge and node decoder during the incremental expansion. \n\n\\paragraph{Edge Decoder} this module is used to predict the edges between the newly generated node and existing nodes of the graph, then update the memory of the feature fusion module with edge information.\n\n\\paragraph{Node Decoder} this module is used to generate a new node of the graph, then update the memory of the feature fusion module with node information.\n\n\n\n\n\n\\subsection{Query Encoder \\& Graph Encoder}\nFor fair comparisons with the previous work~\\citep{He2020SceneGM}, our query encoder and graph encoder are based on the vanilla transformer~\\citep{VaswaniSPUJGKP17}, which consists of multi-head self attention (MSA) and position-wise feed-forward network (FFN) blocks. The FFN contains two layers with a ReLU non-linearity. 
Layer normalization (\\citealt{BaKH16}) is applied before every block, and residual connections~\\citep{HeZRS16} after every block. \n\nFormally, given an input query $Q$ with $n$ tokens, each token embedding is randomly initialized and positional encoding is added to the token embedding to retain positional information. The resulted embeddings are denotes as $\\mathbf{x} = \\{x_0, x_1,..., x_n\\}$. Similar to BERT~\\citep{DevlinCLT19}, a special token is appended to the query as $x_{0}$ for sentence encoding. Transformations in the query encoder can be denoted as:\n\n\\begin{align}\n \\mathbf{x}^{l^{\\prime}} = LN(MSA(\\mathbf{x}^{l-1}) + \\mathbf{x}^{l-1}), \\\\\n \\mathbf{x}^{l} = LN(FFN(\\mathbf{x}^{l^{\\prime}}) + \\mathbf{x}^{l^{\\prime}}).\n\\end{align}\n\nAfter stacking $L$ blocks, we obtained the contextualized token representations from the query encoder, denoted as $\\{x_0^L, x_1^L,..., x_n^L\\}$. The first vector $x_{0}$ is treated as the sentence-level representation of the query and will be used as the initial state during expansion. For clarity, we denote the vectors as $\\mathbf{x}$$\\in$$\\mathbb{R}^{(n+1) \\times d}$, where $d$ is the dimension. \n\n\nAs for the graph encoder, we treat the input graph as a sequence of nodes in the chronological order of when they are inserted into the graph as discussed in Section~\\ref{sec:ise}. Formally, given the graph $G_{t}$ at the time step $t$, we take its node sequence $\\{o_1, o_2, ..., o_{t-1}\\}$ as the input. A transformer architecture is also applied to obtain the contextualized node embeddings. Notice that the contextualized representation of the graph is constructed incrementally as the expanding progress. Therefore, we apply the vanilla transformer with masked self-attention as the graph encoder, which only allows each position in the node sequence to attend to all positions up to and including that position. For brevity, we denoted the resulted contextualized node representations as $\\mathbf{y}$$\\in$$\\mathbb{R}^{m \\times d}$.\n\n\n\n\n\\subsection{Feature Fusion}\nUnlike the conventional sequence-to-sequence model that only has one encoder, our model contains two encoders. Previous work~\\citep{He2020SceneGM} proposed to use gating mechanism and cross attention to combine the representations of resulted representations from query and graph encoders. We choose to use vanilla multi-head attention mechanism~\\citep{VaswaniSPUJGKP17} to fuse the features from these encoders. Formally, at each time step $t$, the feature fusion component combines the query and graph representations for gradually locating and collecting the most relevant information for the next expansion:\n\n\\begin{align}\n z_t^{l} = LN(MSA(h_t^{l-1}, \\mathbf{x}) + h_t^{l-1}), \\\\\n z_t^{l^{\\prime}} = LN(MSA(z_t^{l}, \\mathbf{y}) + z_t^{l}), \\\\\n h_t^{l} = LN(FFN(z_t^{l^{\\prime}}) + z_t^{l^{\\prime}}).\n\\end{align}\n\nThe initial expansion state of $h_t^{0}$ is initialized with $x_{0}$. For clarity, we denote the last hidden state $h_{t}^{L}$ as $h_{t}$, which is the expansion state at the time step $t$. We now proceed to present the details of each decision stage of one expansion step.\n\n\\subsection{Edge Decoder}\nAt the $t$-th time step, the edge decoder takes the expansion state $h_{t}$ from the feature fusion module and the contextualized representation $\\mathbf{y}$ from the graph encoder as the inputs, and predicts which nodes in the current graph should be attached to the new node. 
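For concreteness, the fusion transformations of the previous subsection, which produce the expansion state $h_{t}$ used here, can be sketched in PyTorch as follows; the layer sizes and module names below are illustrative assumptions rather than the exact configuration used in our experiments.

\\begin{verbatim}
import torch.nn as nn

class FusionLayer(nn.Module):
    # One fusion layer: attend to the query tokens, then to the graph
    # nodes, then apply the position-wise FFN, each sub-block followed
    # by a residual connection and layer normalization.
    def __init__(self, d_model=256, n_heads=8, d_ff=512):
        super().__init__()
        self.attn_query = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.attn_graph = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.ln1, self.ln2, self.ln3 = (nn.LayerNorm(d_model)
                                        for _ in range(3))

    def forward(self, h, x, y):
        # h: expansion state (batch, 1, d), x: query tokens (batch, n+1, d),
        # y: graph nodes (batch, m, d)
        z, _ = self.attn_query(h, x, x)    # attention over the query
        z = self.ln1(z + h)
        u, _ = self.attn_graph(z, y, y)    # attention over the graph
        u = self.ln2(u + z)
        return self.ln3(self.ffn(u) + u)   # position-wise FFN
\\end{verbatim}

Stacking $L$ such layers gives the full fusion module.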
Inspired by~\\citet{CaiL19a} and~\\citet{Cai2020AMRPV}, we leverage multi-head attention and take the maximum over different heads as the final edge probabilities.\nFormally, given $h_t$ and $\\mathbf{y}$, a set of attention weights can be obtained by using multi-head attention mechanism:\n$\\{ \\alpha_{t}^{g_{i}}\\}_{i=1}^{k}$, where $k$ is the number of attention heads and $\\alpha_{t}^{g_{i}}$ is the $i$-th probability vector. The probability of the edge between the new node and the node $o_{j}$ is then computed by $\\alpha_{t}^{g} = max_{i}(\\alpha_{t}^{g_{i}})$. Intuitively, each head is in charge of a set of possible relations (though not explicitly specified). The maximum pooling reflects that the edge should be built once one relation is activated. \n\nFinally, the edge decoder passes the edge information to the feature fusion module by updating the expansion state $h_{t+1}$ as follows:\n\n\\begin{align}\n h_{t+1} = LN(MSA(h_{t},\\mathbf{y}) + h_{t} ).\n\\end{align}\n\n\n\\subsection{Node Decoder}\nThe node decoder needs to look at the input query and determine which tokens are the most important ones. This choice is a weighted matrix that gives an attention probability between each token in the query and generated nodes in the target graph. \nConcretely, a single-head attention $\\alpha_{t}^{s}$\nis computed based on the state $h_t$ and the sentence representation $s_{1:n}$, where $\\alpha_{t}^{s}$ denotes the attention weight of the word $w_i$ in the current time step.\nThis component then updates the parser state with the alignment information via the following equation:\n\\begin{align}\n h_{t+1} = LN(MSA(h_{t},\\mathbf{x}) + h_{t} ).\n\\end{align}\n\nWe then compute the probability distribution of the new node through a hybrid of two channels. The new node can either be a DELETE node or a token copied from the input query. First, $h_{t}$ is fed through a $softmax$ to obtain a probability distribution over a pre-defined vocabulary, which contains the DELETE node and other dummy nodes such as $\\mathsf{EOS}$. The probability of the new node is calculated as $P^{vocab} = softmax(W^{vocab}h_{t} + b^{vocab})$.\n\nSecond, we used the attention scores $\\alpha_{t}^{s}$ as the probability to copy a token from the input query as a node label similar to the copy mechanism~\\citep{GuLLL16,SeeLM17}. Therefore, the final prediction probability of a node $o$ is defined as:\n\n\\begin{align}\n P(o) = p_{gen} \\cdot P_{vocab}(o) + p_{copy} \\cdot \\sum_{i \\in T(c)} \\alpha_{t}^{s}[i],\n\\end{align}\n\nwhere $[i]$ indexes the $i$-th element, and\n$T(c)$ are index sets of tokens respectively that have the surface form as $o$. $P(gen)$ and $P(copy)$ are the probabilities of generating and copying a node, respectively. They are computed by using a single layer neural network with $softmax$ activation as:\n\\begin{align}\n [p_{gen}, p_{copy}] = softmax(W^{gate}h_{t}).\n\\end{align}\n\n\nThe whole expanding procedure is terminated if the newly generated node is the special node $\\mathsf{EOS}$.\n\n\n\n\n\\section{Dataset Construction}\n\\label{sec:dataset}\n\n\\input{tables\/dataset}\n\n\nExisting SGM datasets are synthetically constructed based on scene graphs from MSCOCO~\\citep{Lin2014MicrosoftCC} and GCC~\\citep{Sharma2018ConceptualCA}, and via crowd sourcing. To construct scene graphs, \\citet{He2020SceneGM} used an in-house scene graph parser to parse a random subset of MSCOCO description data and GCC captions, thus the constructed scene graph is relatively simple. 
In Table \\ref{tab:stats}, the average numbers of nodes and edges for each graph are limited to 2.9 and 1.9 respectively. GCC is more complicated than MSCOCO with a larger graph, but the percentage of nodes and edges from the development\/test set that does not appear in the training set ($\\mathrm{OOV}$ Nodes, $\\mathrm{OOV}$ Edges) are still low, which will cause the model easily overfit to the dataset. To verify the generalization ability and the scalability of the model to handle more complex scene graphs, we constructed our own Scene Graph Modification dataset based on the Remote Sensing Image Captioning Dataset (RSICD)~\\citep{lu2017exploring} in the remote sensing field for remote sensing image captioning task.\n\n\n\\input{tables\/main}\n\nInspired by the modification methods proposed by~\\citet{He2020SceneGM}. First, we adopt the parser~\\citep{Schuster2015GeneratingSP} to parse the caption for each graph and generate the original scene graph $\\textbf{x}$. Then we define three types of graph modification operations $\\mathcal{A}$ = \\{INSERT, DELETE, SUBSTITUTE\\}, and randomly apply them to the original scene graph to generate query ($\\textbf{q}$) and modified scene graph ($\\textbf{y}$). The data in RSICD consists of the triples ($\\textbf{x, y, q}$).\\footnote{We give three detailed operations and examples in the Appendix \\ref{operations}.}\n\nCompared with the existing SGM dataset, each graph of RSICD has more nodes and edges, with an average of 5.9 and 3.7 on the training\/development\/test set, which is almost twice that of User Generated and MSCOCO. In addition, the dataset comes from the field of remote sensing. Due to the large number of geographical terms, the $\\mathrm{OOV}$ Nodes of the development\/test sets compared with the training set reach 12\\%\/11\\%, and the $\\mathrm{OOV}$ Edges reach 8\\%\/8\\%, which are much higher than the MSCOCO and GCC datasets. Considering the complexity of RSICD, we construct it apart from User Generated, MSCOCO and GCC to further analysis the generalization and scalability of ISE.\n\n\n\\section{Experiments and Analyses}\n\\label{sec:experiments}\n\n\n\\subsection{Data}\nWe evaluated our model on four benchmarks, including User Generated, MSCOCO and GCC proposed by~\\citet{He2020SceneGM}, and RSICD dataset proposed in this work. MSCOCO, GCC and RSICD are constructed synthetically from publicly available datasets~\\citep{Lin2014MicrosoftCC, SoricutDSG18, lu2017exploring}, while the User Generated dataset is created via crowd sourcing. Detailed statistics of datasets are shown in Table~\\ref{tab:stats}.\n\n\n\n\\subsection{Setup}\n\nFor fair comparisons, we used the same data splits for User Generated, MSCOCO and GCC datasets as in ~\\citet{weber2021extend}. For RSICD, we randomly split the data into 8K\/1K\/1K for training\/development\/test. Following~\\citet{weber2021extend}, we use three automatic metrics for the evaluation, including node-level and edge-level F1 score, and graph-level accuracy. Graph-level accuracy is computed based on exact string match, which requires the generated scene graph to be identical to the target scene graph for a correct prediction. We reported the mean score and standard deviation by using 5 models from independent runs. We refer to the Appendix \\ref{Hyper-parameters} for the detailed implementation.\n\n\n\n\n\\subsection{Baselines}\nFor comprehensive comparisons, we include six baselines as follows. Except for the CopyGraph, all of them aim to rebuild the target scene graph. 
\n\n\\paragraph{CopyGraph} This baseline directly copies the source scene graph as the target scene graph, which can be viewed as the lower bound.\n\n\\paragraph{Text2Text} This baseline is introduced by~\\citet{He2020SceneGM}. They used the standard sequence-to-sequence architecture by linearizing the scene graph based on depth-first search. \n\n\\paragraph{GRNN} Graph RNN~\\citep{You2018GraphRNNGR} is used as the graph encoder and edge decoder. Specifically, the edges are represented by an adjacency matrix, which is then generated in an auto-regressive manner. Both the query encoder and node decoder are based on Gated Recurrent Units~\\citep{ChoMGBBSB14}.\n\n\\paragraph{DCGCN} Densely-Connected Graph Convolutional Networks ~\\citep{Guo2019DenselyCG} are used as the graph encoder. Other components are kept the same as the GRNN.\n\n\\paragraph{GTran} Graph Transformer~\\citep{Cai2020GraphTF} is used as the graph encoder, while other modules are the same as GRNN and DCGCN.\n\n\\paragraph{STran} The sparsely-connected transformer~\\citep{He2020SceneGM} is used to encode the source graph. In addition, a cross-attention mechanism is applied to fuse the features from graph encoder and query encoder. Node decoder and edge decoder are the same as GRNN.\n\n\\paragraph{EGraph} This is the state-of-the-art model on graph modification task. Concretely, \\citet{weber2021extend} considerably increases performance on the graph modification by phrasing it as a sequence labelling task.\n\n\n\n\\subsection{Main Results}\nAccording to Table~\\ref{tab:main}, our proposed approach (ISE) significantly outperforms the state-of-the-art model~\\citep{weber2021extend} on three datasets. Specifically, ISE outperforms EGraph 1.81, 1.11 and 1.33 percentage points in terms of graph accuracy on User Generated, MSCOCO and GCC datasets, respectively. We observe that the improvement is especially prominent on the User Generated dataset, which is \nmore challenging than the other two synthetic datasets in terms of the diversity in graph semantics and natural language expressions. All baseline models suffer from performance degradation as it is much harder to rebuild the entire target scene graph on this dataset. On the other hand, ISE constructs the target scene graph by incrementally expanding the source scene graph without changing the unmodified structure. We believe this formulation is able to effectively cope with this difficulty. \n\n\nWe also observe that both EGraph and ISE achieve lower graph accuracy on the GCC dataset. The main reason is the difficulty of predicting the correct edges between generated nodes. For example, EGraph achieves 98.62 Node F1 score on GCC, higher than 97.62 Node F1 score on the User Generated dataset. However, EGraph only achieves 75.01 Edge F1 score on GCC, while it can attain 88.26 Edge F1 score on User Generated. Our proposed model has larger improvements upon EGraph in terms of Edge F1 score on the same dataset (93.06 vs. 91.64). We attribute this stronger improvement to iterations between nodes prediction and edge prediction, which allows more accurate and harmonious expansion decisions progressively. On the other hand, EGraph predicts nodes and edges at two independent stages. Such an approach may lead to the lack of the modeling capability of interactions between node prediction and edge prediction.\n\n\\input{tables\/rsicd}\n\nWe further compare our model with EGraph on the newly constructed dataset RSICD as shown in Table~\\ref{tab:rs}. 
ISE is able to achieve a graph accuracy of 44.20\\% and improves upon the EGraph model by 21 percentage points. However, the graph accuracy of all the models is much lower than that attained on the previous three SGM datasets. One reason is that RSICD has more complex queries paired with larger scene graphs, which poses a challenge to existing models. The RSICD dataset also suffers from a data sparsity issue: many words (39\\%) and nodes (42\\%) appear only once in the training data. Incorrect node predictions further propagate errors to edge prediction. Our iterative node and edge prediction paradigm helps to alleviate this issue. Specifically, ISE outperforms EGraph by 9.69 percentage points in Node F1 score, while the improvement in Edge F1 score is 13.05\\%. Therefore, ISE is able to achieve a higher graph accuracy. In order to further address this data sparsity issue, one potential solution is transfer learning, where the model is first pretrained on the User Generated dataset and then fine-tuned on RSICD. However, this approach may suffer from a domain-shift problem, as RSICD is constructed from the remote sensing domain. We leave this direction for future work.\n\n\n\n\n\n\\subsection{Analysis and Discussion}\nIn this section, we provide a fine-grained analysis of our proposed model. We report all the results on the development set using the ISE model without contextualized embeddings from BERT.\n\n\\input{tables\/ablation}\n\n\\paragraph{Ablation Study} \nAs shown in Table~\\ref{tab:ablation}, we examine the contributions of two main components used in our model. The first is the incremental structure expansion. We use the same model architecture but try to rebuild the entire target scene graph, similar to previous efforts. We can observe significant drops on the three SGM datasets, which further confirms the effectiveness of the expansion strategy. The second is the copy mechanism, which directly copies tokens from the query as nodes in the target scene graph. It plays a significant role in predicting nodes, especially when the training data is limited (User Generated).\n\n\\input{tables\/Robustness}\n\\paragraph{Performance against Training Data Size} Table~\\ref{tab:size} shows the performance of STran and ISE under different training settings on the MSCOCO dataset. We considered five training settings (20\\%, 40\\%, 60\\%, 80\\% and 100\\% of the training data). ISE consistently outperforms STran under the same amount of training data. When the size of the training data decreases, we can observe that the performance gap becomes more obvious. In particular, using 40\\% of the training data, ISE is able to achieve a graph accuracy of 88.64\\%, higher than STran trained on the whole dataset. These results demonstrate that our model makes more effective use of the training resources and is more robust when the training data is limited.\n\n\\input{tables\/SenLength}\n\\paragraph{Performance against Query Length} Table~\\ref{tab:query} shows the results of STran and ISE under different query lengths on the GCC dataset. We partitioned the sentence length into three classes (\\textless5, [5, 10), $\\geq$10). In general, ISE outperforms STran across various sentence lengths. When the length of the query increases, we can observe that the performance gap becomes more obvious in terms of graph accuracy. Intuitively, as the query length increases, it is more challenging for the model to comprehend the sentence. 
This suggests that ISE is able to handle more complex instructions.\n\n\n\\paragraph{Performance against Graph Size}\nTable~\\ref{tab:graph} shows the results of STran and ISE against different target scene graph sizes on the GCC dataset. We partitioned the target graph sizes into three classes (\\textless5, [5, 10), $\\geq$10). Based on the formulation of extending the source scene graph, our model is required to deal with larger graphs. For example, deleting a node in the scene graph becomes adding a special ``Delete'' node in the extended graph. However, ISE is able to consistently outperform STran across various target graph sizes, even when the target scene graph is large. This result suggests the superiority of the proposed formulation.\\footnote{We give an error analysis in the Appendix \\ref{error}.} \n\\input{tables\/NodeLength}\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.26]{figures\/case.pdf}\n \n \\caption{Two cases of STran and ISE for scene graph modification on User Generated. $Q$ denotes the textual query, $G_{S}$ denotes the source scene graph, $G_{T}$ denotes the target scene graph generated by STran and ISE.}\n \\label{fig:case}\n \n\\end{figure*}\n\n\\paragraph{Case Study}\n\\label{case}\n\n\n\nWe give two cases in Figure \\ref{fig:case}. STran generates the scene graph from scratch conditioned on the original graph and the query, which may lead to a failure to model the interactions between node prediction and edge prediction. For example, in Figure \\ref{fig:case} (a), STran omitted the attribute ``Velvet'' during node prediction. In addition, during edge prediction, STran redundantly generated the relation ``Of'' in Figure \\ref{fig:case} (b). However, these structures do not need to be modified in the source scene graph. ISE can infer a more accurate target graph by incrementally expanding the source graph without changing the unmodified structure. \n\\section{Related Work}\n\\label{sec:related}\n\nWe refer to Appendix \\ref{scene graph} for detailed related work on scene graphs. Scene graphs build a bridge between the image domain and the text domain, and research in both vision and natural language has been greatly promoted by the study of scene graphs. Recently, scene graph modification has become an emerging research direction. \\citet{Chen2020GraphED} proposed a framework based on scene graph editing for text-based image retrieval.\nOn the other hand, \\citet{He2020SceneGM} took the scene graph and the textual query as inputs and modified the source graph according to the query. They viewed the task as conditional graph generation, which is further decomposed into node prediction and edge prediction. For node prediction, all the nodes in the target scene graph are generated by a graph-to-sequence model with a dual encoder~\\citep{Song2018AGM,Beck2018GraphtoSequenceLU,ZhangGTLCLB20}, and then a graph RNN is adopted to predict the edges between the generated nodes~\\citep{You2018GraphRNNGR}. More recently, \\citet{weber2021extend} developed an alternative formulation of this problem in which they model the modification as an auto-regressive sequence labelling task.\n\nInstead of rebuilding the entire target graph, we frame the scene graph modification task as incremental graph expansion. This formulation is related to incremental parsing, where a sentence is scanned from left to right and the structure is built incrementally by inserting a node or attaching an edge. 
Incremental parsers are widely used in semantic parsing~\\citep{ZhouXUQLG16,ChengRSL17,GuoL18, Naseem2019RewardingST,liu2022semantic} and syntactic parsing~\\citep{HuangS10,DyerBLMS15,LiuZ17a}, as they are computationally efficient and can use machine learning to predict actions based on partially generated structures. Our feature fusion module can be viewed as the parser state, as it carries the structural information and serves as a writable memory during the expansion step. Unlike \\citet{weber2021extend}, who linearize the scene graph and label it in an auto-regressive manner, our model iterates between finding the relevant part in the query and reading the partially constructed scene graph, inferring more accurate and harmonious expansion decisions progressively. \n\n\n\\section{Conclusion}\n\nIn this paper, we designed a novel formulation for scene graph modification, which allows us to incrementally expand the source scene graph instead of rebuilding the entire graph. Based on this formulation, we further propose a model that is able to leverage the mutual dependencies between node prediction and edge prediction. Experiments on three SGM benchmarks demonstrate the effectiveness of our approach.\nTo test our model under a complex scenario, we constructed a more challenging dataset from the remote sensing domain, which involves more modification operations and more complicated queries than existing SGM datasets. For future work, we would like to explore how to integrate the model into the text-based image retrieval task.\n\n\n\\section{Appendix}\n\n\n\\subsection{Operations in RSICD}\n\n\n\\input{tables\/RSICD_Case}\n\\label{operations}\nWe introduce the three operations in RSICD in detail:\n\\begin{itemize}\n \\item \\textbf{\\texttt{DELETE}}: The original scene graph is $\\textbf{x}$. We randomly select a node $\\textbf{o}$ in $\\textbf{x}$, and delete it together with its related edges. The resulting graph is defined as $\\textbf{y}$. We choose a random sentence from the \\textit{DELETE Template}~\\citep{manuvinakurike-etal-2018-edit}, for example, ``I do not want \\textbf{**}.'' We replace \\textbf{**} with $\\textbf{o}$ to get the modification operation $\\textbf{q}$.\n \\item \\textbf{\\texttt{INSERT}}: It is the reverse process of \\textbf{\\texttt{DELETE}}. The graph before deleting the node is regarded as $\\textbf{y}$, and the corresponding graph after deletion is treated as $\\textbf{x}$. The modification operation is randomly selected from the \\textit{INSERT Template}~\\citep{manuvinakurike-etal-2018-edit}, for example, ``Show me \\textbf{**}.'' We replace \\textbf{**} with $\\textbf{o}$ to obtain the query $\\textbf{q}$.\n \\item \\textbf{\\texttt{SUBSTITUTE}}: We randomly select a node $\\textbf{o}$ and use the AllenNLP toolkit~\\citep{gardner-etal-2018-allennlp} to find the three nodes that are semantically most similar to $\\textbf{o}$. We randomly choose one such node $\\textbf{m}$, and select a sentence from the \\textit{SUBSTITUTE Template}~\\citep{manuvinakurike-etal-2018-edit}, for example, ``I prefer \\textbf{@@} to \\textbf{**}, modify \\textbf{**} to \\textbf{@@}.'' We replace \\textbf{**} and \\textbf{@@} with $\\textbf{o}$ and $\\textbf{m}$, and get the modification operation $\\textbf{q}$. 
Note that SUBSTITUTE operation could be viewed as DELETE the node $\\textbf{o}$ first and then INSERT the node $\\textbf{m}$, or vice versa.\n\\end{itemize}\n\nIn Table \\ref{tab:RSICD_case_study}, we give the simple examples in RSICD to better understand three types of graph modification operations.\n\n\n\\subsection{Implementation Details}\n\\label{Hyper-parameters}\nHyper-parameters of the model are tuned on the development set. All transformer~\\citep{VaswaniSPUJGKP17} layers share the same hyper-parameter settings. Following~\\citet{He2020SceneGM}, we randomly initialized the word and node embeddings. We also report results with contextualized embeddings from BERT~\\citep{DevlinCLT19}. Specifically, we used the BERT-base-uncased implemented by~\\citep{wolf-etal-2020-transformers}. The parameters in BERT are fixed during training. To mitigate over-fitting, we apply dropout~\\citep{SrivastavaHKSS14} with the drop rate 0.2 between different layers. Following~\\citet{Cai2020AMRPV}, we use a special UNK token to replace the out-of-vocabulary lemmas of the input query and remove the UNK token in the generated graph. Parameter optimization is performed with the ADAM optimizer~\\citep{KingmaB14} with $\\beta_{1}$ = 0.9 and $\\beta_{2}$ = 0.999. The learning rate schedule is similar to that in~\\citet{VaswaniSPUJGKP17}, where warm-up steps being set to 2K. We used early stopping on the development set for choosing the best model. \nPlease refer to Table \\ref{tab:hyper-parameters} for the detailed hyper-parameters settings for ISE.\n\n\\begin{table}\n\\centering\n\\scalebox{0.9}{\n\\begin{tabular}{lr}\n\\toprule\n\\multicolumn{2}{l}{\\textbf{Embeddings}}\\\\\n\\midrule\nconcept & 300 \\\\\nword & 300 \\\\\nrelation & 100\\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Query Encoder}}\\\\\ntransformer layers & 4 \\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Graph Encoder}}\\\\\ntransformer layers & 2 \\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Feature Fusion}}\\\\\nheads & 8 \\\\\nhidden size & 512 \\\\\nfeed-forward hidden size & 1024 \\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Node Decoder\/ Edge Decoder}}\\\\\nheads & 8 \\\\\nfeed-forward hidden size & 1024 \\\\\n\n\n\\bottomrule\n\\end{tabular}}\n\n\\caption{Hyper-parameters settings for ISE.}\n\\label{tab:hyper-parameters}\n\n\\end{table}\n\n\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[scale=0.26]{figures\/error.pdf}\n \n \\caption{Two errors of ISE for scene graph modification on User Generated. $Q$ denotes the textual query, $G_{S}$ denotes the source scene graph, $G_{T}$ denotes the target scene graph generated by ISE. $G_{G}$ denotes the gold target scene graph.}\n \\label{fig:error}\n \n\\end{figure*}\n\n\\subsection{Scene Graph and Application}\n\\label{scene graph}\nDeep learning has significantly promoted the advancement of computer vision~\\citep{liang2017deep, ren2021comprehensive}. Simple visual understanding tasks such as object detection and recognition are no longer sufficient. To depict the relationship between objects in the scene as a driving force, higher-level visual understanding and reasoning skills are frequently necessary. Scene graphs were created specifically to address this issue. Scene graph was first proposed by~\\citet{Johnson2015ImageRU} for image retrieval, which describes objects, their attributes, and relationships in images with a graph. 
A complete scene graph could represent the semantics of a dataset's scenes, not just a single image or video; additionally, it contains powerful representations that encode 2D\/3D images~\\citep{Johnson2015ImageRU,armeni20193d}, and videos~\\citep{qi2018scene,wang2020storytelling} into their abstract semantic elements. Scene graph is beneficial for various downstream tasks, such as information extraction \\cite{hu2020selfore,hu2021semi,hu2021gradient,liu2022hierarchical}, natural language summarization \\cite{liu2022psp}, and natural language inference \\cite{li2022pair}.\n\nFollowing the graph representation paradigm, different methods have been proposed to generate scene graphs from images~\\citep{XuZCF17,WangLZY18, ZellersYTC18}. Many cross-modal tasks that require understanding and reasoning on image and text are able to benefit from incorporating scene graphs, such as visual question answering~\\citep{TeneyLH17,shi2019explainable}, grounding referring expressions~\\citep{wang19}, image captioning~\\citep{YangTZC19,yao2018exploring}, and image retrieval~\\citep{Wang2020CrossmodalSG,Schroeder2020StructuredQI}. \n\n\n\n\n\n\\subsection{Error Analysis} \n\\label{error}\nWe give two wrong scene graphs generated by ISE in Figure \\ref{fig:error}. We can observe in Figure \\ref{fig:error} (a) that although ISE successfully predicts the need to insert a relation between object ``Plants'' and attribute ``Surface'', since the User Generated dataset contains a total of 2078 relations and the relations have serious long-tail effects. It is difficult for ISE to learn sparseness relations with few occurrences, leading to incorrectly predicting relation ``in growing over'' as ``on''. We attempt to address the long-tail effects of relations in future work.\nSince a node can be attached to multiple nodes, when Edge Decoder determines which nodes in the current graph should be attached to the new node, a common error is predicting the wrong node that needs to be attached. As shown in Figure \\ref{fig:error} (b), ISE incorrectly connects relation ``behind'' between ``Giraffe'' and ``Tree'' instead of ``Head'' and ``Tree''.\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA phased mission system (PMS) is one that performs several different tasks or functions in sequence. The periods in which each of these successive tasks or functions takes place are known as phases \\citep{xing2008reliability,la2004phased}. Examples of PMSs can be found in many practical applications, such as electric power systems, aerospace systems, weapon systems and computer systems. A typical example of a PMS is the monitoring system in a satellite-launching mission with three phases: launch, separation, and orbiting.\n\nA PMS is considered to be functioning if all of its phases are completed without failure, and failed if failure occurs in any phase. 
Therefore, the reliability of a PMS with $N$ phases is the probability that it operates successfully in all of its phases:\n\\begin{equation}\n R_S = \\mathbb{P}(\\mbox{Phase 1 works} \\cap \\mbox{Phase 2 works} \\cap \\dots \\cap \\mbox{Phase $N$ works})\n \\label{eq:allphases} \n\\end{equation}\n\nThe calculation of the reliability of a PMS is more complex than that of a single phase system, because the structure of the system varies between phases and the component failures in different phases are mutually dependent \\citep{xing2008reliability}.\n\nOver the past few decades, there have been extensive research efforts to analyze PMS reliability. Generally, there are two classes of models to address such scenarios: state space oriented models \\citep{kim1994phased,chew2008phased,lu2014reliability,wang2017competing} and combinatorial methods \\citep{xing2015binary,ma1999algorithm,la2004phased,zang1999bdd,tang2006bdd,mo2009variable,reed2011improved,xing2007reliability,xing2013bdd}. The main idea of state space oriented models is to construct Markov chains and\/or Petri nets to represent the system behaviour, since these provide flexible and powerful options for modelling complex dependencies among system components. However, the cardinality of the state space can become exponentially large as the number of components increases. The remaining approaches exploit combinatorial methods, Boolean algebra and various forms of decision diagrams for reliability analysis of PMSs.\n\nIn particular, in recent years the Binary Decision Diagram (BDD) --- a combinatorial method --- has become more widely used in reliability analysis of PMSs due to its computationally efficient and compact representation of the structure function compared with other methods. Zang \\emph{et al.}\\ \\citep{zang1999bdd} first used the BDD method to analyze the reliability of PMSs. Tang \\emph{et al.}\\ \\citep{tang2006bdd} developed a new BDD-based algorithm for reliability analysis of PMSs with multimode failures. Mo \\citep{mo2009variable} and Reed \\emph{et al.}\\ \\citep{reed2011improved} improved the efficiency of Tang's method by proposing a heuristic selection strategy and reducing the BDD size, respectively. Xing \\emph{et al.}\\ \\citep{xing2007reliability,xing2013bdd} and Levitin \\emph{et al.}\\ \\citep{levitin2013reliability} proposed BDD based methods for the reliability evaluation of PMSs with common-cause failures and propagated failures. Wang \\emph{et al.}\\ \\citep{wang2007reliability} and Lu \\emph{et al.}\\ \\citep{lu2015reliability} studied modular methods for reliability analysis of PMSs with repairable components, by combining BDDs with state-enumeration methods.\n\nWhile the BDD method has been shown to be a very efficient combinatorial method, it is still difficult to analyze large systems without considerable computational expense \\citep{xing2008reliability,reed2011improved}. In this paper, we propose a combinatorial analytical approach providing a new survival signature methodology for reliability analysis of PMSs. This paper is organized as follows: \\cref{sec:PMS} gives a brief background on PMSs; \\cref{sec:survsig} first shows how the standard survival signature can be used to evaluate PMSs with similar component types in each phase, before providing a novel methodology which facilitates heterogeneity of components across the phases. 
\\Cref{sec:examples} presents illustrative examples showing numerical agreement with existing literature, but where the full benefits of the interpretability of survival signatures are now available due to this work. Finally, \\cref{sec:conclusion} presents some conclusions and ideas for future work.\n\n\\section{Phased mission systems}\n\\label{sec:PMS}\n\n\\Cref{fig:pms1} shows a simple system that performs a series of functions or tasks which are carried out over consecutive periods of time to achieve a certain overall goal (or `mission'). Such a system --- where the structure (and possibly operating environment) of the system changes over time --- is known as a Phased Mission System (PMS), with each period of operation being referred to as a `phase'. Each phase therefore corresponds to one structural configuration and components in different phases are taken to be mutually dependent.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS1}\n \\caption{A PMS with similar components in each phase.}\n \\label{fig:pms1}\n\\end{figure}\n\nLet us consider a system consisting of $N \\ge 2$ phases, with $n_i$ components in phase $i \\in \\{1, \\dots, N\\}$. The binary state indicator variable $X_{ij}, j \\in \\{1, \\dots, n_i\\}$ denotes the operational status of the $j$th component in phase $i$:\n\\[ X_{ij} = \\begin{cases}\n1 & \\mbox{if component $j$ works for all of phase $i$} \\\\\n0 & \\mbox{if component $j$ fails before the end of phase $i$}\n\\end{cases} \\]\n\nThe vectors $\\mathbf{X}_i = (X_{i1}, \\dots, X_{in_i}), i \\in \\{1, \\dots, N\\}$, represent the states of all components in the $i$th phase and the full vector $\\mathbf{X} = (\\mathbf{X}_1, \\dots, \\mathbf{X}_N) = (X_{11}, \\dots, X_{1n_1}, \\dots, X_{N1}, \\dots, X_{Nn_N})$ represents the states of all components during the full mission.\n\nThe state of the system in each phase is also a binary random variable, which is completely determined by the states of the components in that phase. Let $\\phi_i$ represent the system state in the $i$th phase, that is:\n\\[ \\phi_i = \\varphi_i(\\mathbf{X}_i) = \\varphi_i(X_{i1}, \\dots, X_{in_i}) \\]\nwhere $\\varphi_i(\\cdot)$ is the structure function of the system design in phase $i$. The structure function evaluates to $\\phi_i = 1$ if the system functions for state vector $\\mathbf{X}_i$, and $\\phi_i = 0$ if not.\n\nSimilarly, the structure function of the full PMS (that is, the operational state of the system across \\emph{all} phases) is also a binary random variable, which is completely determined by the states of all the components in the PMS:\n\\begin{equation}\n \\phi_S = \\varphi_S(\\mathbf{X}) \\triangleq \\prod_{i=1}^N \\varphi_i(X_{i1}, \\dots, X_{in_i})\n \\label{eq:strfnpms}\n\\end{equation}\n\nThe structure function as shown in \\cref{eq:strfnpms} is again a Boolean function which is derived from the truth table of the structure functions for each phase of operation. The truth tables depend uniquely on the system configurations and simply provide a means of tabulating all the possible combinational states of each component to realise the operational state of the system in each case. The state vectors for which $\\varphi_S(\\mathbf{X})=1$ provide a logical expression for the functioning of the system, while the states for which $\\varphi_S(\\mathbf{X})=0$ provide a logical expression for the failure of the system. 
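As a concrete illustration of \\cref{eq:strfnpms} and the associated truth table, the following minimal sketch enumerates the state vectors of a small hypothetical PMS with three components and two phases; the per-phase structure functions used here (series in phase 1, parallel in phase 2) are an assumption made purely for illustration and are not taken from \\cref{fig:pms1}.\n\\begin{verbatim}\nfrom itertools import product\n\n# Hypothetical per-phase structure functions for a 3-component, 2-phase PMS:\n# phase 1 needs every component (series), phase 2 needs at least one (parallel).\nphi = [lambda x: int(all(x)),\n       lambda x: int(any(x))]\n\ndef phi_S(states):\n    # Product of the phase structure functions, as in the equation above.\n    out = 1\n    for i, x in enumerate(states):\n        out *= phi[i](x)\n    return out\n\nfor x1 in product([0, 1], repeat=3):\n    for x2 in product([0, 1], repeat=3):\n        # Skip impossible rows: a non-repairable component that failed in\n        # phase 1 cannot function again in phase 2.\n        if any(a < b for a, b in zip(x1, x2)):\n            continue\n        print(x1, x2, phi_S([x1, x2]))\n\\end{verbatim}\n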
It should be noted that, unlike non-PMSs, there exist impossible combinations of states which should be deleted from the truth table when performing a reliability analysis. For example, if both the system and its components are non-repairable during the mission, then if a component is failed in a certain phase it cannot be working in subsequent phases.\n\nFinally, if all phases are completed successfully, the mission is a success, that is:\n\\[ \\phi_S = \\prod_{i=1}^N \\phi_i = 1 \\iff \\phi_i = 1 \\ \\forall\\,i \\]\n\n\\section{Survival signature}\n\\label{sec:survsig}\n\nFor larger systems, working with the full structure function can be complicated and as the system size grows it becomes hard to intuit anything meaningful from the particular algebraic form it takes. In particular, one may be able to summarize the structure function when it consists of exchangeable components of one or more types \\citep{samaniego2007system,coolen2013generalizing,coolen2014nonparametric}.\n\nRecently, the concept of the survival signature has attracted substantial attention, because it provides such a summary which enables insight into the system design even for large numbers of components of differing types. Coolen and Coolen-Maturi \\citep{coolen2013generalizing} first introduced the survival signature, using it to analyze complex systems consisting of multiple types of component. Subsequently, \\citep{coolen2014nonparametric,coolen2015predictive,Aslett2015} presented the use of the survival signature in an inferential setting, with nonparametric predictive inference and Bayesian posterior predictive inference respectively, and \\citep{feng2016imprecise} presented methods for analyzing imprecise system reliability using the survival signature. Patelli \\emph{et al.}\\ \\citep{patelli2017simulation} developed a survival signature-based simulation method to calculate the reliability of large and complex systems and \\citep{Aslett2017} presents a simulation method which can be used if the dependency structure is too complex for a survival signature approach. Walter \\emph{et al.}\\ \\citep{walter2017condition} proposed a new condition-based maintenance policy for complex systems using the survival signature. Moreover, Eryilmaz \\emph{et al.}\\ \\citep{eryilmaz2016generalizing} generalized the survival signature to multi-state systems.\n\nEfficient computation of the survival signature was addressed by Reed \\citep{reed2017efficient}, using reduced order binary decision diagrams (ROBDDs). The survival signature of a system can be easily computed by specifying the reliability block diagram as a simple graph by using the \\texttt{ReliabilityTheory} R package \\citep{Aslett2012}.\n\nIn this section, the survival signature is first shown to apply directly to full mission-length PMSs where there is a single component type in each phase. Thereafter, an extension is presented which enables heterogeneity of component types across phases, providing novel methodology for reliability analysis of PMSs.\n\n\\subsection{PMSs with similar components in each phase}\n\\label{sec:pms.same}\n\nWe consider a system with $N \\ge 2$ phases, with $n$ components in each phase (e.g.\\ the PMS as shown in \\cref{fig:pms1}), and let phase $i \\in \\{1, \\dots, N\\}$ run from time $\\tau_i$ to time $\\tau_{i+1}$ with $\\tau_1 \\triangleq 0$ and $\\tau_i < \\tau_{i+1} \\ \\forall\\,i$. 
Thus the full mission time is denoted $\\tau_{N+1}$.\n\nWe assume that the random failure times of components in the same phase are fully independent, and in addition that the components are exchangeable. Let $\\Phi(l_1, \\dots, l_N)$ denote the probability that the PMS functions by the end of the mission given that precisely $l_i, i \\in \\{1, \\dots, N\\}$, of its components functioned in phase $i$. Both the system and its components are non-repairable during the mission, so $n \\ge l_1 \\ge l_2 \\ge \\dots \\ge l_N \\ge 0$ and the number of components that function at the beginning of phase $i$ is $m_i = l_{i-1},$ with $m_1=n$ --- so all components appear in all phases. Subject to these constraints which do not apply in a non-PMS, the survival signature can then be applied without further modification for the mission completion time.\n\nThere are $\\binom{m_i}{l_i}$ state vectors where precisely $l_i$ components function. Because the random failure times of components in the same phase are independent and exchangeable, the survival signature is equal to:\n\\begin{equation}\n \\Phi(l_1, \\dots, l_N) = \\left[ \\prod_{i=1}^N \\binom{m_i}{l_i}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\varphi_S(\\mathbf{X})\n \\label{eq:survsig0}\n\\end{equation}\nwhere $\\mathcal{S}$ denotes the set of all possible state vectors for the whole system where $l_i$ components in phase $i$ are functioning. This step is of the same form as the standard survival signature for a static system \\citep{coolen2013generalizing}, but note one immediate subtle difference: as noted above, $m_i$ is not fixed across evaluations of $\\Phi(\\cdot)$, but rather is determined by $l_{i-1}$, since the maximum number of functioning components in the $i$th phase is determined by how many components completed phase $i-1$ still functioning.\n\nA further subtlety arises as soon as we consider any time leading up to the mission completion time, because the structure of the system changes. Although the standard survival signature can be used in computing the reliability of a static system at any point in its life \\citep{coolen2013generalizing}, this is no longer true in this extension to PMSs. Consequently, \\eqref{eq:survsig0} is the survival signature which represents the probability that the whole mission completes successfully given that $l_i$ components are working in phase $i$. For the survival function of a PMS, we must extend the survival signature to create a family of survival signatures which account for the temporally changing structure. Let $\\Phi_p(l_1, \\dots, l_p)$ denote the survival signature of a PMS up to and including phase $p \\le N$, which is the probability that the mission has not yet failed by phase $p$ given that $l_i$ components are working in phase $i \\in \\{1, \\dots, p\\}$. 
Then,\n\\begin{equation}\n \\Phi_p(l_1, \\dots, l_p) = \\left[ \\prod_{i=1}^p \\binom{m_i}{l_i}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\prod_{i=1}^p \\varphi_i(\\mathbf{X})\n \\label{eq:survsig0t}\n\\end{equation}\n\nWe define a function mapping mission time $t$ to the current phase\n\\begin{equation}\n \\rho(t) : [0,\\tau_{N+1}] \\to \\{1, \\dots, N\\}, \\mbox{ as } \\rho(t) \\triangleq \\max\\{ i \\,:\\, \\tau_i < t \\} \\label{eq:currentphase}\n\\end{equation}\n\nFrom \\cref{eq:allphases} and \\eqref{eq:survsig0t}, the reliability of the PMS at time $t$ can then be expressed pointwise as:\n\\begin{equation}\n R(t) = \\sum_{l_1=0}^{m_1} \\cdots \\sum_{l_{\\rho(t)}=0}^{m_{\\rho(t)}} \\left[ \\Phi_{\\rho(t)}(l_1, \\dots, l_{\\rho(t)}) \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\left\\{ C_i(t) = l_i \\right\\} \\right) \\right]\n \\label{eq:survsig1}\n\\end{equation}\nwhere $C_i(t)$ is the random variable denoting the number of components in phase $i$ which function at time $t \\in [\\tau_i, \\tau_{i+1})$. If $R(t)$ is being evaluated at $t \\ge \\tau_{i+1}$ then $C_i(t) \\triangleq C_i(\\tau_{i+1})$. By the definition of $\\rho(t)$, $R(t)$ will never be evaluated for $t < \\tau_{i}$.\n\nBecause components are of the same type they share a common lifetime distribution as long as they all appear in all phases (and hence age together). As a result, the sequential nature of a PMS means that components in the same phase have common conditional CDF, $F_i(t)$, for phase $i$, where conditioning is on the component having worked at the beginning of phase $i$. That is, if the components have common CDF $F(t)$ and all components appear in every phase (in possibly different configurations), then the conditional CDF in phase $i$ is:\n\\begin{align}\n F_i(t) &= \\mathbb{P}(T < t \\,|\\, \\tau_i, \\tau_{i+1}, T > \\tau_i) \\nonumber \\\\\n &= \\frac{1}{1-F(\\tau_i)} \\int_{\\tau_i}^{\\min \\{t, \\tau_{i+1}\\}} dF(z) \\nonumber \\\\\n &= \\frac{F(\\min \\{t, \\tau_{i+1}\\}) - F(\\tau_i)}{1-F(\\tau_i)} \\label{eq:condcdf}\n\\end{align}\nwhere $\\tau_i$ is the start time of phase $i$ ($\\tau_1 \\triangleq 0$) and $T$ is the random variable representing component lifetime.\n\nProceeding with this conditional CDF, the last term in \\cref{eq:survsig1} can be simplified as\n\\begin{align*}\n \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\left\\{ C_i(t) = l_i \\right\\} \\right) &= \\prod_{i=1}^{\\rho(t)} \\mathbb{P}\\left( C_i(t) = l_i \\right) \\\\\n &= \\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right]\n\\end{align*}\nwhere\n\\begin{equation}\n R_i(t) = 1-F_i(t) = \\frac{1-F(\\min \\{t, \\tau_{i+1}\\})}{1-F(\\tau_i)} \\label{eq:comprel}\n\\end{equation}\nis the reliability of the components at time $t$ in phase $i$.\n\nThus, \\cref{eq:survsig1} can be rewritten pointwise in $t$ as\n\\begin{align}\n R(t) &= \\sum_{l_1=0}^{m_1} \\cdots \\sum_{l_{\\rho(t)}=0}^{m_{\\rho(t)}} \\left\\{ \\Phi_{\\rho(t)}(l_1, \\dots, l_{\\rho(t)}) \\phantom{\\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right]} \\right. \\nonumber \\\\\n &\\qquad\\qquad\\qquad\\qquad \\times \\left. 
\\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right] \\right\\} \\label{eq:survsigPMS1a}\n\\end{align}\n\nSince in the general case (see special case exception in the sequel) every component appears in every phase, this can be written\n\\begin{align}\n R(t) &= \\sum_{l_1=0}^{l_0} \\cdots \\sum_{l_{\\rho(t)}=0}^{l_{\\rho(t)-1}} \\left\\{ \\Phi_{\\rho(t)}(l_1, \\dots, l_{\\rho(t)}) \\phantom{\\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right]} \\right. \\nonumber \\\\\n &\\qquad\\qquad\\qquad\\qquad \\times \\left. \\prod_{i=1}^{\\rho(t)} \\left[ \\binom{l_{i-1}}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{l_{i-1}-l_i} \\right] \\right\\} \\label{eq:survsigPMS1b}\n\\end{align}\nwhere we define $l_0 \\triangleq n$. Writing in this final form stresses the sequential dependence in the computation, in stark contrast to the standard survival signature for a static system.\n\n\\subsubsection{Special case: Exponentially distributed component lifetime}\n\nThere are two simplifications that arise when components are Exponentially distributed. Firstly, $F_i(t) \\equiv F(t) \\ \\forall\\, i$, so that $R_i(t) = R(t) = 1-F(t-\\tau_i) \\ \\forall i$.\n\nThe second simplification is that not all components need to appear in all phases. It may be that some components appear only in later phases (but continue to appear after the first phase they are in). In this case, one should be careful not to use \\eqref{eq:survsigPMS1b}, but instead \\eqref{eq:survsigPMS1a} where now $m_i=l_{i-1}+m_i^\\star$ where $m_i^\\star$ is the number of components appearing in the system for the first time at phase $i$.\n\n\\subsubsection{Modelling constraints}\n\nNote that considerable care is required in the specification of --- and implicit assumptions made for --- $F_i(t)$. In particular, when a component is not present in a phase, then whether ageing continues (i.e.\\ time passes) or not is crucial in determining whether the assumption of identical component lifetime distribution still holds in all phases. For example, in \\cref{fig:pms1} each component appears in all phases and therefore experiences the same wear, but in \\cref{fig:pms2} each component is in precisely 2 of the 3 phases. Consequently, even though one might assume all components are of the same type initially, if component $C$ is considered not to `age' during phase 1 (where it is not present) then it will in fact not have identical conditional lifetime distribution to $A$ and $E$ during phase 2, since the latter will have already experienced wear from phase 1.\n\nThis imposes rather unattractive modelling strictures: all components of similar type must appear in the same phases; or all components must have constant failure rate (Exponentially distributed lifetime). These modelling strictures severely limit applicability to real world systems, thus motivating the novel methodological extension of survival signatures hereinafter.\n\n\\subsection{PMSs with different components in different phases}\n\\label{sec:pms.diff}\n\nMost practical PMSs for which the reliability is modelled consist of heterogeneous component types both within and between phases. 
Therefore, a more interesting challenge is to extend the methodology of survival signatures to this more general setting.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS2}\n \\caption{A PMS with multiple types of components.}\n \\label{fig:pms2}\n\\end{figure}\n\nWe now consider this setting in generality and show that the problem again simplifies in the special case of Exponentially distributed lifetimes, which is the only case that most of the literature has addressed to date. The only constraint we impose is that components of the same type appear in the same phases (since then the conditional CDFs within phases remain in agreement). However, note that this does not limit the scenarios that can be modelled, since components of the same physical type can still be split into multiple `meta-types'.\n\n\\begin{definition}{(Meta-type)}\n Components are defined to be of the same \\emph{meta-type} when they are of the same physical type and appear in the same phases.\n\\end{definition}\n\nLet there be a total of $K$ different meta-types of component. We take the multi-type, multi-phase survival signature to be denoted by the function $\\Phi(l_{11}, \\dots, l_{1K}, \\dots, l_{N1}, \\dots, l_{NK})$, the probability that the system functions given that precisely $l_{ik}$, components of type $k$ function in phase $i$. That is,\n\\[ \\Phi(l_{11}, \\dots, l_{1K}, \\dots, l_{N1}, \\dots, l_{NK}) = \\left[ \\prod_{i=1}^N \\prod_{k=1}^{K} \\binom{m_{ik}}{l_{ik}}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\varphi_S(\\mathbf{X}) \\]\nwhere $\\mathcal{S}$ denotes the set of all possible state vectors for the whole system. Not all component types need necessarily appear in all phases, so we admit the possibility that $m_{ik}=0$ when a component type is absent from a phase and observe the standard definition that $\\binom{0}{0} \\triangleq 1$ --- this simplifies notation versus having varying numbers of $l_{i\\cdot}$ for each phase.\n\nAs before, the above survival signature is only applicable to the full mission time and we define a family of survival signatures corresponding the successive phases of the mission. Let $\\Phi_p(l_{11}, \\dots, l_{1K}, \\dots, l_{p1}, \\dots, l_{pK})$ denote the survival signature of a PMS up to and including phase $p \\le N$, which is the probability that the mission has not yet failed by phase $p$ given that $l_{ik}$ components of type $k$ are working in phase $i \\in \\{1, \\dots, p\\}$. Then,\n\\begin{equation}\n \\Phi_p(l_{11}, \\dots, l_{1K}, \\dots, l_{p1}, \\dots, l_{pK}) = \\left[ \\prod_{i=1}^p \\prod_{k=1}^{K} \\binom{m_{ik}}{l_{ik}}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\prod_{i=1}^p \\varphi_i(\\mathbf{X})\n \\label{eq:survsig0t2}\n\\end{equation}\n\nWe retain the definition of $\\rho(t)$ given in \\eqref{eq:currentphase}. It then follows from \\cref{eq:allphases} and \\cref{eq:survsig0t2} that the reliability of the PMS can be characterised as:\n\\begin{align}\n R(t) &= \\sum_{l_{11}=0}^{m_{11}} \\cdots \\sum_{l_{\\rho(t),K}=0}^{m_{\\rho(t),K}} \\left[ \\Phi_{\\rho(t)}(l_{11}, \\dots, l_{1K}, \\dots, l_{\\rho(t),1}, \\dots, l_{\\rho(t),K}) \\vphantom{\\mathbb{P}\\left( \\bigcap_{i=1}^N \\bigcap_{k=1}^{K_i} \\left\\{ C_{ik}(t) = l_{ik} \\right\\} \\right)} \\right. \\nonumber \\\\\n & \\qquad\\qquad\\qquad\\qquad\\qquad \\left. 
\\times \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\bigcap_{k=1}^{K} \\left\\{ C_{ik}(t) = l_{ik} \\right\\} \\right) \\right] \\label{eq:survsig2}\n\\end{align}\nwhere $C_{ik}(t)$ is the random variable denoting the number of components of type $k$ in phase $i$ which function at time $t \\in [\\tau_i, \\tau_{i+1})$. In the same vein as \\cref{sec:pms.same}, if $R(t)$ is being evaluated at $t \\ge \\tau_{i+1}$ then $C_{ik}(t) \\triangleq C_{ik}(\\tau_{i+1})$. By the definition of $\\rho(t)$, $R(t)$ will never be evaluated for $t < \\tau_{i}$.\n\nWe can simplify, by defining that $\\mathbb{P}\\left( C_{ik}(t) = 0 \\right) = 1$ when $m_{ik}=0$.\n\\begin{align*}\n \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\bigcap_{k=1}^{K} \\left\\{ C_{ik}(t) = l_{ik} \\right\\} \\right) &= \\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\mathbb{P}\\left( C_{ik}(t) = l_{ik} \\right) \\\\\n &= \\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\left[ \\binom{m_{ik}}{l_{ik}} (R_{ik}(t))^{l_{ik}} (1-R_{ik}(t))^{m_{ik}-l_{ik}} \\right]\n\\end{align*}\nwith\n\\begin{equation}\n R_{ik}(t) = \\frac{1-F_k(\\min \\{t, \\tau_{i+1}\\})}{1-F_k(\\tau_i)} \\label{eq:comprel2}\n\\end{equation}\nwhere $F_k(\\cdot)$ is the CDF of the component lifetime distribution for the meta-type $k$.\n\nConsequently, for any time $t$ during the mission, we have the reliability of the system characterised by:\n\n\\begin{align}\n R(t) &= \\sum_{l_{11}=0}^{m_{11}} \\cdots \\sum_{l_{\\rho(t),K}=0}^{m_{\\rho(t),K}} \\left\\{ \\Phi_{\\rho(t)}(l_{11}, \\dots, l_{1K}, \\dots, l_{\\rho(t),1}, \\dots, l_{\\rho(t),K}) \\vphantom{\\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\left[ \\binom{m_{ik}}{l_{ik}} (R_{ik}(t))^{l_{ik}} (1-R_{ik}(t))^{m_{ik}-l_{ik}} \\right]} \\right. \\nonumber \\\\\n & \\qquad\\qquad\\ \\left. \\times \\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\left[ \\binom{m_{ik}}{l_{ik}} (R_{ik}(t))^{l_{ik}} (1-R_{ik}(t))^{m_{ik}-l_{ik}} \\right] \\right\\} \\label{eq:survsigPMS2}\n\\end{align}\nwhere $m_{ik} = l_{jk}$ for $j = \\max \\{ j : j < i, m_{jk} > 0 \\}$. That is, $m_{ik}$ is the number components which were working in the most recent preceding phase where this component meta-type appears.\n\n\\subsubsection{Special case: Exponential component lifetimes}\n\nExponentially distributed component lifetimes again provide simplifications. Now, the $R_{ik}(t) \\equiv R_k(t)$ due to the memoryless property of the Exponential distribution.\n\nFurthermore, we can relax the definition of a meta-type of component. The definition of component meta-types serves two purposes: (i) to ensure that $m_{ik}$ can be determined without tracking the individual functioning status of all components; and (ii) to ensure that the conditional CDFs of all components of the same meta-type in a phase are the same. The second purpose is made entirely redundant by the memoryless nature of the Exponential distribution. 
The first purpose remains, but can be achieved with a weaker definition of meta-type.\n\n\\begin{definition}{(Exponential meta-type)}\n Components are defined to be of the same \\emph{exponential meta-type} when they are of the same Exponentially distributed physical type, and if once any pair of components of the same \\emph{exponential meta-type} appear in a phase together, they both appear in all subsequent phases where either component appears.\n\\end{definition}\n\nIn other words, components of the same exponential meta-type may first appear in the system at different phases, but thereafter should appear whenever at least one such exponential meta-type component appears. This definition enables the determination of $m_{ik}$ as $m_{ik} = l_{jk} + m_{ik}^\\star$ for $j = \\max \\{ j : j < i, m_{jk} > 0 \\}$, where $m_{ik}^\\star$ is the number of components of exponential meta-type $k$ appearing for the first time in phase $i$.\n\nThe benefits of Exponential component lifetimes can be mixed in a system containing both meta-type and exponential meta-types since a crucial feature of survival signatures is the factorisation of such types so that they do not interact.\n\n\\section{Numerical examples}\n\\label{sec:examples}\n\n\\subsection{Example 1}\n\nWe first consider the PMS shown in \\cref{fig:pms1}. The duration of each phase is taken to be 10 hours, and the failure rate of each component in each phase is $10^{-4}$\/hour.\n\nThe survival signatures of this PMS can be obtained using \\cref{eq:survsig0}. The elements of the survival signature which are non-zero are shown in \\cref{tab:pms1.survsig} --- that is, rows where $\\Phi(l_1)=0, \\Phi(l_1, l_2)=0$ and $\\Phi(l_1, l_2, l_3)=0$ are omitted. The table is grouped into a nested sequence of phases, with just the first phase shown, followed by the first two phases together and finally all phases --- this helps emphasise and clarify the sequential dependence of phases, where $m_k$ depends on $l_{k-1}$.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular}{ccccccccc}\n \\hline\n \\multicolumn{2}{c}{First phase} & \\multicolumn{3}{c}{Phase 1+2} & \\multicolumn{4}{c}{All Phases} \\tabularnewline\n \\multicolumn{2}{c}{$0 \\le t \\le 10$} & \\multicolumn{3}{c}{$10 < t \\le 20$} & \\multicolumn{4}{c}{$20 < t \\le 30$} \\tabularnewline\n \\hline \n $l_1$ & $\\Phi(l_1)$ & $l_1$ & $l_2$ & $\\Phi(l_1, l_2)$ & $l_1$ & $l_2$ & $l_3$ & $\\Phi(l_{1},l_{2},l_{3})$ \\tabularnewline\n \\hline \n \\hline\n 3 & 1 & 3 & 1 & 1 & 3 & 2 & 2 & $\\frac{2}{3}$ \\tabularnewline\n & & 3 & 2 & 1 & 3 & 3 & 2 & $\\frac{2}{3}$ \\tabularnewline\n & & 3 & 3 & 1 & 3 & 3 & 3 & 1 \\tabularnewline\n \\hline \n \\end{tabular}\n \\caption{Survival signature of the PMS shown in \\cref{fig:pms1}}\n \\label{tab:pms1.survsig}\n\\end{table}\n\nWe can obtain the conditional reliability of components using the conditional failure rate of the component in each phase. \\Cref{eq:survsigPMS1a} then renders the reliability of the PMS as a whole. The results are shown in \\cref{tab:pms1.R} and \\cref{fig:pms1.R}. These results concord with those found using an independent method in \\citep{zang1999bdd}.\n\nOf note is the jump discontinuity in the reliability function at $t=20$, as shown in \\cref{fig:pms1.R}. This occurs because a failure of component $A$ during phase 2 does not necessarily cause failure of the system at that point, so long as at least one of components $B$ or $C$ work. 
However, in this situation the PMS will fail instantaneously upon commencing phase 3 at $t=20^+$. Consequently, the size of the jump discontinuity in fact corresponds to the probability of the event $\\{ A$ fails in phase 2, but the system still functions$\\}$.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.85\\textwidth}{@{\\extracolsep{\\fill}}ccccccc@{\\extracolsep{\\fill}}}\n \\hline\n $t$ & $0$ & $10^{-}$ & $10^{+}$ & $20^{-}$ & $20^{+}$ & $30$ \\tabularnewline\n $R$ & 1 & 0.99700 & 0.997700 & 0.997700 & 0.99601 & 0.99501 \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Reliability of the PMS in example 1}\n \\label{tab:pms1.R}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS1R}\n \\caption{Reliability of the PMS in example 1.}\n \\label{fig:pms1.R}\n\\end{figure}\n\n\n\\subsection{Example 2}\n\nFor the PMS shown in \\cref{fig:pms2}, phases 1, 2 and 3 last for 10, 90 and 100 hours respectively. All components in each phase are of the same type and the lifetime distribution of these components follows a two-parameter Weibull distribution. \\Cref{tab:pms2.pars} summarises the distribution information of the components in each phase.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.75\\textwidth}{@{\\extracolsep{\\fill}}lccc@{\\extracolsep{\\fill}}}\n \\hline\n Parameter & Phase 1 & Phase 2 & Phase 3 \\tabularnewline\\hline \n Scale & 250 & 1000 & 300 \\tabularnewline\n Shape & 2.6 & 3.2 & 2.6 \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Conditional distribution information of the components in each phase}\n \\label{tab:pms2.pars}\n\\end{table}\n\nAs described in \\cref{sec:pms.diff}, if some components of the same type appear in a phase and also appear in some subsequent phases --- but not simultaneously --- then these components should be considered as different types of component. For this example, this means that despite the fact they all share a common failure rate within phases, components $A$ and $E$ need to be labelled as type 1 and the remainder as type 2, because ageing will have been different.\n\nThe survival signatures of this PMS are shown in \\cref{tab:pms2.survsig}, with rows where $\\Phi(l_{11},l_{12})=0, \\Phi(l_{11},l_{12},l_{21},l_{22})=0$ and $\\Phi(l_{11},l_{12},l_{21},l_{22},l_{32})=0$ suppressed. The reliability of the PMS is shown in \\cref{tab:pms2.R} and \\cref{fig:pms2.R}.\n\nWe again see a jump discontinuity in the reliability curve depicted in \\cref{fig:pms2.R}, at $t=10$. In this instance, if component $E$ fails during phase 1 the system will still function, but instantaneous failure will occur once phase 2 commences. This is evident in \\cref{tab:pms2.R}, which shows the jump discontinuity is of size $2.3 \\times 10^{-4}$. Indeed, this should correspond to the probability that the system survives phase 1 but with component $E$ failing during that phase. That is:\n\\begin{align*}\n & \\mathbb{P}(A, B \\mbox{ function} \\cap E \\mbox{ fails in phase 1}) \\\\\n & \\quad= \\mathbb{P}(E \\mbox{ fails in phase 1}) \\mathbb{P}(A, B \\mbox{ function} \\,|\\, E \\mbox{ fails in phase 1}) \\\\\n & \\quad = \\int_0^{10} b a^{-b} t^{b-1} e^{-(t\/a)^b}\\,dt \\left(1 - \\int_0^{10} b a^{-b} t^{b-1} e^{-(t\/a)^b}\\,dt \\right)^2 \\\\\n & \\quad \\approx 2.3 \\times 10^{-4} \\ \\ \\mbox{for } a=250, b=2.6\n\\end{align*}\nas required. 
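The same figure can be checked numerically; a minimal sketch using only the Weibull conditional CDF and the parameter values stated above (variable names are illustrative) is:\n\\begin{verbatim}\nfrom math import exp\n\na, b = 250.0, 2.6                    # phase 1 Weibull scale and shape\nF10 = 1.0 - exp(-(10.0 \/ a) ** b)    # P(a phase 1 component fails by t = 10)\nprint(F10 * (1.0 - F10) ** 2)        # E fails, A and B survive: approx 2.3e-4\n\\end{verbatim}\n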
Hence, PMS can exhibit jump discontinuities where probability mass from non-critical failures in one phase accumulate onto phase change boundaries when the system layout switches.\n\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular}{llllllllllllll}\n \\hline\n \\multicolumn{3}{l}{The first phase} & \\multicolumn{5}{l}{The first two phases} & \\multicolumn{6}{l}{All phases} \\tabularnewline\n \\hline \n $l_{11}$ & $l_{12}$ & $\\Phi_1$ & $l_{11}$ & $l_{12}$ & $l_{21}$ & $l_{22}$ & $\\Phi_{12}$ & $l_{11}$ & $l_{12}$ & $l_{21}$ & $l_{22}$ & $l_{32}$ & $\\Phi_S$ \\tabularnewline\n \\hline\n \\hline\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\/2 & 1 & 1 & 1 & 1 & 1 & 1\/2 \\tabularnewline\n 2 & 1 & 1 & 2 & 1 & 1 & 1 & 1\/2 & 1 & 1 & 1 & 1 & 2 & 1\/2 \\tabularnewline\n & & & 2 & 1 & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 3 & 1\/2 \\tabularnewline\n & & & 2 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1\/2 \\tabularnewline\n & & & & & & & & 2 & 1 & 1 & 1 & 2 & 1\/2 \\tabularnewline\n & & & & & & & & 2 & 1 & 1 & 1 & 3 & 1\/2 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 0 & 1 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 0 & 2 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 1 & 1 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 1 & 2 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 1 & 3 & 1 \\tabularnewline\n \\hline\n \\end{tabular}\n \\caption{Survival signature of the PMS shown in \\cref{fig:pms2}}\n \\label{tab:pms2.survsig}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS2R}\n \\caption{Reliability of the PMS in example 2.}\n \\label{fig:pms2.R}\n\\end{figure}\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.85\\textwidth}{@{\\extracolsep{\\fill}}ccccccc@{\\extracolsep{\\fill}}}\n \\hline\n $t$ & $0$ & $10^{-}$ & $10^{+}$ & $100^{-}$ & $100^{+}$ & $200$ \\tabularnewline\n $R$ & 1 & 0.999768 & 0.999536 & 0.999086 & 0.999086 & 0.998910 \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Reliability of the PMS in example 2.}\n \\label{tab:pms2.R}\n\\end{table}\n\n\\subsection{Example 3}\n\nIn this final example, we replicate the space application mission discussed by Zang \\citep{zang1999bdd} and Mural \\citep{mural1999dependability}. This example includes the full complexity of real-world PMSs, where there is now heterogeneity of component types within phases. This means that multiple component types arise necessarily and not merely as a side effect of identical components appearing in differing phases. There are five phases involved in this space mission: launch is the first phase, followed by Hibern.1, Asteroid, Hibern.2, and finally Comet. The reliability block diagram is shown in \\cref{fig:pms3}. The five phases last for 48, 17520, 672, 26952 and 672 hours, respectively. 
The failure rates of the components in each phase are given in \\cref{tab:pms3.lambda}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\textwidth]{PMS3}\n \\caption{Reliability block diagram of the space application.}\n \\label{fig:pms3}\n\\end{figure}\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular}{llllll}\n \\hline\n & Phase1 & Phase 2 & Phase 3 & Phase 4 & Phase 5 \\tabularnewline\n \\hline\n $H_a$, $H_b$, $H_c$, $H_d$ & $10^{-5}$ & $10^{-6}$ & $10^{-5}$ & $10^{-6}$ & $10^{-5}$ \\tabularnewline\n $L_a$, $L_b$ & $5 \\times 10^{-5}$ & 0 & 0 & 0 & 0 \\tabularnewline\n $A_a$, $A_b$ & 0 & 0 & $10^{-5}$ & 0 & 0 \\tabularnewline\n $C_a$, $C_b$ & 0 & 0 & 0 & 0 & $10^{-4}$ \\tabularnewline\n \\hline\n \\end{tabular}\n \\caption{Failure rates of the components.}\n \\label{tab:pms3.lambda}\n\\end{table}\n\nAs shown in \\cref{tab:pms3.types}, in order to calculate the reliability of the PMS, the 4 `real' component types must be divided into 5 types when using the methodology presented in this paper. That is, although $H_a, H_b, H_c,$ and $H_d$ have homogeneous failure rates throughout all phases, because they do not always appear together they will exhibit different ageing. Consequently, these are split into two `pseudo' types.\n\nThe result of analysing the reliability of this PMS is shown in \\cref{tab:pms3.R} and \\cref{fig:pms3.R}. The results found using the new methodology we have presented in this paper are in agreement with the entirely independent method in \\citep{zang1999bdd}. \n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.85\\textwidth}{@{\\extracolsep{\\fill}}ccccc@{\\extracolsep{\\fill}}}\n \\hline\n Type 1 & Type 2 & Type 3 & Type 4 & Type 5 \\tabularnewline\n $H_a$, $H_b$ & $H_c$, $H_d$ & $L_a$, $L_b$ & $A_a$, $A_b$ & $C_a$, $C_b$ \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Types of components in example 3.}\n \\label{tab:pms3.types}\n\\end{table}\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\resizebox{0.9\\textwidth}{!}{%\n \\begin{tabular}{ccccccccccc}\n \\hline\n $t$ & $0$ & $48^{-}$ & $48^{+}$ & $17568^{-}$ & $17568^{+}$ & $18240^{-}$ & $18240^{+}$ & $45192^{-}$ & $45192^{+}$ & $45864$ \\tabularnewline\n $R$ & 1 & 0.99999 & 0.99999 & 0.99968 & 0.99964 & 0.99862 & 0.99862 & 0.99670 & 0.99600 & 0.98943 \\tabularnewline\n \\hline \n \\end{tabular}}\n \\caption{Reliability of the PMS in example 3.}\n \\label{tab:pms3.R}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS3R}\n \\caption{Reliability of the PMS in example 3, with inset graph providing blown-up detail of first 200 hours of operation.}\n \\label{fig:pms3.R}\n\\end{figure}\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nComputing the reliability of a PMS is considerably more complex than that of a non-PMS, due to the variation in system structure between phases and the dependencies between component failures in different phases. Consequently, reliability analysis of PMSs has become one of the most challenging topics in the field of system reliability evaluation and maintenance engineering in recent decades. 
Despite some progress towards efficient and effective methods for measuring the reliability of PMS, it is still difficult to analyze large systems without considerable computational expense and even where it is possible, many methods fail to convey intuition about the reliability of the system.\n\nIn this paper, a new and efficient method for reliability analysis of PMS is proposed using survival signature. Signatures have been proven to be an efficient method for estimating the reliability of systems. A new kind of survival signature is derived to represent the structure function of the PMS. Then the proposed survival signature is applied to calculate the reliability of the PMS. Reliability analysis of a system using signatures could separate the system structure from the component probabilistic failure distribution. Therefore, the proposed approach is easy to be implemented in practice and has high computational efficiency.\n\n\nNote that reliability analysis of PMSs with multiple failure mode components is not studied in this paper. In practice the components may perhaps have more than one failure mode. In ongoing work, the authors are considering component importance analysis, extending work such as \\cite{feng2016imprecise, eryilmaz2018marginal} to PMSs.\n\\section*{Acknowledgements}\n\nThe authors gratefully acknowledge the support of National Natural Science Foundation of China (51575094), China Postdoctoral Science Foundation (2017M611244), China Scholarship Council (201706085013) and Fundamental Research Funds for the Central Universities (N160304004).\n\nThis work was performed whilst the first author was a visitor at Durham University.\n\n\\section*{References}\n\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzziqdw b/data_all_eng_slimpj/shuffled/split2/finalzziqdw new file mode 100644 index 0000000000000000000000000000000000000000..0922e18fd4f9f65fb59a93d7d84c19086f2990e7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzziqdw @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\tThe classical binomial process has been studied by \\citet{jakeman} and has been used to model fluctuations in a train of events in quantum optics.\n\tRecall that the classical binomial process $ \\mathcal{N}(t)$, $t\\ge 0 $, with birth rate $\\lambda>0$ and\n\t\tdeath rate $\\mu>0$, has state probabilities\n\t\t$p_n(t) = \\Pr \\{ \\mathcal{N}(t) = n | \\mathcal{N}(0) = M \\}$ which solve the following Cauchy problem: \n\t\t\\begin{align}\n\t\t\t\\label{state}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\mathrm d}{\\mathrm dt} p_n(t) = \\mu (n+1) p_{n+1}(t) - \\mu n p_n(t) - \\lambda\n\t\t\t\t(N-n) p_n(t) \\\\\n\t\t\t\t\\qquad \\qquad + \\lambda (N-n+1) p_{n-1}(t), \\qquad \\qquad \\qquad 0 \\leq n \\leq N, \\\\\n\t\t\t\tp_n(0) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t1, & n=M, \\\\\n\t\t\t\t\t0, & n \\neq M.\n\t\t\t\t\\end{cases}\n\t\t\t\\end{cases}\n\t\t\\end{align}\n\t\tThe initial number of individuals is $M \\geq 1$, and $N \\geq M$.\n\n\t\tNotice that the binomial process has a completely different behaviour compared to the\n\t\tclassical linear birth-death process. Here the birth rate is proportional to the\n\t\tdifference between a larger fixed number and the number of\tindividuals present while the\n\t\tdeath rate remains linear. 
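These rates can be simulated directly; a minimal Gillespie-type sketch is given below, in which the parameter values are arbitrary choices made purely for illustration and the long-run behaviour anticipates the binomial equilibrium discussed next.\n\\begin{verbatim}\nimport random\n\ndef simulate(lam, mu, N, M, t_max):\n    # Birth rate lam*(N - n), death rate mu*n, started from n = M at t = 0.\n    t, n = 0.0, M\n    while True:\n        birth, death = lam * (N - n), mu * n\n        total = birth + death\n        if total == 0.0:\n            return n\n        t += random.expovariate(total)\n        if t > t_max:\n            return n\n        n += 1 if random.random() < birth \/ total else -1\n\n# At large times the state is approximately Binomial(N, lam \/ (lam + mu)),\n# so the sample mean below should be close to N * lam \/ (lam + mu) = 20 \/ 3.\nruns = [simulate(lam=1.0, mu=2.0, N=20, M=5, t_max=50.0) for _ in range(10000)]\nprint(sum(runs) \/ len(runs))\n\\end{verbatim}\n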
The whole evolution of the binomial process develops\n\t\tin the region $[0, N]$.\n\t\tFurthermore it is shown that at large times, an equilibrium is reached and displays a binomial distribution.\n\t\n\t\tFrom \\eqref{state}, it is straightforward to realise that the generating function\n\t\t\\begin{align}\n\t\t\tQ(u,t) = \\sum_{n=0}^N (1-u)^n p_n(t), \\qquad |1-u| \\leq 1,\n\t\t\\end{align}\n\t\tis the solution to\n\t\t\\begin{align}\n\t\t\t\\label{fgp}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\partial}{\\partial t} Q(u,t) = -\\mu u \\frac{\\partial}{\\partial u} Q(u,t) - \\lambda u\n\t\t\t\t(1-u) \\frac{\\partial}{\\partial u} Q(u,t) -\\lambda N u Q(u,t), \\\\\n\t\t\t\tQ(u,0) = (1-u)^M.\n\t\t\t\\end{cases}\n\t\t\\end{align}\nMoreover, \\citet{jakeman} showed that at large times, the evolving population follows a binomial distribution with parameter $\\lambda \/ (\\lambda + \\mu)$. \n\nIn this paper, we propose a fractional generalisation of the classical binomial process. The fractional generalization includes non-markovian and rapidly dissipating or bursting birth-death processes at small and regular times. We also derive more statistical and related properties of the newly developed fractional stochastic process, which are deemed useful in real applications. Note that the theory and results presented here may have applications beyond quantum optics and may be of interest in other disciplines. \tAs in the preceding works on fractional Poisson process (e.g.\\ \\citet{laskin}) and other\n\t\tfractional point processes (see e.g.\\ \\citet{cah,pol}), fractionality is obtained by replacing the\n\t\tinteger-order derivative in the governing differential equations with a fractional-order derivative. \tIn particular, we use the Caputo fractional derivative of a well-behaved function $f(t)$ and is defined as\n\t\t\\begin{align}\n\t\t\t\\label{caputo}\n\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} f(t) = \\frac{1}{\\Gamma(m-\\nu)} \\int_0^t \\frac{\\frac{\\mathrm d^m}{\n\t\t\t\\mathrm d \\tau^m}f(\\tau)}{(t-\\tau)^{\\nu-m+1}}\n\t\t\t\\mathrm d\\tau, \\qquad m=\\lceil \\nu \\rceil,\n\t\t\\end{align}\n\twhere ``$\\lceil y \\rceil$\" is the smallest integer that is not less than $y$. Note that the Caputo fractional derivative operator is in practice a convolution of the standard derivative\nwith a power law kernel which adds more memory in the process. This characteristic is certainly an improvement from a physical viewpoint. 
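As a quick numerical illustration of definition \eqref{caputo} (this sketch is an added illustration, not part of the original derivation, and assumes SciPy is available), consider $f(t)=t^2$ with $0<\nu<1$, for which the Caputo derivative has the closed form $2t^{2-\nu}/\Gamma(3-\nu)$; the fragment below checks this by direct quadrature of the convolution integral:
\begin{verbatim}
# Numerical check of the Caputo derivative (0 < nu < 1) of f(t) = t^2
# against its closed form 2 t^(2-nu) / Gamma(3-nu).
from math import gamma
from scipy.integrate import quad

def caputo_t2(t, nu):
    # For m = 1 the definition reduces to
    # (1/Gamma(1-nu)) * int_0^t f'(tau) (t-tau)^(-nu) dtau, with f'(tau) = 2*tau;
    # the weakly singular kernel is passed to quad as an algebraic weight.
    integral, _ = quad(lambda tau: 2.0 * tau, 0.0, t,
                       weight='alg', wvar=(0.0, -nu))
    return integral / gamma(1.0 - nu)

t, nu = 1.5, 0.7
print(caputo_t2(t, nu))                        # quadrature value
print(2.0 * t**(2.0 - nu) / gamma(3.0 - nu))   # closed form; the two agree
\end{verbatim}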
By simple substitution, we obtain the following initial value problems for the probability generating function and the state\n\t\tprobabilities:\n\t\t\\begin{align}\n\t\t\t\\label{fgpfrac}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} Q^\\nu(u,t) = -\\mu u \\frac{\\partial}{\\partial u} Q^\\nu(u,t) - \\lambda u\n\t\t\t\t(1-u) \\frac{\\partial}{\\partial u} Q^\\nu(u,t) -\\lambda N u Q^\\nu(u,t), \\\\\n\t\t\t\tQ^\\nu(u,0) = (1-u)^M, \\qquad \\qquad \\qquad |1-u|\\leq 1,\n\t\t\t\\end{cases}\n\t\t\\end{align}\t\t\n\t\t\\begin{align}\n\t\t\t\\label{statefrac}\n\t\t\t\\begin{cases}\n\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} p_n^\\nu(t) = \\mu (n+1) p_{n+1}^\\nu(t) - \\mu n p_n^\\nu(t) - \\lambda\n\t\t\t\t(N-n) p_n^\\nu(t) \\\\\n\t\t\t\t\\qquad \\qquad \\qquad + \\lambda (N-n+1) p_{n-1}^\\nu(t), & 0 \\leq n \\leq N, \\\\\n\t\t\t\tp_n^\\nu(0) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t1, & n=M, \\\\\n\t\t\t\t\t0, & n \\neq M,\n\t\t\t\t\\end{cases}\n\t\t\t\\end{cases}\n\t\t\\end{align}\n\t\twhere $\\nu \\in (0,1]$.\n\nWe organized the rest of the paper as follows. In Section 2, the statistical properties of the fractional binomial process are derived by solving the preceding initial-value problem. Section 3 explored the sub-models that are directly extractable from the fractional binomial process. We then conclude the paper by providing more discussions and future extensions of the study in Section 4.\n\n\n\t\\section{Main properties of the fractional binomial process}\n\t\t\\label{se}\n\t\t\n\t\tFirstly, we prove a subordination relation which is of fundamental importance to deriving many of our results.\n\n\t\t\\begin{thm}\n\t\t\t\\label{sub}\n\t\t\tThe fractional binomial process $\\mathcal{N}^\\nu(t)$ has the following one-dimensional representation:\n\t\t\t\\begin{align}\n\t\t\t\t\\mathcal{N}^\\nu(t) \\overset{\\text{d}}{=} \\mathcal{N}(V_t^\\nu), \n\t\t\t\\end{align}\n\t\t\twhere $\\mathcal{N}(t)$ is a classical binomial process, $V_t^\\nu$, $t\\ge 0$, is the inverse process\n\t\t\tof the $\\nu$-stable subordinator (see e.g.\\ \\citet{meer}), $t \\ge 0$, and $\\nu \\in (0,1]$.\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tLet $\\text{Pr} \\{ V_t^\\nu \\in \\mathrm ds \\} = h(s,t) \\, \\mathrm ds$\n\t\t\t\tbe the law of the inverse $\\nu$-stable subordinator. We now show that\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ^\\nu(u,t) = \\sum_{n=0}^N (1-u)^n p_n^\\nu(t) = \\int_0^\\infty Q(u,s) \\, h(s,t) \\, \\mathrm ds\n\t\t\t\t\\end{align}\n\t\t\t\tsatisfy the fractional differential equation \\eqref{fgpfrac}. We can then write\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} \\int_0^\\infty Q(u,s) h(s,t) \\mathrm ds\n\t\t\t\t\t= \\int_0^\\infty Q(u,s) \\frac{\\partial^\\nu}{\\partial t^\\nu} h(s,t) \\mathrm ds.\n\t\t\t\t\\end{align}\n\t\t\t\tSince it can be easily verified that $h(s,t)$ is a solution to the fractional equation\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} h(s,t) = - \\frac{\\partial}{\\partial s} h(s,t),\n\t\t\t\t\\end{align}\n\t\t\t\twe readily obtain\n\t\t\t\t\\begin{align}\n\t\t\t\t\t& \\frac{\\partial^\\nu}{\\partial t^\\nu} Q^\\nu(u,t) \\\\\n\t\t\t\t\t& = - \\int_0^\\infty Q(u,s)\n\t\t\t\t\t\\frac{\\partial}{\\partial s} h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t& = \\left. 
- h(s,t) Q(u,s) \\right|_{s=0}^\\infty + \\int_0^\\infty h(s,t) \\frac{\\partial}{\\partial s}\n\t\t\t\t\tQ(u,s) \\mathrm ds \\notag \\\\\n\t\t\t\t\t& = \\int_0^\\infty \\left[ -\\mu u \\frac{\\partial}{\\partial u} Q(u,s)\n\t\t\t\t\t-\\lambda u(1-u) \\frac{\\partial}{\\partial u} Q(u,s)\n\t\t\t\t\t-\\lambda NuQ(u,s) \\right] h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t& = -\\mu u \\frac{\\partial}{\\partial u} Q^\\nu(u,t)\n\t\t\t\t\t-\\lambda u(1-u) \\frac{\\partial}{\\partial u} Q^\\nu(u,t)\n\t\t\t\t\t-\\lambda NuQ^\\nu(u,t). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\t\n\t\tIn the following theorem, we derive the expected number of individuals $\\mathbb{E}\\,\\mathcal{N}^\\nu(t)$ or the expected population size of the fractional binomial process at any time $t \\ge 0$.\n\t\t\\begin{thm}\n\t\t\tFor the fractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$, $\\nu \\in (0,1]$, we have\n\t\t\t\\begin{align}\n\t\t\t\t\\label{mea}\n\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu(t) = \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\tE_{\\nu,1} \\left( -(\\lambda+\\mu) t^\\nu \\right) + N\\frac{\\lambda}{\\lambda+\\mu},\n\t\t\t\\end{align}\n\twhere\n\t\\[\n\tE_{\\alpha, \\beta} \\left( \\xi \\right) = \\sum\\limits_{r=0}^\\infty \\frac{\\xi^r}{\\Gamma (\\alpha r + \\beta) }\t\n\t\\]\nis the Mittag-Leffler function.\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tBy considering that\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{sharp}\n\t\t\t\t\t\\left. -\\frac{\\partial}{\\partial u} Q^\\nu(u,t) \\right|_{u=0} = \\mathbb{E}\\,\\mathcal{N}^\\nu(t)\n\t\t\t\t\\end{align}\n\t\t\t\tand on the base of \\eqref{fgpfrac},\n\t\t\t\twe can write\n\t\t\t\t\\begin{align}\n\t\t\t\t\t-\\frac{\\partial^\\nu}{\\partial t^\\nu} \\frac{\\partial}{\\partial u} Q^\\nu(u,t) = {} &\n\t\t\t\t\t\\mu \\left( \\frac{\\partial}{\\partial u} Q^\\nu(u,t) + u \\frac{\\partial^2}{\\partial u^2}\n\t\t\t\t\tQ^\\nu(s,t) \\right) \\\\\n\t\t\t\t\t& + \\lambda \\left( \\frac{\\partial}{\\partial u} Q^\\nu(u,t)\n\t\t\t\t\t+ u \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& - \\lambda \\left( 2 u \\frac{\\partial}{\\partial u} Q^\\nu(u,t) + u^2\n\t\t\t\t\t\\frac{\\partial^2}{\\partial u^2}Q^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& + \\lambda N \\left( Q^\\nu(u,t)\n\t\t\t\t\t+ u \\frac{\\partial}{\\partial u} Q^\\nu(u,t) \\right), \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tthus leading to the Cauchy problem\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{bel}\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} \\mathbb{E} \\mathcal{N}^\\nu(t) =\n\t\t\t\t\t\t-(\\mu +\\lambda) \\mathbb{E} \\mathcal{N}^\\nu(t) + \\lambda N, \\\\\n\t\t\t\t\t\t\\mathbb{E} \\mathcal{N}^\\nu(0) = M.\n\t\t\t\t\t\\end{cases}\n\t\t\t\t\\end{align}\n\t\t\t\tThe solution to \\eqref{bel} can be written as (using formula (4.1.65) of \\citet{kilbas})\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu(t)\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) \\\\\n\t\t\t\t\t& + \\int_0^t (t-s)^{\\nu-1}\n\t\t\t\t\tE_{\\nu,\\nu} \\left(-(\\lambda+\\mu)(t-s)^\\nu\\right) \\lambda N \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) + \\lambda N \\int_0^t y^{\\nu-1} E_{\\nu,\\nu}\n\t\t\t\t\t\\left(-(\\lambda+\\mu) y^\\nu\\right) \\mathrm dy \\notag \\\\\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) + \\lambda N \\biggl| 
-\\frac{\n\t\t\t\t\tE_{\\nu,1}\\left( -(\\lambda+\\mu)y^\\nu \\right)}{(\\lambda+\\mu)} \\biggr|_0^t \\notag \\\\\n\t\t\t\t\t= {} & M E_{\\nu,1} \\left(-(\\lambda+\\mu)t^\\nu\\right) - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\tN \\left( E_{\\nu,1} \\left( -(\\lambda+\\mu) t^\\nu \\right) -1 \\right) \\notag \\\\\n\t\t\t\t\t= {} & \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)E_{\\nu,1}\n\t\t\t\t\t\\left( -(\\lambda+\\mu) t^\\nu \\right) + N\\frac{\\lambda}{\\lambda+\\mu}. \\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\tFigure \\ref{afig} shows the mean value \\eqref{mea} in both cases $\\left[ M-N\\lambda\/(\\lambda+\\mu)\n\t\t\\right] < 0$ and $\\left[ M-N\\lambda\/(\\lambda+\\mu)\n\t\t\\right] > 0$ for specific values of the remaining parameters.\n\t\tNote also that when $M=N\\lambda\/(\\lambda+\\mu)$ the mean value $\\mathbb{E}\\,\n\t\t\\mathcal{N}^\\nu(t) = N\\lambda\/(\\lambda+\\mu)$ is constant.\n\t\n\t\t\\begin{figure}[h!t!tb!p!]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p1.pdf}\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p2.pdf}\n\t\t \n\t\t\n\t\t\t\\caption{\\label{afig}The mean value of the fractional binomial process $\\mathbb{E}\\,\n\t\t\t\t\\mathcal{N}^\\nu(t)$. For both graphs we have $N=100$, $M=40$, $\\nu=0.7$. The rates are respectively\n\t\t\t\t$(\\lambda,\\mu)=(1,1)$ (left) and $(\\lambda,\\mu)=(1,3)$ (right).}\n\t\t\\end{figure}\n\n\t\n\t\tWe now proceed to deriving the variance $\\mathbb{V}\\text{ar} \\, \\mathcal{N}^\\nu(t)$ of the fractional\n\t\tbinomial process, starting from the second factorial moment.\n\t\t\n\t\t\\begin{thm}\t\t\n\t\t\t\t\tFor the fractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$, $\\nu \\in (0,1]$, we have\n\t\t\t\\begin{align}\n\t\t\t\t\\label{va}\n\t\t\t\t\\mathbb{V}\\text{ar} & \\, \\mathcal{N}^\\nu(t) \\\\\t\t\t\n\t\t\t\t= {} & \\left( \\frac{\\lambda^2 N(N-1)}{(\\lambda +\\mu)^2} -\\frac{2\\lambda M(N-1)}{\\lambda+\\mu}\n\t\t\t\t+ M(M-1) \\right) E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t& + \\left( \\frac{2\\lambda^2 N}{(\\lambda+\\mu)^2} - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t(N+2M) +M \\right)\n\t\t\t\tE_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t& - \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)^2 \\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\right)^2\n\t\t\t\t+ \\frac{N \\lambda\\mu}{(\\lambda+\\mu)^2}. \\notag\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tFrom \\eqref{fgpfrac}, we have\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\frac{\\partial^\\nu}{\\partial t^\\nu} \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) = {} &\n\t\t\t\t\t-\\mu \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) -\\mu \\left( \\frac{\\partial^2}{\\partial u^2}\n\t\t\t\t\tQ^\\nu(u,t) + u \\frac{\\partial^3}{\\partial u^3} Q^\\nu(u,t) \\right) \\\\\n\t\t\t\t\t& -\\lambda \\left( (1-2u) \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) -2 \\frac{\\partial}{\\partial u}\n\t\t\t\t\tQ^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& - \\lambda \\left( (1-2u) \\frac{\\partial^2}{\\partial u^2}Q^\\nu(u,t)\n\t\t\t\t\t+(u-u^2)\\frac{\\partial^3}{\\partial u^3} Q^\\nu(u,t) \\right) \\notag \\\\\n\t\t\t\t\t& -\\lambda N \\frac{\\partial}{\\partial u} Q^\\nu(u,t) - \\lambda N \\left( \\frac{\\partial}{\\partial u}\n\t\t\t\t\tQ^\\nu(u,t) + u \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) \\right). 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tRecalling \\eqref{sharp} and the equality\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\left. \\frac{\\partial^2}{\\partial u^2} Q^\\nu(u,t) \\right|_{u=0} = \\mathbb{E}\n\t\t\t\t\t\\left( \\mathcal{N}^\\nu(t)(\\mathcal{N}^\\nu(t) -1) \\right) = H^\\nu(t),\n\t\t\t\t\\end{align}\n\t\t\t\twe obtain\n\t\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{sarp}\n\t\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} H^\\nu(t) & =\n\t\t\t\t\t-2 \\mu H^\\nu(t) -2\\lambda H^\\nu(t) -2\\lambda \\mathbb{E}\\,\\mathcal{N}^\\nu(t) + 2\\lambda N\n\t\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu(t) \\\\\n\t\t\t\t\t& = -2 (\\lambda+\\mu) H^\\nu(t) + 2 \\lambda (N-1) \\mathbb{E}\\,\\mathcal{N}^\\nu(t). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tBy substituting \\eqref{mea} into \\eqref{sarp}, we arrive at the Cauchy problem\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{cacio}\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\frac{\\mathrm d^\\nu}{\\mathrm dt^\\nu} H^\\nu(t) = -2 (\\lambda+\\mu) H^\\nu(t)\n\t\t\t\t\t\t+ 2 \\lambda (N-1) \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\t\tE_{\\nu,1} \\left( -(\\lambda+\\mu)t^\\nu \\right) \\\\\n\t\t\t\t\t\t\\qquad \\qquad \\qquad + 2 \\lambda^2 N (N-1) \\frac{1}{\\lambda+\\mu} \\\\\n\t\t\t\t\t\tH^\\nu(0) = M(M-1),\n\t\t\t\t\t\\end{cases}\n\t\t\t\t\\end{align}\n\t\t\t\tthat can be solved using the Laplace transform $\\widetilde{H}^\\nu(z) = \\int_0^\\infty\n\t\t\t\te^{-zt} H^\\nu(t)\\, \\mathrm dt$ as follows:\n\t\t\t\t\\begin{align}\n\t\t\t\t\tz^\\nu \\widetilde{H}^\\nu(z) &- z^{\\nu-1} M(M-1) \\\\\n\t\t\t\t\t= {} & -2 (\\lambda+\\mu) \\widetilde{H}^\\nu(z) + 2 \\lambda (N-1)\n\t\t\t\t\t\\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) \\frac{z^{\\nu-1}}{z^\\nu + (\\lambda+\\mu)} \\notag \\\\\n\t\t\t\t\t& + \\frac{1}{z} 2 \\lambda^2 N(N-1)\\frac{1}{\\lambda+\\mu}. \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThe Laplace transform then reads\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\label{elastic}\n\t\t\t\t\t\\widetilde{H}^\\nu(z) = {} & M(M-1) \\frac{z^{\\nu-1}}{z^\\nu+2(\\lambda+\\mu)} \\\\\n\t\t\t\t\t& +2\\lambda (N-1)\n\t\t\t\t\t\\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) \\frac{z^{\\nu-1}}{(z^\\nu+(\\lambda+\\mu))\n\t\t\t\t\t(z^\\nu+2(\\lambda+\\mu))} \\notag \\\\\n\t\t\t\t\t& + 2 \\lambda^2 N(N-1) \\frac{1}{\\lambda+\\mu} \\cdot \\frac{z^{-1}}{z^\\nu\n\t\t\t\t\t+2(\\lambda+\\mu)} \\notag \\\\\n\t\t\t\t\t= {} & M(M-1) \\frac{z^{\\nu-1}}{z^\\nu+2(\\lambda+\\mu)} \\notag \\\\\n\t\t\t\t\t& + \\frac{2\\lambda(N-1)}{\\lambda+\\mu}\n\t\t\t\t\t\\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) \\left( \\frac{z^{\\nu-1}}{z^\\nu+(\\lambda+\\mu)}\n\t\t\t\t\t- \\frac{z^{\\nu-1}}{z^\\nu + 2(\\lambda+\\mu)} \\right) \\notag \\\\\n\t\t\t\t\t& + 2 \\lambda^2 N(N-1) \\frac{1}{\\lambda+\\mu} \\cdot \\frac{z^{-1}}{z^\\nu\n\t\t\t\t\t+2(\\lambda+\\mu)} \\notag.\n\t\t\t\t\\end{align}\n\t\t\t\tEquation \\eqref{elastic} then implies that\n\t\t\t\t\\begin{align}\n\t\t\t\t\tH^\\nu&(t) \\\\\n\t\t\t\t\t= {} & M(M-1) E_{\\nu,1} \\left( -2(\\lambda+\\mu)t^\\nu \\right) \\notag \\\\\n\t\t\t\t\t& +\\frac{2\\lambda (N-1)}{\\lambda+\\mu} \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\t\\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) - E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu) \\right) \\notag \\\\\n\t\t\t\t\t& + 2\\lambda^2 N(N-1)\\frac{1}{\\lambda+\\mu} t^\\nu E_{\\nu,\\nu+1} \\left( -2(\\lambda+\\mu)t^\\nu\n\t\t\t\t\t\\right). 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tConsidering that $t^\\nu E_{\\nu,\\nu+1}(at^\\nu) = a^{-1} (E_{\\nu,1}(at^\\nu)-1)$, we obtain\n\t\t\t\t\\begin{align}\n\t\t\t\t\tH^\\nu(t) = {} & M(M-1) E_{\\nu,1} \\left( -2(\\lambda+\\mu)t^\\nu \\right) \\\\\n\t\t\t\t\t& +\t\\frac{2\\lambda(N-1)}{\\lambda+\\mu} \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\tE_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\frac{2\\lambda (N-1)}{\\lambda+\\mu} \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)\n\t\t\t\t\tE_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& -\\frac{\\lambda^2}{(\\lambda+\\mu)^2} N(N-1) E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t+ \\frac{\\lambda^2}{(\\lambda+\\mu)^2} N(N-1) \\notag \\\\\n\t\t\t\t\t= {} & M(M-1) E_{\\nu,1} \\left( -2(\\lambda+\\mu)t^\\nu \\right) \\notag \\\\\n\t\t\t\t\t& +\n\t\t\t\t\t\\frac{2\\lambda M(N-1)}{\\lambda+\\mu} E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\frac{2 \\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} E_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\frac{2\\lambda M(N-1)}{\\lambda+\\mu} E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& + \\frac{2\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& -\\frac{\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t+\\frac{\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} \\notag \\\\\n\t\t\t\t\t= {} & \\frac{\\lambda^2 N(N-1)}{(\\lambda+\\mu)^2} \\notag \\\\\n\t\t\t\t\t& + E_{\\nu,1}(-2(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t\\left( \\frac{\\lambda^2 N(N-1)}{(\\lambda + \\mu)^2} - \\frac{2\\lambda M(N-1)}{\\lambda+\\mu}\n\t\t\t\t\t+M(M-1) \\right) \\notag \\\\\n\t\t\t\t\t& - E_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\left( \\frac{2 \\lambda^2 N(N-1)}{(\\lambda+\\mu)^2}\n\t\t\t\t\t-\\frac{2\\lambda M (N-1)}{\\lambda+\\mu} \\right). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThe variance can thus be written as\n\t\t\t\t\\begin{align}\n\t\t\t\t\t\\mathbb{V}\\text{ar} & \\, \\mathcal{N}^\\nu(t) \\\\\n\t\t\t\t\t= {} & H^\\nu(t) + \\mathbb{E}\\, \\mathcal{N}^\\nu(t)\n\t\t\t\t\t- (\\mathbb{E}\\, \\mathcal{N}^\\nu(t))^2 \\notag \\\\\n\t\t\t\t\t= {} & H^\\nu(t) + \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right) E_{\\nu,1} (-(\\lambda+\\mu)t^\\nu)\n\t\t\t\t\t\\notag \\\\\n\t\t\t\t\t& + N \\frac{\\lambda}{\\lambda+\\mu} - \\left( M-\\frac{\\lambda}{\\lambda+\\mu} \\right)^2\n\t\t\t\t\t\\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\right)^2 \\notag \\\\\n\t\t\t\t\t& - N^2 \\frac{\\lambda^2}{(\\lambda+\\mu)^2} - 2 \\frac{N \\lambda}{\\lambda+\\mu}\n\t\t\t\t\t\\left( M-N \\frac{\\lambda}{\\lambda+\\mu} \\right) E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\lambda^2 N(N-1)}{(\\lambda +\\mu)^2} -\\frac{2\\lambda M(N-1)}{\\lambda+\\mu}\n\t\t\t\t\t+ M(M-1) \\right) E_{\\nu,1} (-2(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& + \\left( \\frac{2\\lambda^2 N}{(\\lambda+\\mu)^2} - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\t(N+2M) +M \\right)\n\t\t\t\t\tE_{\\nu,1} (-(\\lambda+\\mu)t^\\nu) \\notag \\\\\n\t\t\t\t\t& - \\left( M-N\\frac{\\lambda}{\\lambda+\\mu} \\right)^2 \\left( E_{\\nu,1}(-(\\lambda+\\mu)t^\\nu) \\right)^2\n\t\t\t\t\t+ \\frac{N \\lambda\\mu}{(\\lambda+\\mu)^2}. 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\t\n\t\tExploiting Theorem \\ref{sub}, we derive the explicit expression of the extinction\n\t\tprobability $p_0^\\nu(t) = \\text{Pr} \\{ \\mathcal{N}^\\nu(t) = 0 | \\mathcal{N}^\\nu(0) = M \\}$ below.\n\t\t\n\t\t\\begin{thm}\n\t\t\tThe extinction probability $p_0^\\nu(t) = \\text{Pr} \\{ \\mathcal{N}^\\nu(t) = 0 | \\mathcal{N}^\\nu(0) = M \\}$\n\t\t\tfor a fractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$ is\n\t\t\t\\begin{align}\n\t\t\t\t\\label{extinction}\n\t\t\t\tp_0^\\nu(t) = {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r \\\\\n\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h} (-1)^h E_{\\nu,1}(-(r+h)(\\lambda+\\mu)t^\\nu). \\notag\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tIt is known \\citep{jakeman} that the generating function $Q(u,t) = \\sum_{n=0}^N (1-u)^n p_n(t)$\n\t\t\t\tfor the classical binomial process can be written as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ(u,t)\n\t\t\t\t\t= {} & \\left[ 1-\\left( 1-e^{-(\\mu+\\lambda)t} \\right)\\frac{\\lambda}{\\lambda+\\mu}u \\right]^{N-M} \\\\\n\t\t\t\t\t& \\times \\left[ 1-\\left( \\left(1-e^{-(\\mu+\\lambda)t}\\right)\\frac{\\lambda}{\\lambda+\\mu} +\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)u \\right]^M. \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThis suggests that the extinction probability for the classical case can be written as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_0(t) = {} & \\left[1-\\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\t+ \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t}\\right]^{N-M} \\\\\n\t\t\t\t\t& \\times \\left[ 1-\\left(\\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\t-e^{-(\\mu+\\lambda)t} \\frac{\\lambda}{\\lambda+\\mu}+e^{-(\\mu+\\lambda)t}\\right) \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left[ \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\lambda}{\\lambda+\\mu} e^{-(\\mu+\\lambda)t} \\right]^{N-M}\n\t\t\t\t\t\\left[ \\frac{\\mu}{\\lambda+\\mu} - \\frac{\\mu}{\\lambda+\\mu}e^{-(\\mu+\\lambda)t} \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{1}{\\lambda+\\mu} \\right)^N \\mu^M \\left( \\mu+\\lambda e^{-(\\lambda+\\mu)t} \\right)^{N-M}\n\t\t\t\t\t\\left( 1-e^{-(\\lambda+\\mu)t} \\right)^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\left( 1+\\frac{\\lambda}{\\mu}\n\t\t\t\t\te^{-(\\lambda+\\mu)t} \\right)^{N-M} \\left( 1-e^{-(\\lambda+\\mu)t} \\right)^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r e^{-r(\\lambda+\\mu)t} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M\n\t\t\t\t\t\\binom{M}{h} (-1)^h e^{-h(\\lambda+\\mu)t}\n\t\t\t\t\t\\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r \\sum_{h=0}^M \\binom{M}{h} (-1)^h e^{-(r+h)(\\lambda+\\mu)t}. 
\\notag\n\t\t\t\t\\end{align}\n\t\t\t\tUsing Theorem \\ref{sub}, we now obtain\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_0^\\nu(t) = {} & \\int_0^\\infty p_0(s) h(s,t) \\mathrm ds \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\sum_{r=0}^{N-M} \\binom{N-M}{r}\\left( \\frac{\\lambda}{\\mu} \\right)^r \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h}\n\t\t\t\t\t(-1)^h \\int_0^\\infty e^{-(r+h)(\\lambda+\\mu)s} q(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\binom{N-M}{r} \\left( \n\t\t\t\t\t\\frac{\\lambda}{\\mu} \\right)^r \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h} (-1)^h\n\t\t\t\t\tE_{\\nu,1}(-(r+h)(\\lambda+\\mu)t^\\nu). \\notag\n\t\t\t\t\\end{align}\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\n\t\t\\begin{thm}\n\t\t\tThe state probabilities $p_n^\\nu(t) = \\text{Pr} \\{ \\mathcal{N}^\\nu(t) = n | \\mathcal{N}^\\nu(0)\n\t\t\t= M \\}$, $\\lambda >0$, $\\mu > 0$, have the following form:\n\t\t\t\\begin{align}\n\t\t\t\t\\label{stato}\n\t\t\t\tp_n^\\nu(t) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n < \\min(M,N-M), \\\\\n\t\t\t\t\t\\sum_{r=0}^{N-M} g_{n,r}^\\nu(t), & N-M \\leq n < M, \\: M > N-M, \\\\\n\t\t\t\t\t\\sum_{r=n-M}^n g_{n,r}^\\nu(t), & M \\leq n < N-M, \\: M < N-M, \\\\\n\t\t\t\t\t\\sum_{r=n-M}^{N-M} g_{n,r}^\\nu(t), & \\max(M,N-M) \\leq n \\leq N,\n\t\t\t\t\\end{cases}\n\t\t\t\\end{align}\n\t\t\tand\n\t\t\t\\begin{align}\n\t\t\t\tp_n^\\nu(t) =\n\t\t\t\t\\begin{cases}\n\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n< M, \\\\\n\t\t\t\t\t\\sum_{r=n-M}^M g_{n,r}^\\nu(t), & M \\leq n \\leq N,\n\t\t\t\t\\end{cases}\t\t\t\n\t\t\t\\end{align}\n\t\t\twhen $N-M=M$, and where\n\t\t\t\\begin{align}\n\t\t\t\tg_{n,r}^\\nu(t)\n\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\binom{N-M}{r} \\binom{M}{n-r} \\\\\n\t\t\t\t& \\times \\sum_{m_1=0}^r \\binom{r}{m_1} (-1)^{m_1}\n\t\t\t\t\\sum_{m_2=0}^{N-M-r} \\binom{N-M-r}{m_2} \\left( \\frac{\\lambda}{\\mu} \\right)^{m_2} \\notag \\\\\n\t\t\t\t& \\times \\sum_{m_3=0}^{n-r} \\binom{n-r}{m_3} \\left( \\frac{\\lambda}{\\mu} \\right)^{n-m_3}\n\t\t\t\t\\sum_{m_4=0}^{M-n+r} \\binom{M-n+r}{m_4} (-1)^{m_4} \\notag \\\\\n\t\t\t\t& \\times E_{\\nu,1} \\left(\n\t\t\t\t-(m_1+m_2+m_3+m_4)(\\mu+\\lambda)t^\\nu \\right). \\notag\n\t\t\t\\end{align}\n\t\t\t\n\t\t\t\\begin{proof}\n\t\t\t\tWe start by rewriting the probability generating function of the classical binomial\n\t\t\t\tprocess as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ(u,t)& \\\\\n\t\t\t\t\t= {} & \\left[ 1-\\left( 1-e^{-(\\mu+\\lambda)t} \\right)\\frac{\\lambda}{\\lambda+\\mu}u \\right]^{N-M}\\notag\\\\\n\t\t\t\t\t& \\times \\left[ 1-\\left( \\left(1-e^{-(\\mu+\\lambda)t}\\right)\\frac{\\lambda}{\\lambda+\\mu} +\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)u \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left[ (1-u) \\left( \\frac{\\lambda}{\\lambda+\\mu} - \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t}\t\\right) + \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right]^{N-M} \\notag \\\\\n\t\t\t\t\t& \\times \\left[ \\frac{\\lambda}{\\lambda+\\mu}(1-u) + \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\mu}{\\lambda+\\mu}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right. \\notag \\\\\n\t\t\t\t\t& \\left. 
-\\frac{\\mu}{\\lambda+\\mu} e^{-(\\mu+\\lambda)t}\n\t\t\t\t\t-u \\frac{\\mu}{\\lambda+\\mu} e^{-(\\mu+\\lambda)t} \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\lambda}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\left[ (1-u) \\left( 1-e^{-(\\mu+\\lambda)t} \\right) + \\frac{\\mu}{\\lambda}\n\t\t\t\t\t+e^{-(\\mu+\\lambda)t} \\right]^{N-M} \\notag \\\\\n\t\t\t\t\t& \\times \\left[ (1-u) \\left( 1+\\frac{\\mu}{\\lambda}e^{-(\\mu+\\lambda)t} \\right)\n\t\t\t\t\t+ \\frac{\\mu}{\\lambda} - \\frac{\\mu}{\\lambda} e^{-(\\mu+\\lambda)t} \\right]^M \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\lambda}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\sum_{r=0}^{N-M} \\binom{N-M}{r} (1-u)^r \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1-e^{-(\\mu+\\lambda)t} \\right)^r\n\t\t\t\t\t\\left( \\frac{\\mu}{\\lambda} + e^{-(\\mu+\\lambda)t} \\right)^{N-M-r} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{h=0}^M \\binom{M}{h} (1-u)^h \\left( 1+\\frac{\\mu}{\\lambda}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)^h \\left( \\frac{\\mu}{\\lambda}-\\frac{\\mu}{\\lambda}\n\t\t\t\t\te^{-(\\mu+\\lambda)t} \\right)^{M-h} \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\sum_{r=0}^{N-M} \\sum_{j=r}^{M+r}\n\t\t\t\t\t(1-u)^j \\binom{N-M}{r} \\binom{M}{j-r} \\left( \\frac{\\lambda}{\\mu}\n\t\t\t\t\t-\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^r \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1+\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^{N-M-r}\n\t\t\t\t\t\\left( \\frac{\\lambda}{\\mu} + e^{-(\\mu+\\lambda)t} \\right)^{j-r} \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1-e^{-(\\mu+\\lambda)t} \\right)^{M-j+r}. \\notag\n\t\t\t\t\\end{align}\n\t\t\tLetting\n\t\t\t\t\\begin{align}\n\t\t\t\t\tg_{j,r}(t) = {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\binom{N-M}{r} \\binom{M}{j-r} \\left( \\frac{\\lambda}{\\mu}\n\t\t\t\t\t-\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^r \\\\\n\t\t\t\t\t& \\times \\left( 1+\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)t} \\right)^{N-M-r}\n\t\t\t\t\t\\left( \\frac{\\lambda}{\\mu} + e^{-(\\mu+\\lambda)t} \\right)^{j-r} \\notag \\\\\n\t\t\t\t\t& \\times \\left( 1-e^{-(\\mu+\\lambda)t} \\right)^{M-j+r}, \\notag\n\t\t\t\t\\end{align}\n\t\t\t\twe have\n\t\t\t\t\\begin{align}\n\t\t\t\t\tQ(u,t) =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{j=0}^{M-1} (1-u)^j \\sum_{r=0}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=M}^{N-M-1}\n\t\t\t\t\t\t(1-u)^j \\sum_{r=j-M}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=N-M}^N (1-u)^j \\sum_{j-M}^{N-M} g_{j,r}(t),\n\t\t\t\t\t\t& M < N-M, \\\\\n\t\t\t\t\t\t\\sum_{j=0}^{N-M-1} (1-u)^j \\sum_{r=0}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=N-M}^{M-1}\n\t\t\t\t\t\t(1-u)^j \\sum_{r=0}^{N-M} g_{j,r}(t)\\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=M}^N (1-u)^j \\sum_{r=j-M}^{N-M} g_{j,r}(t),\n\t\t\t\t\t\t& M > N-M, \\\\\n\t\t\t\t\t\t\\sum_{j=0}^{M-1} (1-u)^j \\sum_{r=0}^j g_{j,r}(t) \\\\\n\t\t\t\t\t\t\\qquad + \\sum_{j=M}^N (1-u)^j \\sum_{r=j-M}^{M} g_{j,r}(t), & N-M=M.\n\t\t\t\t\t\\end{cases}\t\t\t\t\t\t\n\t\t\t\t\\end{align}\n\t\t\t\tThe classical state probabilities therefore read\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n(t) =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}(t), & 0 \\leq n < \\min(M,N-M), \\\\\n\t\t\t\t\t\t\\sum_{r=0}^{N-M} g_{n,r}(t), & N-M \\leq n < M, \\: M > N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^n g_{n,r}(t), & M \\leq n < N-M, \\: M < N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^{N-M} g_{n,r}(t), & \\max(M,N-M) \\leq n \\leq N,\n\t\t\t\t\t\\end{cases}\n\t\t\t\t\\end{align}\n\t\t\t\twhich 
reduce to\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n(t) =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}(t), & 0 \\leq n< M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^M g_{n,r}(t), & M \\leq n \\leq N,\n\t\t\t\t\t\\end{cases}\t\t\t\n\t\t\t\t\\end{align}\n\t\t\t\twhen $N-M=M$.\n\t\t\t\t\n\t\t\t\tExploiting Theorem \\ref{sub}, we can derive the state probabilities for the\n\t\t\t\tfractional binomial process $\\mathcal{N}^\\nu(t)$, $t \\ge 0$, as\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n^\\nu(t) & = \\int_0^\\infty p_n(s) h(s,t) \\mathrm ds \\\\\n\t\t\t\t\t& =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n < \\min(M,N-M), \\\\\n\t\t\t\t\t\t\\sum_{r=0}^{N-M} g_{n,r}^\\nu(t), & N-M \\leq n < M, \\: M > N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^n g_{n,r}^\\nu(t), & M \\leq n < N-M, \\: M < N-M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^{N-M} g_{n,r}^\\nu(t), & \\max(M,N-M) \\leq n \\leq N,\n\t\t\t\t\t\\end{cases} \\notag\n\t\t\t\t\\end{align}\n\t\t\t\tor\n\t\t\t\t\\begin{align}\n\t\t\t\t\tp_n^\\nu(t) = \\int_0^\\infty p_n(s) h(s,t) \\mathrm ds =\n\t\t\t\t\t\\begin{cases}\n\t\t\t\t\t\t\\sum_{r=0}^n g_{n,r}^\\nu(t), & 0 \\leq n< M, \\\\\n\t\t\t\t\t\t\\sum_{r=n-M}^M g_{n,r}^\\nu(t), & M \\leq n \\leq N,\n\t\t\t\t\t\\end{cases}\t\t\t\t\t\t\n\t\t\t\t\\end{align}\t\t\t\t\n\t\t\t\tfor $N-M = M$. Note that\n\t\t\t\t\\begin{align}\n\t\t\t\t\tg_{n,r}^\\nu(t) = {} & \\int_0^\\infty g_{n,r}(s) h(s,t) \\mathrm ds \\\\\n\t\t\t\t\t= {} & \\int_0^\\infty \\left[ \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\t\\binom{N-M}{r} \\binom{M}{n-r} \\left( \\frac{\\lambda}{\\mu}\n\t\t\t\t\t-\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)s} \\right)^r \\right. \\notag \\\\\n\t\t\t\t\t& \\left. \\times \\left( 1+\\frac{\\lambda}{\\mu} e^{-(\\mu+\\lambda)s} \\right)^{N-M-r}\n\t\t\t\t\t\\left( \\frac{\\lambda}{\\mu} + e^{-(\\mu+\\lambda)s} \\right)^{n-r} \\right. \\notag \\\\\n\t\t\t\t\t& \\times \\left. \\left( 1-e^{-(\\mu+\\lambda)s} \\right)^{M-n+r} \\right] h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\binom{N-M}{r} \\binom{M}{n-r}\n\t\t\t\t\t\\sum_{m_1=0}^r \\binom{r}{m_1} (-1)^{m_1} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_2=0}^{N-M-r} \\binom{N-M-r}{m_2} \\left( \\frac{\\lambda}{\\mu} \\right)^{m_2} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_3=0}^{n-r} \\binom{n-r}{m_3} \\left( \\frac{\\lambda}{\\mu} \\right)^{n-m_3}\n\t\t\t\t\t\\sum_{m_4=0}^{M-n+r} \\binom{M-n+r}{m_4} (-1)^{m_4} \\notag \\\\\n\t\t\t\t\t& \\times \\int_0^\\infty e^{\n\t\t\t\t\t-(m_1+m_2+m_3+m_4)(\\mu+\\lambda)s} h(s,t) \\mathrm ds \\notag \\\\\n\t\t\t\t\t= {} & \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\binom{N-M}{r} \\binom{M}{n-r}\n\t\t\t\t\t\\sum_{m_1=0}^r \\binom{r}{m_1} (-1)^{m_1} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_2=0}^{N-M-r} \\binom{N-M-r}{m_2} \\left( \\frac{\\lambda}{\\mu} \\right)^{m_2} \\notag \\\\\n\t\t\t\t\t& \\times \\sum_{m_3=0}^{n-r} \\binom{n-r}{m_3} \\left( \\frac{\\lambda}{\\mu} \\right)^{n-m_3}\n\t\t\t\t\t\\sum_{m_4=0}^{M-n+r} \\binom{M-n+r}{m_4} (-1)^{m_4} \\notag \\\\\n\t\t\t\t\t& \\times E_{\\nu,1} \\left(\n\t\t\t\t\t-(m_1+m_2+m_3+m_4)(\\mu+\\lambda)t^\\nu \\right). 
\notag\n\t\t\t\t\\end{align}\n\t\t\t\tThis concludes the proof.\n\t\t\t\\end{proof}\n\t\t\t\\begin{flushright} \\qed \\end{flushright}\n\t\t\\end{thm}\n\t\t\n\t\t\\begin{remark}\n\t\t\tFrom \\eqref{stato}, we retrieve the extinction probability \\eqref{extinction} when $n=0$.\n\t\t\\end{remark}\n\t\t\n\t\t\\begin{remark}\n\t\t\tAs $t \\rightarrow \\infty$, the population in a fractional binomial\n\t\t\tprocess obeys a binomial distribution, i.e., \n\t\t\t\\begin{align}\n\t\t\t\t& \\lim_{t \\rightarrow \\infty} Q^\\nu(u,t) \\\\\n\t\t\t\t& = \\sum_{r=0}^{N-M} \\sum_{j=r}^{M+r} (1-u)^j \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\\binom{N-M}{r} \\binom{M}{j-r} \\left( \\frac{\\lambda}{\\mu} \\right)^j \\notag \\\\\n\t\t\t\t& = \\sum_{r=0}^{N-M} \\sum_{h=0}^M (1-u)^{h+r} \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N\n\t\t\t\t\\binom{N-M}{r} \\binom{M}{h} \\left( \\frac{\\lambda}{\\mu} \\right)^{h+r} \\notag \\\\\n\t\t\t\t& = \\left( \\frac{\\mu}{\\lambda+\\mu} \\right)^N \\left( 1+(1-u)\\frac{\\lambda}{\\mu} \\right)^{N-M}\n\t\t\t\t\\left( 1+(1-u)\\frac{\\lambda}{\\mu} \\right)^M \\notag \\\\\n\t\t\t\t& = \\left( \\frac{\\mu}{\\lambda+\\mu} + \\frac{\\lambda}{\\lambda+\\mu}(1-u) \\right)^N \\notag \\\\\n\t\t\t\t& = \\left( 1- \\frac{\\lambda}{\\lambda+\\mu}u \\right)^N, \\notag\n\t\t\t\\end{align}\n\t\t\twhich is the probability generating function of a binomial random variable with parameter $\\lambda\/(\\lambda+\\mu)$,\n\t\t\tand should be compared with equation $\\mathrm{(9)}$ of \\citet{jakeman}.\n\\end{remark}\nNote also that, as $t \\rightarrow \\infty$, we indeed observe (from \\eqref{mea} and \\eqref{va}) that\n\t\t\t\\begin{align}\n\t\t\t\t\\mathbb{E}\\, \\mathcal{N}^\\nu(t) \\longrightarrow N \\frac{\\lambda}{\\lambda+\\mu}\n\t\t\t\\end{align}\nand \n\t\t\\begin{align}\n\t\t\t\t\\mathbb{V}\\text{ar}\\, \\mathcal{N}^\\nu(t) \\longrightarrow N \\frac{\\lambda}{\\lambda+\\mu}\\left(1- \\frac{\\lambda}{\\lambda+\\mu} \\right),\n\t\t\t\\end{align}\nwhich are the mean and variance of the binomial equilibrium process. This suggests that the fractional generalization still preserves the binomial limit. \n\t\t\n\t\\section{Related fractional stochastic processes}\n\t\n\t\tIn this section, we focus our attention on two pure branching processes which are in fact sub-models of the more general fractional binomial process described in Section \\ref{se}. These are\n\t\tthe fractional linear pure death process and the saturable fractional pure birth process. More specifically, these processes can be directly obtained from the fractional binomial process by letting $\\lambda=0$ and $\\mu=0$, respectively. The main motivation underlying the analysis of these specific cases\n\t\tis that they are widely used in practice, particularly in modeling evolving populations in interacting environments that may lead to extinction or saturation.\n\t\tOur discussion of the fractional linear pure death process complements that of \\citet{sakhno}. In addition, we analyze the saturable fractional pure birth\n\t\tprocess in more detail.\n\t\n\t\tWhen $\\lambda=0$, we obtain the mean value \\eqref{mea} of the fractional linear pure\n\t\tdeath process $\\mathcal{N}^\\nu_d(t)$, $t \\ge 0$, $\\nu \\in (0,1]$ (see \\citet{sakhno}):\n\t\t\\begin{align}\n\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu_d(t) = M E_{\\nu,1}(-\\mu t^\\nu), \\qquad t \\ge 0. 
\n\t\t\\end{align}\n\t\tFigure \\ref{bfig} shows the mean value of the fractional linear pure death process (left)\n\t\tfor specific values of the parameters $\\mu$ and $\\nu$.\n\t\n\t\t\t\\begin{figure}[h!t!b!p!]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p3.pdf}\n\t\t\t\\includegraphics[height=2.5in, width=2.2in]{p4.pdf}\n\t\t \n\t\t\n\t\t\t\\caption{\\label{bfig}The mean value of the fractional binomial process $\\mathbb{E}\\,\n\t\t\t\t\\mathcal{N}^\\nu(t)$ in the two different cases of pure death (left, $(\\lambda,\\mu)=(0,1)$)\n\t\t\t\tand pure birth (right, $(\\lambda,\\mu)=(1,0)$).\n\t\t\t\tFor both cases we have $N=100$, $M=40$, $\\nu=0.7$.}\n\t\t\\end{figure}\n\t\n\t\tThe variance can be easily determined using \\eqref{va} as\n\t\t\\begin{align}\n\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^\\nu_d(t) = {} & M(M-1) E_{\\nu,1} \\left( -2\\mu t^\\nu \\right)\n\t\t\t+ M E_{\\nu,1}\\left( -\\mu t^\\nu \\right) \\\\\n\t\t\t& - M^2 \\left[ E_{\\nu,1}\\left( - \\mu\n\t\t\tt^\\nu \\right) \\right]^2, \\qquad t \\ge 0, \\: \\nu \\in (0,1], \\notag\n\t\t\\end{align}\n\t\tand this reduces for $\\nu = 1$ to the variance of the classical process (see \\citet{bailey}, page 91, formula (8.32)):\n\t\t\\begin{align}\n\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^1_d(t) = M e^{-\\mu t}\\left( 1-e^{-\\mu t} \\right).\n\t\t\\end{align}\n\t\tFurthermore, the extinction probability of the fractional linear pure death\n\t\tprocess $\\mathcal{N}^\\nu_d(t)$, $t \\ge 0$ (see \\citet{sakhno}, page 73, formula (2.1)),\n\t\t\\begin{align}\n\t\t\t\\text{Pr} \\{ \\mathcal{N}^\\nu_d(t) = 0 | \\mathcal{N}^\\nu_d(0) = M \\}\n\t\t\t= \\sum_{h=0}^M \\binom{M}{h} (-1)^h E_{\\nu,1} \\left( -h \\mu t^\\nu \\right), \\qquad t \\ge 0,\n\t\t\\end{align}\n\t\tcan also be derived directly from \\eqref{extinction}.\n\t\t\n\t\tFor the saturable fractional pure birth process $\\mathcal{N}^\\nu_b(t)$, \n\t\tthe mean value reduces to\n\t\t\\begin{align}\n\t\t\t\\label{satur}\n\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu_b(t) = N - \\left( N-M \\right)E_{\\nu,1}\n\t\t\t\\left( -\\lambda t^\\nu \\right).\n\t\t\\end{align}\n\t\tFigure \\ref{bfig} shows the expected value of the saturable fractional pure birth process (right)\n\t\tdetermined for specific values of the parameters $\\lambda$ and $\\nu$. The variance instead remains rather complicated and can be written by specialising \\eqref{va} as\n\t\t\\begin{align}\n\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^\\nu_b(t) = {} &\n\t\t\t\\left[ M(M-1)-N(N-1) \\right] E_{\\nu,1}\\left( -2\\lambda t^\\nu \\right) \\\\\n\t\t\t& - (N-M)(4N-1) E_{\\nu,1}\\left( -\\lambda t^\\nu \\right) - (M-N)^2 \\left[ E_{\\nu,1}\\left( -\\lambda t^\\nu\n\t\t\t\\right) \\right]^2. \\notag\n\t\t\\end{align}\nAs $t\\rightarrow \\infty$, \n\t\t\\begin{equation}\n\t\t\t\t\t\\mathbb{E}\\,\\mathcal{N}^\\nu_b(t) \\longrightarrow N \n\t\t\\end{equation}\n\t\tand\n\t\\begin{equation}\n\t\t\t\t\t\\mathbb{V}\\text{ar}\\,\\mathcal{N}^\\nu_b(t) \\longrightarrow 0\n\t\t\\end{equation}\t\t\nas expected. 
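Curves such as those in Figures \ref{afig} and \ref{bfig} are straightforward to reproduce numerically. The following minimal Python sketch (an added illustration, not the implementation used to produce the figures) evaluates the Mittag-Leffler function by truncating its defining series, which is adequate for small-to-moderate arguments, and uses it to compute the mean value \eqref{mea}:
\begin{verbatim}
# Mean of the fractional binomial process, formula (mea):
# E N^nu(t) = (M - N*lam/(lam+mu)) * E_{nu,1}(-(lam+mu) t^nu) + N*lam/(lam+mu)
from math import gamma

def mittag_leffler(x, nu, terms=150):
    # truncated series sum_r x^r / Gamma(nu*r + 1); for nu = 1 it equals exp(x),
    # and it becomes numerically unstable for large |x|
    return sum(x**r / gamma(nu * r + 1.0) for r in range(terms))

def mean_fbp(t, lam, mu, N, M, nu):
    p = lam / (lam + mu)
    return (M - N * p) * mittag_leffler(-(lam + mu) * t**nu, nu) + N * p

lam, mu, N, M, nu = 1.0, 3.0, 100, 40, 0.7  # right panel of Figure (afig)
print(mean_fbp(0.0, lam, mu, N, M, nu))     # equals M = 40 at t = 0
print(mean_fbp(2.0, lam, mu, N, M, nu))     # relaxes towards N*lam/(lam+mu) = 25
\end{verbatim}
Setting $\lambda=0$ or $\mu=0$ in the same routine recovers the fractional linear pure death mean and the saturable fractional pure birth mean \eqref{satur}, respectively.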
\n\n\t\nWe now determine the state probabilities $p_{n,b}^\nu(t) = \Pr \{ \mathcal{N}^\nu_b(t)=n\n\t\t| \mathcal{N}^\nu_b(0) = M \}$.\n\t\tWhen $\mu = 0$, the state probabilities can be derived from those of a\n\t\tnonlinear fractional pure birth process of \citet{pol}, and are given as\n\t\t\\begin{equation}\n\t\t\tp_{n,b}^\nu(t) =\n\t\t\t\\begin{cases}\n\t\t\t\t\\prod_{j=M}^{n-1} \\lambda_j \\sum_{m=M}^n \\frac{1}{\\prod_{l=M,l \\neq m}^n(\\lambda_l-\\lambda_m)}\n\t\t\t\tE_{\\nu,1}\\left( -\\lambda_m t^\\nu \\right), & M < n \\leq N ,\\\\\n\t\t\tE_{\\nu,1} \\left( -\\lambda_M t^\\nu \\right), & n=M.\n\t\t\t\\end{cases}\n\t\\end{equation}\n\tSubstituting the rates $\\lambda_j= \\lambda(N-j)$, we obtain\n\t\t\\begin{align}\n\t\t\tp_{n,b}^\\nu&(t) \\\\\n\t\t\t= {} & \\prod_{j=M}^{n-1} \\lambda(N-j) \\sum_{m=M}^n\n\t\t\t\\frac{E_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right)}{\\prod_{l=M,l \\neq m}^n(\\lambda (N-l)-\\lambda(N-m))}\n\t\t\t\\notag \\\\\n\t\t\t= {} & \\sum_{m=M}^n \\frac{(N-M)(N-M-1)\\dots (N-n+1)}{(m-M)(m-M-1)\\dots (m-m+1)\n\t\t\t(m-m-1)\\dots (m-n)} \\notag \\\\\n\t\t\t& \\times E_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right) \\notag \\\\\n\t\t\t= {} & \\sum_{m=M}^n \\frac{(N-M)!}{(N-n)!(m-M)!(n-m)!} (-1)^{n-m}\n\t\t\tE_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right) \\notag \\\\\n\t\t\t= {} & \\binom{N-M}{N-n} \\sum_{m=M}^n \\binom{n-M}{m-M} (-1)^{n-m}\n\t\t\tE_{\\nu,1}\\left( -\\lambda (N-m) t^\\nu \\right), \\quad M \\leq n \\leq N. \\notag\n\t\t\\end{align}\n\t\tThis and formula \\eqref{satur} show that the behaviour of the saturable fractional pure birth\n\t\tprocess is substantially different from that of the fractional Yule process. Similarly, the inter-birth waiting time $T_j^\\nu$, i.e.\\ the random\n\t\ttime separating the $j$th and $(j+1)$th birth, has law\n\t\t\\begin{align}\n\t\t\t\\Pr \\{ T_j^\\nu \\in \\mathrm ds \\} = \\lambda(N-j) s^{\\nu-1} E_{\\nu,\\nu} \\left( -\\lambda (N-j)s^\\nu \\right)\n\t\t\t\\mathrm ds, \\qquad j \\ge M, \\: s \\ge 0.\n\t\t\\end{align}\n\tFigure \\ref{bfig3} shows the sample paths of the saturable fractional (bottom) and classical (top) linear pure birth processes. Notably, the proposed model naturally includes processes or populations that saturate faster than the classical linear pure birth process. The figure also indicates that saturation of the fractional binomial process is faster due to the explosive growth\/birth bursts at small times and as $\nu \to 0$. Note that the parameters of these related fractional point processes can be estimated using the procedures of \citet{cap11b}.\n\n\t\t\\begin{figure}[h!t!b!p!]\n\t\t\t\\centering\n\t\t\t\\includegraphics[height=3in, width=4.5in]{satbirth.pdf}\n\t\t\n\t\t\t\\caption{\\label{bfig3}Sample trajectories of classical (top) and fractional (bottom) saturable linear pure birth processes using values $(N,M,\\lambda, \\nu)=(100,5,1,1)$ and $(N,M,\\lambda, \\nu)=(100,5,1,0.75)$, respectively.}\n\t\t\\end{figure}\n\t\n\n\\section{Concluding Remarks}\n We have proposed a generalization of the binomial process using the techniques of fractional calculus. The fractional generalization was obtained by replacing the integer-order derivative in the governing equations with a fractional-order (Caputo) derivative. In addition, more statistical properties of the fractional binomial process were derived. One interesting property of the fractional binomial process was that it still preserved the binomial limit at large times while enlarging the class of models at small and regular times that naturally include non-Markovian fluctuations with long memory. 
This potential made the proposed fractional binomial process appealing for real applications especially to the quantum optics community. New sub-models such as the saturable fractional pure birth process could also be automatically extracted from the proposed model. The generated sample trajectories of the saturable fractional linear pure birth process showed interesting features of the process such as the isolated bursts of the population growth particularly at small times. Overall, the fractional binomial process could be considered as a viable generalization of the classical binomial process.\n \n Although theoretical investigations have been done in the present paper, a number of issues are still left undone which could be considered as possible research extensions\nof the current exploration. These may include: application of this model to rapidly saturable binomial processes, and the formalization of the parameter estimation procedures of the proposed model. \n \n\t\t\t\t\n\t\t\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\\label{sec:intro}\n\\input{intro.tex}\n\n\n\\section{Related Works}\\label{sec:relwork}\n\\input{relwork.tex}\n\n\\section{Problem Definition}\\label{sec:problemDefinition}\n\\input{probdef.tex} \n\n\\section{Metric Embedding of Node-Pairs}\\label{sec:Methodology}\n\\input{methodology.tex}\n\n\\section{Experiments and Results}\\label{sec:experiemnts}\n\\input{experiments.tex}\n\n\\section{Conclusion}\\label{sec:conclusion}\n\\input{conclusion.tex}\n\n\n\n\\bibliographystyle{spmpsci}\n\n\\subsection{Dataset Descriptions}\\label{sec:DataSet}\n\nHere we discuss the construction and characteristics of the datasets used for\nexperiments. \n\n\\noindent {\\em Enron} email corpus \\cite{priebe2005scan} consists of email\nexchanges between Enron employees. The Enron dataset has $11$ time\nstamps and $16,836$ possible node-pairs; the task is to use first $10$ snapshots for predicting links in the\n$11^{th}$ snapshot. Following \\cite{KevinS:2013}, we aggregate data into time\nsteps of $1$ week. We use the data from weeks $147$ to $157$ of the data trace\nfor the experiments. The reason for choosing that window is that the snapshot\nof the graph at week $157$ has the highest number of edges. \\\\\n\n{\\noindent} {\\em Collaboration} dataset has $10$ time stamps with author collaboration information about $49,455$ author-pairs. The Collaboration dataset is constructed from citation\ndata containing $1.4$ million papers \\cite{Tang:2008}. We process the data to\nconstruct a network of authors with edges between them if they co-author a\npaper. Considering each year as a time stamp, the data of years $2000$-$2009$\n($10$ time stamps) is used for this experiment, where the data from the first\nnine time stamps is used for training and the last for prediction. Since this\ndata is very sparse, we pre-process the data to retain only the active authors,\nwho have last published papers on or after year $2010$; moreover, the selected\nauthors participate in at least two edges in seven or more time stamps. \\\\\n\n{\\noindent} {\\em Facebook1 and Facebook2} are network of Facebook \nwall posts \\cite{viswanath2009evolution}. Each vertex is a Facebook user account and\nan edge represents the event that one user posts a message on the wall of another user.\nBoth Facebook1 and Facebook2 has $9$ time stamps. Facebook1 has\n$219,453$ node-pairs. Facebook2 is an extended version of Facebook1 dataset with $883,785$\nnode-pairs. 
For pre-processing Facebook1 we follow the same setup as is discussed in\n\\cite{KevinS:2015}; wall posts of 90 days are aggregated in one time step.\n\nWe filter out all people who are active for less than 6 of the 9 \ntime steps, along with the people who have degree less than 30. \nFacebook2 is created using a similar\nmethod, but a larger sample of Facebook wall posts is used for this dataset.\n\n\\subsection{Evaluation Metrics}\nFor evaluating the proposed method we use two metrics, namely, area under\nPrecision-Recall (PR) curve (PRAUC) \\cite{Davis:2006} and an information\nretrieval metric, Normalized Discounted Cumulative Gain (NDCG). PRAUC is best suited for evaluating two class classification performance when class\nmembership is skewed towards one of the classes. This is exactly the case for\nlink prediction; the number of edges $(|E|)$ is very small compared to the\nnumber of possible node-pairs ${|V|\\choose 2}$ In such scenarios, area under the Precision-Recall curve (PRAUC) gives a more informative assessment of the algorithm's performance than other metrics such as, accuracy. The reason why PRAUC is more suitable for the skewed problem is that it does not factor in the count of true negatives in its calculation. In skewed data where the number of negative\nexamples is huge compared to the number of positive examples, true negatives\nare not that meaningful. \n\nWe also use NDCG, an information\nretrieval metric (widely used by the recommender systems community) to evaluate\nthe proposed method. NDCG measures the performance of link prediction system\nbased on the graded relevance of the recommended links. $NDCG_k$ varies from\n0.0 to 1.0, with 1.0 representing ideal ranking of edges. Here, $k$ is a parameter\nchosen by user representing the number of links ranked by the method. We use\n$k=50$ in all our experiments. \n\n\nSome of the earlier works on link prediction have used area under the ROC curve\n(AUC) to evaluate link prediction works \\cite{Gunes2015,Wang:2007}. But recent\nworks \\cite{Yang2015} have demonstrated the limitations of AUC and argued\nin favor of PRAUC over AUC for evaluation of link prediction. So we have not used AUC in this work.\n\n\\subsection{Competing Methods for Comparison}\n\nWe compare the performance of {\\sc DyLink2Vec}\\ based link prediction method with methods from four categories: (1) topological feature based methods, (2) feature time series\nbased methods~\\cite{Gunes2015}, (3) a deep learning based method, namely\nDeepWalk~\\cite{Perozzi:2014}, and (4) a tensor factorization based method\nCANDECOMP\/PARAFAC (CP)~\\cite{Dunlavy:2011}. \n\nBesides these four works, there are two other existing works for link\nprediction in dynamic network setting; one is based on deep Learning\n\\cite{Li:2014} (Conditional Temporal Restricted Boltzmann machine) and the other is based on a signature-based nonparametric method \\cite{ICML2012Sarkar_828}. We did not compare with these models as implementations of their models are not readily available, besides, both of these methods have numerous parameters which will make reproducibility of their results highly improbable and thus, conclusion derived from such experiments may not align with true understanding of the usefulness of the methods. Moreover, none of these methods give unsupervised feature representation for node-pairs in which we claim our main contribution. 
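For concreteness, the $NDCG_k$ score introduced in the previous subsection can be computed for a ranked list of predicted node-pairs with binary relevance along the following lines (an illustrative sketch; the graded-relevance variant used in our experiments may differ in minor details):
\begin{verbatim}
# NDCG_k for a ranked list of predicted links with binary relevance
# (1 if the recommended link truly appears, 0 otherwise).
import math

def ndcg_at_k(relevances, k=50):
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([1, 1, 0, 1, 0, 0, 1], k=5))  # about 0.80; 1.0 for an ideal ranking
\end{verbatim}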
\n\n\nFor \\textbf{topological feature based methods}, we consider four prominent topological\nfeatures: Common Neighbors ($CN$), Adamic-Adar ($AA$), Jaccard's\nCoefficient ($J$) and Katz measure ($Katz$). \nHowever, in existing works, these features are defined for\nstatic networks only; so we adapt these features for the dynamic network setting\nby computing the feature values over the collapsed\\footnote{Collapsed network is\nconstructed by superimposing all network snapshots(see Figure \\ref{fig:StaticLimitaion}).} dynamic network. \n\nWe also combine the above four features to construct a combined\nfeature vector of length four (\\textbf{J}accard's\nCoefficient, \\textbf{A}damic-Adar, \\textbf{C}ommon Neighbors and \\textbf{K}atz), which we call $JACK$ and use it with a classifier to build a\nsupervised link prediction method, and include this model in our comparison.\n\nSecond, we compare {\\sc DyLink2Vec}\\ with \\textbf{time-series based} neighborhood similarity scores proposed in \\cite{Gunes2015}. In this work, the authors consider several\nneighborhood-based node similarity scores combined with connectivity\ninformation (historical edge information). Authors use time-series of\nsimilarities to model the change of node similarities over time. Among $16$\nproposed methods, we consider $4$ that are relevant to the link prediction task\non unweighted networks and also have the best performance. \n$TS\\mbox{-}CN\\mbox{-}Adj$ represents time-series on normalized score of Common\nNeighbors and connectivity values at time stamps $[1,t]$. Similarly, we get\ntime-series based scores for Adamic-Adar ($TS\\mbox{-}AA\\mbox{-}Adj$), Jaccard's\nCoefficient ($TS\\mbox{-}J\\mbox{-}Adj$) and Preferential Attachment\n($TS\\mbox{-}PA\\mbox{-}Adj$). \n\nThird, we compare {\\sc DyLink2Vec}\\ with \\textbf{DeepWalk} \\cite{Perozzi:2014}, a latent node\nrepresentation based method. We use DeepWalk to construct latent\nrepresentation of nodes from the collapsed dynamic\nnetwork. Then we construct latent representation of node-pairs by computing\ncross product of latent representation of the participating nodes. For example,\nif the node representations in a network are vectors of size $l$, then the\nrepresentation of a node-pair $(u,v)$ will be of size $l^2$, constructed from\nthe cross product of $u$ and $v$'s representation. The DeepWalk based\nnode-pair representation is then used with a classifier to build a supervised\nlink prediction method. We choose node representation size $l=2,4,6,8,10$ and \nreport the best performance.\n\nFinally, we compare {\\sc DyLink2Vec}\\ with a tensor factorization based method, called\n\\textbf{CANDECOMP\/PARAFAC (CP)} \\cite{Dunlavy:2011}. In this method,\nthe dynamic network is represented as a three-dimensional tensor \n$\\mathcal{Z}(n\\times n\\times t)$. Using CP decomposition $\\mathcal{Z}$ is factorized\ninto three factor matrices. The link prediction score is computed by using \nthe factor matrices.\nWe adapted the CP link prediction method for unipartite networks; which has \noriginally been developed for bipartite networks.\n\n\n\n\n\\subsection{Implementation Details}\n\nWe implemented {\\sc DyLink2Vec}\\ algorithm in Matlab version $R2014b$. The learning method runs for a maximum of $100$ iterations\nor until it converges to a local optimal solution. We use coding size $l=100$\nfor all datasets\\footnote{We experiment with different coding sizes\nranging from 100 to 800. The change in link prediction performance is not\nsensitive to the coding size. 
At most 2.9\\% change in PRAUC was observed for\ndifferent coding sizes.}. For supervised link prediction step we use several\nMatlab provided classification algorithms, namely, AdaBoostM1, RobustBoost, and\nSupport Vector Machine (SVM). We could use neural network classifier. But, as our main goal is to evaluate the quality of unsupervised feature representation, so, we use simple classifiers. Supervised neural network architecture may result in superior performance, but, it is out of scope of the main goal of the paper. We use Matlab for computing the feature values (CN, AA, J, Katz) that we use in other competing methods. Time-series methods are implemented using Python. We use the ARIMA (autoregressive integrated\nmoving average) time series model implemented in Python module\n\\textbf{statsmodels}. The DeepWalk implementation is provided by the authors\nof~\\cite{Perozzi:2014}. We use it to extract node features and extend it for\nlink prediction (using Matlab). Tensor factorization based method CP was\nimplemented using Matlab Tensor Toolbox. \n\n\n\n\\subsection{Performance Comparison Results with Competing Methods}\nIn Figure \\ref{fig:CompareAll} we present the performance comparison results\nof {\\sc DyLink2Vec}\\ based link prediction method with the four kinds of competing methods that we have discussed\nearlier. The figure have eight bar charts. The bar charts from the top to the\nbottom rows display the results for Enron, Collaboration, Facebook1 and\nFacebook2 datasets, respectively. The bar charts in a row show comparison\nresults using PRAUC (left), and $NDCG_{50}$ (right) metrics. \nEach chart has twelve bars, each representing a link prediction method, where the height\nof a bar is indicative of the performance metric value of the corresponding\nmethod. In each chart, from left to right, the first five bars (blue) correspond to the\ntopological feature based methods, the next four (green) represent time series\nbased methods, the tenth bar (black) is for DeepWalk, the eleventh bar (brown)\nrepresents tensor factorization based method CP, and the final bar (purple)\nrepresents the proposed method {\\sc DyLink2Vec}.\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. Topological} \nWe first analyze the performance comparison between {\\sc DyLink2Vec}\\ based method and topological feature based methods\n(first five bars). The best of the topological feature based methods have a\nPRAUC value of 0.30, 0.22, 0.137 and 0.14 in Enron, Collaboration, Facebook1,\nand Facebook2 dataset (see Figures\n\\ref{fig:CompareAll}(a), \\ref{fig:CompareAll}(c), \\ref{fig:CompareAll}(e) and\n\\ref{fig:CompareAll}(g)), whereas the corresponding PRAUC values for {\\sc DyLink2Vec}\\ are\n0.531, 0.362, 0.308, and 0.27, which translates to 77\\%, 65\\%, 125\\%, and\n93\\% improvement of PRAUC by {\\sc DyLink2Vec}\\ for these datasets. Superiority of {\\sc DyLink2Vec}\\\nover all the topological feature based baseline methods can be attributed to\nthe capability of Neighborhood based feature representation to capture temporal\ncharacteristics of local neighborhood. Similar trend is observed using \n$NDCG_{50}$ metric, see Figures \\ref{fig:CompareAll}(b),\n\\ref{fig:CompareAll}(d), \\ref{fig:CompareAll}(f) and \\ref{fig:CompareAll}(h).\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. Time-Series} \nThe performance of time-series based\nmethod (four green bars) is generally better than the topological feature based\nmethods. 
The best of the time-series based method has a PRAUC value of 0.503,\n0.28, 0.19, and 0.19 on these datasets, and {\\sc DyLink2Vec}'s PRAUC values are better than\nthese values by 6\\%, 29\\%, 62\\%, and 42\\% respectively. Time-series\nbased methods, though model the temporal behavior well, probably fail to\ncapture signals from the neighborhood topology of the node-pairs. Superiority of\n{\\sc DyLink2Vec}\\ over Time-Series methods is also similarly indicated by information retrieval\nmetric $NDCG_{50}$.\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. DeepWalk} \nThe DeepWalk based method (black bars in Figure \\ref{fig:CompareAll}) performs much poorly\nin terms of both PRAUC and $NDCG_{50}$---even poorer than the topological based method in all four\ndatasets. Possible reason could be the following: the latent encoding of nodes\nby DeepWalk is good for node classification, but the cross-product of those\ncodes fails to encode the information needed for effective link prediction.\n\n\\subsubsection{{\\sc DyLink2Vec}\\ vs. CANDECOMP\/PARAFAC (CP)} \n\nFinally, the tensor factorization based method CP performs marginally better\n(around 5\\% in PRAUC, and 6\\% in $NDCG_{50}$) than {\\sc DyLink2Vec}\\ in small and simple\nnetworks, such as Enron (see Figure \\ref{fig:CompareAll}(a, b)). \nBut its\nperformance degrades on comparatively large and complex networks, such as Collaboration, Facebook1\nand Facebook2. On Facebook networks, the performance of CP is\neven worse than the time-series based methods (see Figures \\ref{fig:CompareAll}(e)\nand \\ref{fig:CompareAll}(g)). {\\sc DyLink2Vec}\\ comfortably outperforms CP on larger\ngraphs, see Figures \\ref{fig:CompareAll}(c, d, e, f, g, h). In terms of PRAUC,\n{\\sc DyLink2Vec}\\ outperforms CP by 28\\%, 94\\%, and 120\\% for Collaborative, Facebook1 and Facebook2 networks respectively.\nThis demonstrates the superiority of {\\sc DyLink2Vec}\\ over one of the best state-of-the-art\ndynamic link prediction. A reason for CP's bad\nperformance on large graphs can be its inability to capture network structure \nand dynamics using high-dimensional tensors representation. \n\n\\subsubsection{Performance across datasets}\nWhen we compare the performance of all the methods across different\ndatasets, we observe varying performance. For example, for both the metrics,\nthe performance of dynamic link prediction on Facebook graphs are lower than the\nperformance on Collaboration graph, which, subsequently, is lower than the\nperformance on Enron graph, indicating that link prediction in Facebook data is\na harder problem to solve. In these harder networks, {\\sc DyLink2Vec}\\ perform substantially\nbetter than all the other competing methods that we consider in this experiment.\n\n\\begin{figure}[H]\n\\centering\n\\subfloat[Collaboration Network]{%\n\\includegraphics[width=.5\\linewidth]{DBLP2TW.eps}\n}\n\\subfloat[Facebook1 Network]{%\n\\includegraphics[width=.5\\linewidth]{FaceBookTW.eps}\n}\n\\caption{Change in link prediction performance with number of time stamps. X-axis represents size of training window used\nfor link prediction. Largest possible window size depends on number of time\nstamps available for the dataset.\n}\n\\label{fig:TimeWindow}\n\\vspace{-0.3in}\n\\end{figure}\n\n\\subsection{Performance with varying length of Time Stamps}\n\nBesides comparing with competing methods, we also demonstrate the performance\nof {\\sc DyLink2Vec}\\ with varying number of available time snapshots. 
For this\npurpose, we use {\\sc DyLink2Vec}\\ with different counts of past snapshots. For example,\nCollaboration dataset has 10 time stamps. The task is to predict links at time\nstamp 10. The largest number of past snapshots we can consider for this data\nis 8, where $\\mathbf{\\widehat{E}}$ is constructed using time stamps $[1-8]$,\nand $\\mathbf{\\overline{E}}$ is constructed using time stamps $[2-9]$. The\nsmallest number of time stamps we can consider is 1, where\n$\\mathbf{\\widehat{E}}$ is constructed using $[8-8]$, and\n$\\mathbf{\\overline{E}}$ is constructed using $[9-9]$.\nIn this way, by varying the length of historical time stamps, \nwe can evaluate the effect of time stamp's length on the performance of a link prediction method.\n\n\nThe result is illustrated in Figure \\ref{fig:TimeWindow}. The x-axis represents\nthe number of time stamps used by {\\sc DyLink2Vec} , the left y-axis represents the\n$NDGC_{50}$ and the right y-axis represents the\nPRAUC. Figures \\ref{fig:TimeWindow}(a),\nand \\ref{fig:TimeWindow}(b) corresponds to the results obtained on Collaboration and Facebook1, respectively. \n\n\nWe observe from Figure \\ref{fig:TimeWindow} that the performance ($NDGC_{50}$\nand PRAUC) of link prediction increases with increased number of time stamps.\nBut beyond a given number of snapshots, the performance increment becomes\ninsignificant. The performance starts to deteriorate after certain number of snapshots (see Figure \\ref{fig:TimeWindow}(a)). This may be because of the added complexity of the optimization framework with increased number of time stamps. We also observe\nconsistent improvement of performance with the number of snapshots for the\nFacebook1 data (Figure \\ref{fig:TimeWindow}(b)),\nwhich indicates that for this dataset link information from distant\nhistory is useful for dynamic link prediction. We do not show results of Enron and Facebook2 for this experiment, \nbecause of space constraint, however, they show similar trends.\n\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=.8\\linewidth]{DBLP2.eps}\n\\caption{Effect of class imbalance in link prediction performance on Collaboration network.}\n\\label{fig:classImbalance}\n\\vspace{-0.3in}\n\\end{figure}\n\n\\subsection{Effect of Class Imbalance on Performance}\nIn link prediction problem, class imbalance is a prevalent issue. The class imbalance problem appears in a classification task, when the dataset contains imbalanced number of samples for different classes. In link prediction problem, the number of positive node-pairs (with an edge) is very small compared to the number of negative node-pairs (with no edge), causing class imbalance problem. \n\nTo demonstrate the effect of class imbalance in link prediction task, we perform link prediction using {\\sc DyLink2Vec}\\ embeddings with different level of class imbalance in the training dataset.\nWe construct the training dataset by taking all positive node-pairs and sampling from the set of negative node-pairs. \nFor a balanced dataset, the number of negative samples will be equal to the number of all positive node-pairs considered. Thus, the balanced training dataset has positive node-pairs to negative node-pairs ratio $1:1$. At this point, the only way to increase the size of the data is to increase the sample size for negative node-pairs. 
Consequently, the class ratio also shifts further towards the negative node-pairs.\nFigure \\ref{fig:classImbalance} shows a gradual decrease in link prediction performance on the Collaboration network as the imbalance in the dataset increases (see the ratios on the X-axis), despite the fact that the dataset gets larger by adding negative node-pairs. \n\nThis result supports the design choice of under-sampling \\cite{Lichtenwalter:2010} the negative node-pairs by sampling uniformly from all negative node-pairs, so that the training set has equal numbers of positive and negative node-pairs. Under-sampling helps\nto mitigate the problem of class imbalance while also reducing the size of the training dataset.\n\n\n\n\n\\subsection{Optimization framework for {\\sc DyLink2Vec}}\\label{subsec:UnsupervisedFeatureExtraction}\n\nIn this section, we discuss the optimization framework which obtains the\noptimal metric embedding of a node pair by learning an optimal coding function\n$h$. For this\nlearning task, let us assume that $\\mathbf{\\widehat{E}}$ is the training dataset\nmatrix containing a collection of node-pair feature vectors. Each row of this\nmatrix represents a node-pair (say, $u$ and $v$) and contains the feature\nvector $\\mathbf{e}^{uv}$, which stores information about the neighborhood and link\nhistory, as discussed earlier. The actual link status of the node-pairs in $\\mathbf{\\widehat{E}}$ \nin $G_{t+1}$ is not used for the learning of $h$, so the metric embedding process\nis unsupervised. In the subsequent discussion, we write $\\mathbf{e}$ to represent\nan arbitrary node-pair vector in $\\mathbf{\\widehat{E}}$.\n\nNow, the coding function $h$ compresses $\\mathbf{e}$ to a code vector $\\mathbf{\\alpha}$\nof dimension $l$, such that $l < k$. Here $l$ is a user-defined parameter which\nrepresents the code length and $k$ is the size of the feature vector. Many\ndifferent coding functions exist in the dimensionality reduction literature,\nbut for {\\sc DyLink2Vec}\\ we choose the coding function which incurs the minimum\nreconstruction error in the sense that from the code $\\mathbf{\\alpha}$ we can\nreconstruct $\\mathbf{e}$ with the minimum error over all $\\mathbf{e} \\in\n\\mathbf{\\widehat{E}}$. We frame the learning of $h$ as an optimization\nproblem, which we discuss below through two operations: Compression and\nReconstruction. \\\\\n\n\\noindent {\\bf Compression:} It obtains $\\mathbf{\\alpha}$ from $\\mathbf{e}$.\nThis transformation can be expressed as a nonlinear function of a linear weighted\nsum of the entries of the vector $\\mathbf{e}$.\n\\begin{equation}\n\\mathbf{\\alpha} = f(\\mathbf{W}^{(c)}\\mathbf{e} + \\mathbf{b}^{(c)})\n\\label{eq:Compression}\n\\end{equation}\n$\\mathbf{W}^{(c)}$ is an $(l \\times k)$ dimensional matrix. It represents the weight matrix for compression\nand $\\mathbf{b}^{(c)}$ represents the biases. $f(\\cdot)$ is the Sigmoid function, $f(x)=\\frac{1}{1+e^{-x}}$. \\\\\n\n\\noindent {\\bf Reconstruction:} It performs the reverse operation of compression, i.e., it \nobtains an approximation $\\mathbf{\\beta}$ of $\\mathbf{e}$ from $\\mathbf{\\alpha}$ (which was constructed during the compression \noperation).\n\\begin{equation}\n\\mathbf{\\beta} = f(\\mathbf{W}^{(r)}\\mathbf{\\alpha} + \\mathbf{b}^{(r)})\n\\label{eq:Reconstruction}\n\\end{equation}\n$\\mathbf{W}^{(r)}$ is a matrix of dimensions $(k \\times l)$ representing the weight matrix for reconstruction, \nand $\\mathbf{b}^{(r)}$ represents the biases.
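\n\nAs an illustration, the two operations can be prototyped in a few lines. The following sketch is a minimal NumPy illustration, not the Matlab implementation used in our experiments; the array names, the toy dimensions $k=200$ and $l=50$, and the random initialization are chosen only for exposition.\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 \/ (1.0 + np.exp(-x))\n\ndef compress(e, Wc, bc):\n    # alpha = f(W^(c) e + b^(c)); Wc has shape (l, k)\n    return sigmoid(Wc @ e + bc)\n\ndef reconstruct(alpha, Wr, br):\n    # beta = f(W^(r) alpha + b^(r)); Wr has shape (k, l)\n    return sigmoid(Wr @ alpha + br)\n\n# toy example: one k-dimensional node-pair vector, l-dimensional code\nk, l = 200, 50\nrng = np.random.default_rng(0)\ne = rng.random(k)                     # stands in for one row of E-hat\nWc, bc = 0.01 * rng.normal(size=(l, k)), np.zeros(l)\nWr, br = 0.01 * rng.normal(size=(k, l)), np.zeros(k)\n\nalpha = compress(e, Wc, bc)           # metric embedding of the node-pair\nbeta = reconstruct(alpha, Wr, br)     # approximate reconstruction of e\n\\end{verbatim}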
\n\nThe optimal coding function $h$ constituted by the compression and reconstruction operations is defined by the parameters $(\\mathbf{W},\\mathbf{b}) = (\\mathbf{W}^{(c)},\\mathbf{b}^{(c)},\\mathbf{W}^{(r)},\\mathbf{b}^{(r)})$.\nThe objective is to minimize the reconstruction error.\nThe reconstruction error for a neighborhood based feature vector $\\mathbf{e}$ is defined as\n$J(\\mathbf{W,b,e}) = \\frac{1}{2}\\parallel \\mathbf{\\beta}-\\mathbf{e} \\parallel^2$.\nOver all possible feature vectors, the average reconstruction error augmented with a regularization\nterm yields the final objective function $J(\\mathbf{W},\\mathbf{b})$:\n\\begin{equation}\n\\begin{split}\nJ(\\mathbf{W},\\mathbf{b}) = \\frac{1}{{|\\mathbf{\\widehat{E}}|}} \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}(\\frac{1}{2}\\parallel \\mathbf{\\beta}^{uv}-\\mathbf{e}^{uv} \\parallel^2)\n\\\\+\\frac{\\lambda}{2} (\\parallel\\mathbf{W}^{(c)}\\parallel_F^2 + \\parallel\\mathbf{W}^{(r)}\\parallel_F^2 )\n\\end{split}\n\\label{eq:lossfun2}\n\\end{equation}\n\nHere, $\\lambda$ is a user-assigned regularization parameter responsible for\npreventing over-fitting. $\\parallel\\cdot\\parallel_F$ represents the \nFrobenius norm of a matrix. In this work we use $\\lambda = 0.1$. \n\nWe now discuss the motivation behind our proposed optimization framework\nfor learning the coding function $h$. Note that\nthe dimensionality of $\\mathbf{\\alpha}$ is much smaller than that of $\\mathbf{e}$, so the\noptimal compression of the vector $\\mathbf{e}$ must extract patterns composed of\nthe entries of $\\mathbf{e}$ and use them as high-order latent features in\n$\\mathbf{\\alpha}$. In fact, the entries in $\\mathbf{e}$ contain the neighborhood (the sum\nof the adjacency vectors of the node pair) and the link history of a node-pair for all\nthe time stamps; for a real-life network, this vector is sparse and substantial\ncompression is possible while incurring only a small loss. Through this compression the\ncoding function $h$ learns the patterns that are similar across different node-pairs (used in $\\mathbf{\\widehat{E}}$).\nThus the function $h$ learns a metric embedding of the node-pairs that packs node-pairs\nhaving similar local structures in close proximity in the embedded feature space.\nAlthough the function $h$ acts as a black box, it captures patterns involving the neighborhood\naround a node pair across various time stamps, which obviates the manual construction \nof node-pair features---a cumbersome task in the case of a dynamic network. \n\n\n\\subsubsection{Optimization}\n\nThe training of the optimal coding defined by the parameters $(\\mathbf{W,b})$ begins with\na random initialization of the parameters. Since the cost function\n$J(\\mathbf{W},\\mathbf{b})$ defined in Equation \\eqref{eq:lossfun2} is non-convex in\nnature, we obtain a locally optimal solution using the gradient descent approach.\nSuch an approach usually provides practically useful results (as shown in Section \n\\ref{sec:experiemnts}).\nThe parameter updates of the gradient descent are similar to the parameter updates\nfor optimizing an auto-encoder in machine learning.
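\n\nFor concreteness, the objective of Equation \\eqref{eq:lossfun2} can be evaluated as in the schematic NumPy sketch below, which builds on the compression and reconstruction routines illustrated above (again, this is only an illustration and not the Matlab routines mentioned below; the value $\\lambda = 0.1$ is passed as the argument lam).\n\\begin{verbatim}\ndef objective(E_hat, Wc, bc, Wr, br, lam=0.1):\n    # E_hat: (n, k) matrix with one node-pair feature vector e per row\n    total = 0.0\n    for e in E_hat:\n        beta = reconstruct(compress(e, Wc, bc), Wr, br)\n        total += 0.5 * np.sum((beta - e) ** 2)\n    avg = total \/ E_hat.shape[0]          # average reconstruction error\n    reg = 0.5 * lam * (np.sum(Wc ** 2) + np.sum(Wr ** 2))\n    return avg + reg\n\\end{verbatim}\nMinimizing this quantity over $(\\mathbf{W},\\mathbf{b})$ is exactly the learning task described next.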
\nOne iteration of gradient descent updates the parameters using the following equations:\n\n\\begin{equation}\n\\begin{split}\nW_{ij}^{(c)} = W_{ij}^{(c)} - \\sigma \\frac{\\partial}{\\partial W_{ij}^{(c)}}J(W,b) \\\\\nW_{ij}^{(r)} = W_{ij}^{(r)} - \\sigma \\frac{\\partial}{\\partial W_{ij}^{(r)}}J(W,b) \\\\\nb_{i}^{(c)} = b_{i}^{(c)} - \\sigma \\frac{\\partial}{\\partial b_{i}^{(c)}}J(W,b) \\\\\nb_{i}^{(r)} = b_{i}^{(r)} - \\sigma \\frac{\\partial}{\\partial b_{i}^{(r)}}J(W,b) \n\\end{split}\n\\label{eq:updatefun1}\n\\end{equation}\n\nHere, the superscripts $(c)$ and $(r)$ identify the compression and reconstruction parameters, respectively, and $\\sigma$ is the learning rate. For example, $W_{ij}^{(c)}$ is the weight of the connection from entry $j$ of the input vector $\\mathbf{e}$ to entry $i$ of the code $\\mathbf{\\alpha}$. \n\nNow, from Equation \\eqref{eq:lossfun2}, the partial derivative terms in Equations \\eqref{eq:updatefun1} can be written as\n\\begin{equation}\n\\begin{split}\n\\frac{\\partial}{\\partial W_{ij}^{(c)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}| } \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial W_{ij}^{(c)}}J(\\mathbf{W,b,e})+\\lambda W_{ij}^{(c)} \\\\\n\\frac{\\partial}{\\partial W_{ij}^{(r)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}|} \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial W_{ij}^{(r)}}J(\\mathbf{W,b,e})+\\lambda W_{ij}^{(r)} \\\\\n\\frac{\\partial}{\\partial b_{i}^{(c)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}| } \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial b_{i}^{(c)}}J(\\mathbf{W,b,e}) \\\\\n\\frac{\\partial}{\\partial b_{i}^{(r)}}J(W,b) = \\frac{1}{|\\mathbf{\\widehat{E}}| } \\sum_{\\mathbf{e} \\in \\mathbf{\\widehat{E}}}\\frac{\\partial}{\\partial b_{i}^{(r)}}J(\\mathbf{W,b,e}) \\\\\n\\end{split}\n\\label{eq:derivative}\n\\end{equation}\n\n\nThe optimization problem is solved by computing the partial derivatives of the cost function $J(\\mathbf{W,b,e})$ using the back-propagation approach \\cite{Rumelhart:1986}. \n\nOnce the optimization is done,\nthe metric embedding of\nany node-pair ($u,v$) can be obtained by taking the output of the compression\nstage (Equation \\eqref{eq:Compression}) of the trained optimal coding\n($\\mathbf{W,b}$):\n\n\\begin{equation}\n\\begin{split}\n\\mathbf{\\alpha}^{uv} = f(\\mathbf{W}^{(c)}\\mathbf{e}^{uv} + \\mathbf{b}^{(c)})=h(\\mathbf{e}^{uv})\n\\end{split}\n\\label{eq:unsupervisedFeatures}\n\\end{equation}\n\n\\subsubsection{Complexity Analysis}\nWe use the Matlab implementation of the optimization algorithm L-BFGS (Limited-memory\nBroyden-Fletcher-Goldfarb-Shanno) for learning the optimal coding. We execute the\nalgorithm for a limited number of iterations to obtain unsupervised features\nwithin a reasonable period of time. Each iteration of L-BFGS executes two tasks\nfor each node-pair: back-propagation to compute the partial derivatives of the cost\nfunction, and updating the parameters $\\mathbf{(W,b)}$. \nTherefore, the time complexity\nof one iteration is $O(|NP_t|kl)$.
Here, $NP_t$ is the set on node-pairs used to construct the training dataset $\\mathbf{\\widehat{E}}$.\n$k$ is the length of $\\mathbf{e}$ (dimensionality of initial edge features), and $l$ is\nlength of $\\mathbf{\\alpha}$ (optimal coding).\n\n\n\\section{Link Prediction using proposed metric embedding}\\label{subsec:SupervisedLinkPredictionModel}\nFor link prediction task in a dynamic network, $\\mathbb{G} = \\{G_1,G_2, \\dots ,G_t\\}$; we split the snapshots into two overlapping time windows, $[1,t-1]$ and $[2,t]$.\nTraining dataset, $\\mathbf{\\widehat{E}}$ is feature representation for time\nsnapshots $[1,t-1]$, the ground truth ($\\widehat{\\mathbf{y}}$) is constructed from\n$G_t$. {\\sc DyLink2Vec}\\ learns optimal embedding $h(\\cdot)$ using training dataset $\\mathbf{\\widehat{E}}$.\nAfter training a supervised classification model using\n$\\widehat{\\mathbf{\\alpha}}$=$h(\\widehat{\\mathbf{E}})$ and $\\widehat{\\mathbf{y}}$, prediction\ndataset $\\mathbf{\\overline{E}}$ is used to predict links at $G_{t+1}$. For this\nsupervised prediction task, we experiment with several classification\nalgorithms. Among them SVM (support vector machine) and\nAdaBoost perform the best. \n\n\n\\begin{algorithm}[h]\n\t\\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\\begin{algorithmic}[1]\n\t\t\\Procedure{LP{\\sc DyLink2Vec}}{$\\mathbb{G},t$}\n\t\t\\Input{$\\mathbb{G}$: Dynamic Network, $t$: Time steps}\n \t\t\\Output{$\\overline{y}$: Forecasted links at time step $t+1$}\n \t\t\n\t\t\\State $\\widehat{\\mathbf{E}}$=NeighborhoodFeature($\\mathbb{G}$,$1$,$t-1$)\n\t\t\\State $\\widehat{\\mathbf{y}}$=Connectivity($G_t$)\n\t\t\\State $\\overline{\\mathbf{E}}$=NeighborhoodFeature($\\mathbb{G}$,$2$,$t$)\n\t\t\\State $h$=LearningOptimalCoding($\\widehat{\\mathbf{E}}$)\n\t\t\\State $\\widehat{\\mathbf{\\alpha}}$=$h(\\widehat{\\mathbf{E}})$\n\t\t\\State $\\overline{\\mathbf{\\alpha}}$=$h(\\overline{\\mathbf{E}})$\n\t\t\\State $C$=TrainClassifier($\\widehat{\\mathbf{\\alpha}},\\widehat{\\mathbf{y}}$)\n\t\t\\State $\\overline{\\mathbf{y}}$=LinkForecasting($C,\\overline{\\mathbf{\\alpha}}$)\n\t\t\\State \\textbf{return} $\\overline{\\mathbf{y}}$\n\t\t\\EndProcedure\n\t\t\\end{algorithmic}\n\t\t\\caption{Link Prediction using {\\sc DyLink2Vec}}\\label{alg:LPUFE}\n\\end{algorithm}\n\n\nThe pseudo-code of {\\sc DyLink2Vec}\\ based link prediction method is given in Algorithm \\ref{alg:LPUFE}. For training\nlink prediction model, we split the available network snapshots into two\noverlapping time windows, $[1,t-1]$ and $[2,t]$. Neighborhood based features\n$\\widehat{\\mathbf{E}}$ and $\\overline{\\mathbf{E}}$ are constructed in Lines 2 and 4,\nrespectively. Then we learn optimal coding for node-pairs using neighborhood\nfeatures $\\widehat{\\mathbf{E}}$ (in Line 5). Embeddings\nare constructed using learned optimal coding (Lines 6 and 7) using output of\ncompression stage (Equation \\ref{eq:unsupervisedFeatures}). Finally, a\nclassification model $C$ is learned (Line 8), which is used for predicting\nlinks in $G_{t+1}$ (Line 9).\n\n\n\\section{MLJ Contribution Information Sheet}\n{\\em 1. What is the main claim of the paper? Why is this an important contribution to the machine learning literature?}\n\n\\noindent \\textbf{Our Answer:} The temporal dynamics of a complex system such as a social network or a\ncommunication network can be studied by understanding the patterns of link\nappearance and disappearance over time. 
A critical task towards this\nunderstanding is to predict the link state of the network at a future time\ngiven a collection of link states at earlier time points. In the existing literature,\nthis task is known as link prediction in dynamic networks. In this work, we propose {\\sc DyLink2Vec}, a scalable method for finding the metric embedding of arbitrary node-pairs (both edges and non-edges) of a \ndynamic network with many temporal snapshots. This work differs from existing \nwork on representation learning on graphs in two ways:\nfirst, existing works find embeddings of vertices, whereas we consider embeddings\nof arbitrary node-pairs; second, our work considers a dynamic network, which has\nmany temporal snapshots, whereas existing works consider only a static network. Therefore, \nour embedding method is particularly suitable for link forecasting\nin dynamic networks, and thus would be an important contribution to the machine learning literature.\\\\\n\n\\noindent {\\em 2. What is the evidence you provide to support your claim? Be precise.}\n\n\\noindent \\textbf{Our Answer:} We present the following evidence: \n\\begin{itemize}\n\\item We validate the effectiveness of our representation learning method ({\\sc DyLink2Vec}) by utilizing it for link prediction on four real-life dynamic networks: {\\em Enron}, {\\em Collaboration}, and two versions of the {\\em Facebook} network.\n\\item We compare the quality of the embeddings generated by {\\sc DyLink2Vec}\\ (our algorithm) with multiple state-of-the-art methods for the dynamic link prediction task. Our comparison results show that the proposed method is superior to all the competing methods.\n\\end{itemize}\n\n\\noindent {\\em 3. What papers by other authors make the most closely related contributions, and how is your paper related to them?}\n\n\\noindent \\textbf{Our Answer:} A key challenge of link prediction in a dynamic network is to find a suitable feature representation of the node-pair instances which are used for training the prediction model. For the static link prediction problem, various topological metrics (common neighbors, Adamic-Adar, Jaccard's coefficient) are used as features, but they cannot be extended easily to the dynamic setting with multiple snapshots of the network. In this work, we propose a very general method for constructing an unsupervised feature representation in the \ndynamic setting. The following papers also provide methods for generating useful node-pair (edge) feature representations, and thus they are related to our work. \n\\begin{itemize}\n\\item Temporal Link prediction using matrix and tensor factorization by D.M. Dunlavy, T. G. Kolda, and E. Acar, in ACM Trans. Knowl. Discov. Data, 2011\n\n\\item Link prediction using time series of neighborhood-based node similarity scores by\nG{\\\"u}ne{\\c{s}}, {\\.I}smail and G{\\\"u}nd{\\\"u}z-{\\\"O}{\\u{g}}{\\\"u}d{\\\"u}c{\\\"u}, {\\c{S}}ule and {\\c{C}}ataltepe, Zehra, in\nData Mining and Knowledge Discovery, 2015\n\n\\item DeepWalk: Online Learning of Social Representations by\nPerozzi, Bryan and Al-Rfou, Rami and Skiena, Steven, in Proc.\nof SIGKDD, 2014. \n\n\\end{itemize}\n\n\\noindent {\\em 4. Have you published parts of your paper before, for instance in a conference? 
If so, give details of your previous paper(s) and a precise statement detailing how your paper provides a significant contribution beyond the previous paper(s).}\n\n\\noindent \\textbf{Our Answer:} No, we did not publish parts of our paper anywhere.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe steady decrease of estimates of metal abundances in\nthe solar atmosphere over the\nlast years (Holweger \\cite{hol01}, Lodders \\cite{lod03}, Asplund et al. \\cite{ags04})\nis a serious {\\rm challenge} not only {\\rm to} solar physics, but\nalso to dust modelling.\n This calls for new {\\rm dust} models able to produce the same extinction\nwith a smaller amount of solid material.\nA solution to the problem could be provided by\nan ``admixture of vacuum'' i.e. by the porosity of interstellar grains.\n\nGrain aggregates with {\\rm large} voids can {\\rm form} during\nthe growth of interstellar grains due to their coagulation\nin dense molecular cloud cores (Dorschner \\& Henning \\cite{dh95}).\n The internal structure of such composite grains can\nbe very complicated, but\ntheir optical properties are {\\rm often} calculated using the Mie theory\nfor homogeneous spheres with an average refractive index\nderived from effective medium theory\n(EMT; see, e.g., Mathis \\& Whiffen \\cite{mw89},\nJones \\cite{jones88}, Ossenkopf \\cite{oss91},\nMathis \\cite{m96}, Li \\& Greenberg \\cite{li:gre98}, Il'in \\& Krivova \\cite{ik00}).\n\n Another approach to calculate the optical properties of\nsuch aggregates is the application of complex, computationally time consuming\nmethods such as the discrete dipole approximation (DDA;\nsee, e.g., Wright \\cite{wr87}, Kozasa et al.~\\cite{kbm92}, \\cite{ko93},\nWolff et al.~{\\cite{wo94}}, Stognienko et al.~\\cite{st95},\nKimura \\& Mann \\cite{ki98}).\n\n Using the DDA, Voshchinnikov et al.~(\\cite{vih05}) examined\nthe ability of the EMT-Mie approach to treat porous\nparticles of different structure.\nThey show that the latter approach can give relatively\naccurate results only if the very porous particles have small\n(in comparison with the wavelength of incident radiation) ``Rayleigh''\ninclusions.\nOtherwise, the approach becomes inaccurate\nwhen the porosity exceeds $\\sim$0.5.\n At the same time, the optical properties of heterogeneous\nspherical particles having inclusions of various sizes\n(Rayleigh and non-Rayleigh) and very large porosity\nwere found to closely resemble those of\nspheres with a large number ($\\ga 15-20$) of different layers.\nThe errors in extinction efficiency factors are smaller than 10~--~20\\% if\nthe size parameter $\\la 15$ and porosity is equal to $0.9$.\nNote that this consideration was restricted by spheres,\nnot very absorbing materials (silicate and amorphous carbon)\nand the integral scattering characteristics\n(extinction, scattering, absorption efficiency factors, albedo\nand asymmetry parameter)\nbut not the differential cross sections or elements of the\nscattering matrix.\nNevertheless, very simple computational models instead of\ntime-consuming DDA calculations give us a useful way to treat\ncomposite grains of different structure.\n\nIn this paper, we apply the\nparticle models of porous interstellar dust grains\nbased on the EMT-Mie and layered-sphere calculations.\nThe models described\nin Sect.~\\ref{model} are assumed to represent composite\nparticles with small (Rayleigh) inclusions and inclusions\nof different sizes (Rayleigh and non-Rayleigh).\n The wavelength 
dependence of extinction is discussed in Sect.~\\ref{ext1}.\n Sections~\\ref{st_ext} and \\ref{ir_ext}\n contain the application of the models to\ncalculations of the extinction curves in the directions\nof two stars, using new solar abundances\nand the near-infrared (IR) extinction in the directions along the\nGalactic plane.\n The next sections deal with grain temperatures\n(Sect.~\\ref{temp}), profiles of IR silicate bands (Sect.~\\ref{ir_b}),\nand grain opacities at $\\lambda = 1$\\,~mm (Sect.~\\ref{opa}).\nThese quantities are especially important for the analysis of observations\nof protoplanetary discs (Henning et al. \\cite{heal05}).\nConcluding remarks are presented in Sect.~\\ref{concl}.\n\n\\section{Particle models}\\label{model}\n\nInformation about the structure of grains can be included in light\nscattering calculations directly (layered spheres) or can be\nused to find the optical constants (EMT-Mie model).\nWe consider models of both types.\n\nFollowing previous papers (Voshchinnikov \\& Mathis \\cite{vm99},\nVoshchinnikov et al. \\cite{vih05}),\nwe construct layered grains as particles consisting of many concentric\nspherical layers of various materials, each with a specified\nvolume fraction $V_i$ ($\\Sigma_i V_i \/V_{\\rm total} = 1)$.\nVacuum can be one of the materials, so a\ncomposite particle may have a central cavity or voids in the form\nof concentric layers.\nThe number of layers is taken to be 18 {\\rm since}\nVoshchinnikov et al.~(\\cite{vih05}) have shown that this was enough\nto preclude an influence of the order of materials on the results.\nFor a larger number of layers,\none can speak of the optical characteristics determined by\nthe volume fractions of different constituents only.\n\nIn the case of the EMT-Mie model, an average (effective) refractive index\nis calculated using the popular rule of Bruggeman\n(see, e.g., Ch\\'ylek et al. \\cite{cval00}, Kr\\\"ugel \\cite{kru03}).\nIn this case, the average dielectric permittivity\n$\\varepsilon_{\\rm eff}$$^($\\footnote{The dielectric permittivity\nis related to the refractive index as $\\varepsilon=m^2$.}$^)$\nis calculated from\n\\begin{equation}\n\\sum_i f_{i} \\frac{\\varepsilon_{i} - \\varepsilon_{\\rm eff}}{\\varepsilon_{i} + 2 \\varepsilon_{\\rm eff}} = 0,\n\\label{bru}\n\\end{equation}\nwhere $f_{i}=V_i \/V_{\\rm total}$ is the volume fraction\nof the $i$th material with the permittivity $\\varepsilon_{i}$.\n\nThe amount of vacuum in a particle can be characterized by its porosity\n${\\cal P}$ ($0 \\leq {\\cal P} < 1$)\n\\begin{equation}\n{\\cal P} = V_{\\rm vac} \/V_{\\rm total}\n= 1 - V_{\\rm solid} \/V_{\\rm total}. \\label{por}\n\\end{equation}\nTo compare the optical properties of porous and compact particles,\none should consider the porous particles of radius (or size parameter)\n\\begin{equation}\nr_{\\rm porous} = \\frac{r_{\\rm compact}}{(1-{\\cal P})^{1\/3}}\n= \\frac{r_{\\rm compact}}{(V_{\\rm solid} \/V_{\\rm total})^{1\/3}}. 
\\label{xpor}\n\\end{equation}\n\n As ``basic'' {\\rm constituents}, we choose\namorphous carbon (AC1; Rouleau \\& Martin \\cite{rm91})\nand astronomical silicate (astrosil; Laor \\& Draine \\cite{laordr93}).\n The refractive indices for {\\rm these materials and some others} considered\nin Sect.~\\ref{ab_ext} were taken from the Jena--Petersburg Database of\nOptical Constants (JPDOC) described by Henning et al.~(\\cite{heal99})\nand J\\\"ager et al.~(\\cite{jetal02}).\n\nThe application of the standard EMT is known to correspond to the case\nof particles having small randomly located inclusions of\ndifferent materials (Bohren \\& Huffman \\cite{bh83}).\n The optical properties of such particles have been well studied\n(see, e.g., Voshchinnikov \\cite{v02} and references therein).\n\n However, one needs the DDA or other computationally\ntime consuming techniques to treat particles with inclusions\nlarger than the wavelength of incident radiation.\nThe difference in the optical characteristics of particles with\nsmall and large inclusions has been discussed in\nprevious studies (e.g., Wolff et al.~\\cite{wo94}, \\cite{wo98}).\nThe fact that this difference drastically\ngrows with the porosity ${\\cal P}$ and becomes quite essential\nalready for $ {\\cal P} \\ga 0.5$ has been discovered\n{\\rm only recently} by Voshchinnikov et al.~(\\cite{vih05}).\n They also found that the scattering properties of\nparticles with inclusions of different sizes (including\nthose with sizes comparable to the wavelength) were\nvery close to those of layered\nparticles, having the same size and material volume fractions.\nA similar conclusion was reached for an ensemble of particles\nwhere each particle has inclusions of one (but different) size only.\nIn both cases of ``internal'' or ``external'' mixing, the model\nof layered spheres can be applied.\n\nThese results are of particular importance for applications including the\nastronomical ones where we deal with very porous particles.\nNote that instead of time consuming calculations with DDA-like\ncodes, one can use the exact solution to the light scattering\nproblem for layered spheres, which is as quick as Mie theory, to estimate\nthe properties of particles with Rayleigh\/non-Rayleigh inclusions.\n\nThus, the models of the homogeneous sphere (with EMT) and layered spheres,\nboth having fast implementations, allow one to probe the difference\nin optical properties caused by the different structure of\nscatterers.\n\n\\section{Interstellar extinction and interstellar abundances}\\label{ab_ext}\n\n\n\\subsection{Wavelength dependence of extinction}\\label{ext1}\n\n\n\nAs it is well known,\nthe wavelength dependence of interstellar extinction $A(\\lambda)$\nis determined by the wavelength dependence of the\nextinction efficiencies $Q_{\\rm ext}(\\lambda)$.\n This quantity is shown in Fig.~\\ref{ext_w} for particles of the same mass,\nbut different porosity.\nThe volume fractions of AC1 and astrosil are equal, i.e.\n$V_{\\rm AC1} \/V_{\\rm total} = V_{\\rm astrosil} \/V_{\\rm total} =\n1\/2 \\, (V_{\\rm solid} \/V_{\\rm total}) = 1\/2 \\, (1 - {\\cal P})$.\n The radius of compact grains is $r_{\\rm s,\\,compact}=0.1\\,\\mu{\\rm m}$.\n The dependence of $Q_{\\rm ext}$ on $\\lambda$ for compact particles\nis close to that of the average interstellar extinction curve in the\nvisible--near UV ($1 \\,\\mu{\\rm m}^{-1} \\leq\\lambda^{-1} \\leq 3 \\,\\mu{\\rm m}^{-1}$)\n{\\rm where it} can be approximated by the power law\n$A(\\lambda) \\propto \\lambda^{-1.33}$\n(see 
discussion in Voshchinnikov~\\cite{v02}).\n From Fig.~\\ref{ext_w} one can conclude that the extinction\nproduced by particles with small inclusions and layers\n differs little for compact\nand slightly porous particles. The difference becomes {\\rm most} pronounced in\nthe near and far--UV (for $\\lambda^{-1} \\ga 2.5 - 3 \\,\\mu{\\rm m}^{-1}$).\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f1.eps}}\n\\caption{Wavelength dependence of the extinction efficiency {\\rm factor}\nfor spherical particles with $r_{\\rm s,\\,compact}=0.1\\,\\mu{\\rm m}$.\nThe particles are of the same mass but of different porosity.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{ext_w}\\end{center}\n\\end{figure}\n\nAs follows from Fig.~\\ref{ext_w}, the wavelength\ndependence of extinction flattens as porosity increases.\nIt is well known (see, e.g., Greenberg \\cite{g78}) that\n{\\rm different particles produce comparable extinction if the products of their\nsize $r$ and refractive index are close, i.e.}\n\\begin{equation}\n r \\, |m-1| \\approx {\\rm const.}\n\\label{mr}\\end{equation}\n The average refractive index of particles with a larger fraction of vacuum\nis closer to 1.\nDespite a larger radius (e.g., from Eq.~(\\ref{xpor}) follows that\n$r_{\\rm s}=0.22 \\,\\mu{\\rm m}$ if ${\\cal P}=0.9$ and $r_{\\rm s,\\,compact}=0.1 \\,\\mu{\\rm m}$),\nthe product given by Eq.~(\\ref{mr}) decreases because of {\\rm an}\ndrop of $|m-1|$. This implies that a steeper extinction\nwith the wavelength dependence closer to $\\propto \\lambda^{-1.33}$\nis produced by for compact particles with larger radii and,\nconsequently, a larger amount of solid material.\nThis behaviour is {\\rm also observed for} particles of other masses (i.e.,\ncompact spheres of other radii). Therefore, an interpretation\nof the observed interstellar extinction curve\nusing only very porous grains should\nnot give any gain in dust-phase abundances and would contradict\nthe wavelength behaviour.\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f2.eps}}\n\\caption{\nThe same as in Fig.~\\ref{ext_w} but now for normalized extinction cross section.\n}\\label{cn_w}\\end{center}\n\\end{figure}\nThe role of porosity in extinction is better seen from\nFig.~\\ref{cn_w} where\n{\\rm we give} the wavelength dependence of the normalized cross section\n\\begin{eqnarray}\nC_{\\rm ext}^{\\rm (n)} = \\frac{C_{\\rm ext}({\\rm porous \\, grain})}\n{C_{\\rm ext}({\\rm compact \\, grain \\, of \\, same \\, mass})} = \\nonumber \\\\\n\\,\\,\\,\\,\\,\\,\\, (1-{\\cal P})^{-2\/3}\\, \\frac{Q_{\\rm ext}({\\rm porous \\, grain})}\n{Q_{\\rm ext}({\\rm compact \\, grain \\, of \\, same \\, mass})}. \\label{cn}\n\\end{eqnarray}\nThis quantity shows how porosity influences the extinction cross section.\nAs follows from Fig.~\\ref{cn_w}, both models predict a growth of\nextinction of porous particles in the far-UV and a decrease in the\nvisual--near-UV part with growing ${\\cal P}$. 
However, the wavelength interval\nwhere $C_{\\rm ext}^{\\rm (n)} <1$ is narrower and the minimum is\nless deep in the case of layered spheres.\nIn comparison with compact grains and particles with Rayleigh inclusions,\nparticles with Rayleigh\/non-Rayleigh inclusions can also produce rather\nlarge extinction in the near-IR part of spectrum.\nThis is especially\nimportant for the explanation of the flat extinction across the $3 - 8\\,\\mu{\\rm m}$\nwavelength range measured for several lines of sight\n(see Sect.~\\ref{ir_ext}).\nAt the same time, as follows from Fig.~\\ref{pp09},\nfor particles with $r_{\\rm s,\\,compact}>0.1 \\,\\mu{\\rm m}$\nthe normalized UV cross sections grow faster for the\nBruggeman--Mie theory than for layered spheres.\nThus, an addition of vacuum into particles does not mean an automatic\ngrowth of extinction at all wavelengths and a saving in terms\nof solid-phase elements.\nEvidently, the final decision can be made after fitting the theoretical\ncalculations with observations at many wavelengths.\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f3.eps}}\n\\caption{Wavelength dependence of the normalized extinction cross section\nfor spherical particles with the same porosity ${\\cal P}=0.9$.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{pp09}\\end{center}\n\\end{figure}\n\n\\subsection{Extinction in the directions\nto the stars $\\zeta$ Oph and $\\sigma$ Sco}\\label{st_ext}\n\nThe basic requirement for any model of interstellar dust is\nthe explanation of the observed extinction law along with\nthe dust-phase abundances of elements in the interstellar medium.\nThese abundances are obtained as the difference between the\ncosmic reference abundances and the observed gas-phase abundances.\nHowever, the cosmic abundances are not yet\nconclusively established and usually this causes a problem.\nFor many years, the solar abundances were used as the reference ones,\nuntil the photospheres of the early-type stars were found\nnot to be as rich in heavy elements as the solar\nphotosphere was (Snow \\& Witt \\cite{sw96}).\nThese stellar abundances caused the so-called ``carbon crisis''.\nAbundances of the most important dust-forming elements (C, O, Mg, Si, Fe)\nrequired by the dust models were larger than available.\nHowever, during the past several years the solar abundances\ndropped and now they approach the stellar ones\n(see Asplund et al. \\cite{ags04}).\nEvidently, some abundances determined by Sofia \\& Meyer (\\cite{sm01})\nfor F and G stars must be revised downward\nas it has been done recently\nfor the Sun. This should lead to the agreement between abundances found\nfor stars of different types.\nNote also that the current solar abundances of oxygen and iron\n(see Table~\\ref{ida}) are close to those found from\nhigh-resolution \\mbox{X-ray} spectroscopy.\nJuett et al.~(\\cite{jsc03}) investigated the oxygen\nK-shell interstellar absorption edge in seven X-ray binaries and evaluated\nthe total O abundances. 
These abundances lie between\n467~ppm$^{(}$\\footnote{parts per million hydrogen atoms}$^{)}$ and 492~ppm.\nSchulz et al.~(\\cite{schet02}) evaluated the\ntotal abundance of iron towards the object Cyg~X-1 to be\n$[{\\rm Fe}\/{\\rm H}]_{\\rm cosmic} \\approx 25$~ppm.\n\nWe applied the model of multi-layered porous particles\nto explain the absolute extinction in the direction to the two stars.\nA first estimate has been made in order to find a possibility\nto enlarge the extinction per unit mass and to minimize the amount\nof solid phase material.\nSeveral materials as components of composite grains were considered.\nAmong the carbonaceous species,\nthe amorphous carbon in the form of Be1 (Rouleau \\& Martin \\cite{rm91})\nwas found to produce the largest extinction. Also the extinction of iron\noxides strongly increases with the growth of porosity.\nAlthough there are no very good constraints on the abundance of oxides,\nFeO is considered as a possible carrier of the 21~$\\mu{\\rm m}$ emission\nobserved in the spectra of post-AGB stars (Posch et al. \\cite{pma04}).\nVery likely, such particles particles can be produced in redox\nreactions (Duley \\cite{dul80}, Jones \\cite{jones90}).\n\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f4.eps}}\n\\caption\n{Observed and calculated extinction in the direction to\n$\\zeta$ Oph.\nThe errors of the observations are the result of a parameterization\nof the observations (see Fitzpatrick \\& Massa \\cite{fm90}).\nThe contribution to the theoretical extinction from different components\nis also shown.\n{\\rm The dot-dashed curve is the approximation with the observed value\n$R_{\\rm V}$ as suggested by Cardelli et al. (\\cite{ccm89}).}\n}\n\\label{zeta}\\end{center}\\end{figure}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f5.eps}}\n\\caption\n{The same as in Fig.~\\ref{zeta} but now for $\\sigma$ Sco.\nThe observational data were taken from Wegner~(\\cite{ww02}).\n}\\label{sigma}\\end{center}\\end{figure}\n\nWe fitted the observed extinction toward the stars\n$\\zeta$~Oph and $\\sigma$ Sco.\nFor these stars there exist well determined extinction curves and\ngas-phase abundances. It is also important that the major\npart of extinction in these directions is produced in one\ndiffuse interstellar cloud (Savage \\& Sembach \\cite{ss96},\nZubko et al. \\cite{zkw96}). This allows us to exclude possible large\nvariations in dust composition along the line of sight.\n\nObserved and calculated extinction curves are plotted in\nFigs.~\\ref{zeta} ($\\zeta$~Oph) and \\ref{sigma} ($\\sigma$ Sco).\nAs follows from the previous Section,\nthe use of only porous or only compact grains apparently\ndoes not result significant benefit in the solid-phase abundances.\nTherefore, our models are the combination of compact and\nporous particles. They consist of three or four grain populations:\n\n(I). Porous composite (multi-layered) particles\n(Be1 --- 5\\%,\npyroxene, Fe$_{0.5}$Mg$_{0.5}$SiO$_3$ --- 5\\% for $\\zeta$ Oph or\nforsterite, Mg$_2$SiO$_4$ --- 5\\% for $\\sigma$ Sco and\nvacuum --- 90\\%)\nwith the power-law size distribution having an exponential decay\n$n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-2.5}\\exp (-10\/r_{\\rm s})$.\nThe lower\/upper cut-off in the size distribution is\n$0.015\\,\\mu{\\rm m}$\/$0.25\\,\\mu{\\rm m}$ and $0.05\\,\\mu{\\rm m}$\/$0.50\\,\\mu{\\rm m}$\nfor $\\zeta$~Oph and $\\sigma$ Sco, respectively.\n\n(II). 
Small compact graphite\\footnote{\nThe calculations for graphite were made in the ``2\/3--1\/3'' approximation:\n$Q_{\\rm ext} = {2}\/{3}\\, Q_{\\rm ext}(\\varepsilon_{\\bot}) +\n {1}\/{3}\\, Q_{\\rm ext}(\\varepsilon_{||})$,\nwhere $\\varepsilon_{\\bot}$ and $\\varepsilon_{||}$ are the dielectric\nfunctions for two cases of orientation of the electric field relative\nto the basal plane of graphite.} grains\nwith a narrow power-law size distribution\n($n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-2.5}$,\n$r_{\\rm s}=0.01 - 0.02\\,\\mu{\\rm m}$).\n\n(III). Porous composite grains of magnetite\n(Fe$_3$O$_4$ --- 2\\%, vacuum --- 98\\% for $\\zeta$ Oph and\nFe$_3$O$_4$ --- 8\\%, vacuum --- 92\\% for $\\sigma$ Sco)\nwith a power-law size distribution\n$n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-2.5}$.\nThe lower\/upper cut-off in the size distribution is\n$0.005\\,\\mu{\\rm m}$\/$0.25\\,\\mu{\\rm m}$ and $0.05\\,\\mu{\\rm m}$\/$0.35\\,\\mu{\\rm m}$\nfor $\\zeta$~Oph and $\\sigma$ Sco, respectively.\n\n(IV). Compact grains of forsterite (Mg$_2$SiO$_4$)\nwith the power-law size distribution (only for $\\zeta$ Oph,\n$n_{\\rm d}(r_{\\rm s})\\propto r_{\\rm s}^{-3.5}$,\n$r_{\\rm s,\\,min}=0.10\\,\\mu{\\rm m}$, $r_{\\rm s,\\,max}=0.25\\,\\mu{\\rm m}$). \\\\\n\n\\begin{table*}[htb]\n\\begin{center}\n\\caption[]{Contribution of different grain populations\nto $A_{\\rm V}$ and dust-phase abundances\nfor the model of $\\zeta$ Oph (in ppm)}\\label{da-oph}\n\\begin{tabular}{llccccc}\n\\hline\n\\noalign{\\smallskip}\nComponent & $A_{\\rm V}$ & C & O & Mg& Si & Fe \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n(I) Be1\/pyroxene\/vacuum & 0\\fm58 & 123 & 68 & 11.3 & 22.6 & 11.3 \\\\\n(II) Graphite & 0\\fm075& ~96 & & & & \\\\\n(III) Magnetite\/vacuum & 0\\fm22& & 33 & & & 24.8 \\\\\n(IV) Forsterite & 0\\fm065& & 23 & 11.4 & 5.7 & \\\\\n\\noalign{\\smallskip}\\hline\n\\noalign{\\smallskip}\nTotal & 0\\fm94 & 219 & 124 & 22.7 & 28.2 & 36.1\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\end{table*}\n\nFigures~\\ref{zeta} and \\ref{sigma} also contain\nthe extinction curves calculated using the approximation\nsuggested by Cardelli et al. (\\cite{ccm89}) with\nthe coefficients revised by O'Donnell (\\cite{odonn94}).\nCardelli et al. (\\cite{ccm89}) found that\nthe extinction curves from the UV through the IR\ncould be characterized as a one-parameter family dependent\non the ratio of the total extinction to the selective one\n$R_{\\rm V}=A_{\\rm V}\/E({\\rm B-V})$.\nWe used the observed values of $R_{\\rm V}$ in order\nto plot the CCM approximation. 
It is seen that\nthis relation describes quite well the extinction for\n$\\zeta$~Oph but not for $\\sigma$ Sco.\n\n\\begin{table*}[htb]\n\\begin{center}\n\\caption[]{Contribution of different grain populations\nto $A_{\\rm V}$ and dust-phase abundances\nfor the model of $\\sigma$ Sco (in ppm)}\\label{da-sco}\n\\begin{tabular}{llccccc}\n\\hline\n\\noalign{\\smallskip}\nComponent & $A_{\\rm V}$ & C & O & Mg& Si & Fe \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n(I) Be1\/forsterite\/vacuum & 0\\fm50 & 58 & 35.4 & 17.7 & 8.8 & \\\\\n(II) Graphite & 0\\fm11& 79 & & & & \\\\\n(III) Magnetite\/vacuum & 0\\fm52& & 35.4 & & & 26.6 \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\nTotal & 1\\fm13 & 137 & 71 & 17.7 & 8.8 & 26.6\\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\end{table*}\n\nThe contributions from different components to the calculated extinction\nare given in Tables~\\ref{da-oph} and \\ref{da-sco}\nand shown in Figs.~\\ref{zeta} and \\ref{sigma}.\nThe Tables contain also the dust-phase abundances\nof five dust-forming elements for several grain populations.\nThey were calculated for ratios of the extinction cross-section\nto particle volume averaged over grain size distribution\n(see Eq.~(3.36) in Voshchinnikov~\\cite{v02}).\n\nTable~\\ref{ida} gives the current solar abundances\nof five dust-forming elements according to Asplund et al.~(\\cite{ags04})\nas well as the ``observed''\n(solar minus gaseous) and model abundances.\n\nThe dust-phase abundances in the line of sight to the star\n$\\zeta$~Oph (HD~149757) were taken from\nTable~2 of Snow \\& Witt~(\\cite{sw96}).\nIn our calculations, we adopted\nthe following quantities\nfor $\\zeta$~Oph: a total extinction\n$A_{\\rm V}=0\\fm94^($\\footnote{This value was obtained from the\nrelation $A_{\\rm V}= 1.12 \\, E({\\rm V-K})$ (Voshchinnikov \\& Il'in \\cite{vi87})\nand a colour excess $E({\\rm V-K})=0\\fm84$ (Serkowski et al. \\cite{smf75}).}$^)$,\n colour excess $E({\\rm B}-{\\rm V})=0\\fm32$ and\ntotal hydrogen column density $N({\\rm H})=1.35\\,10^{21}\\,{\\rm cm}^{-2}$\n(Savage \\& Sembach, \\cite{ss96}).\nThe extinction curve was reproduced according to the parameterization\nof Fitzpatrick \\& Massa~(\\cite{fm90}).\n\nFor $\\sigma$ Sco (HD~147165), we used the extinction curve,\nthe colour excess $E({\\rm B}-{\\rm V})=0\\fm35$ and\nthe total extinction $A_{\\rm V}=1\\fm13$ according to Wegner~(\\cite{ww02}).\nThe hydrogen column density\n$N({\\rm H})=2.46 \\, 10^{21}\\, {\\rm cm}^{-2}$ was adopted from\nZubko et al.~(\\cite{zkw96}). The gas-phase abundances were taken from\nAllen et al.~(\\cite{asj90}).\n\nThe dust-phase abundances required by the model are larger\nthan the observed ones in the direction to $\\zeta$~Oph\n(for C and Fe) and\nsmaller than the observed abundances in the direction to $\\sigma$ Sco.\nNote that for $\\sigma$ Sco the required amount of C and Si in dust\ngrains is the lowest in comparison with previous\nmodelling. This is due to the use of highly porous particles\nwhich give considerable extinction in the UV and near-IR (see\nFigs.~\\ref{cn_w} and \\ref{pp09}) and allow one to ``save'' material.\nFor example, the extinction model of $\\sigma$~Sco with compact grains\npresented by Zubko et al.~(\\cite{zkw96})\nrequires 240~--~260 ppm of C and 20~--~30 ppm of Si\nand the model of Clayton et al.~(\\cite{cletal03})\nneeds 155 ppm of C and 36 ppm of Si\n(cf. 
137 ppm and 8.8 ppm from Table~\\ref{ida}).\n\nThe models presented above are based on the light scattering\ncalculations for particles with Rayleigh\/non-Rayleigh inclusions\n(layered spheres). It is evident that the observed extinction\ncan be also reproduced if we use the particles with Rayleigh inclusions\n(i.e., if we apply the EMT-Mie theory). Our estimates show that\ndespite a larger extinction in the UV\nthis model requires more material in the solid phase\nin comparison with layered spheres because of\na smaller extinction in the visual-near IR part of the spectrum.\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption[]{Observed and model dust-phase abundances (in ppm)}\\label{ida}\n\\begin{tabular}{cccccc}\n\\hline\n\\noalign{\\smallskip}\nElement & Solar$^\\ast$ & \\multicolumn{2}{c} {$\\zeta$ Oph} &\\multicolumn{2}{c} {$\\sigma$ Sco}\\\\\n & abundance& obs & model & obs & model \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n~C& 245 & 110 & 219 & 176 & 137 \\\\\n~O& 457 & 126 & 124 & ~85 & ~71 \\\\\nMg$^{\\ast\\ast}$& ~33.9 & ~31.9& ~22.7& ~30.9 & ~~~17.7 \\\\\nSi& ~34.2 & ~32.6& ~28.2& ~~32.4 & ~~~8.8 \\\\\nFe& ~28.2 & ~28.2& ~36.1& ~~~27.9 & ~~~26.6 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\noindent $^\\ast$ According to Asplund et al.~(\\cite{ags04}). \\\\\n\\noindent $^{\\ast\\ast}$ The abundance of Mg was recalculated with the oscillator\nstrengths from Fitzpatrick~(\\cite{fp97}).\n\n\\end{table}\n\n\n\\subsection{Near infrared extinction in the Galactic plane}\\label{ir_ext}\n\nWe now consider the possibility of explaining the flat extinction\nacross the $3 - 8\\,\\mu{\\rm m}$ wavelength range observed for\nseveral lines of sight. This flattening was first measured by\nLutz et al.~(\\cite{luetal96}) toward the Galactic center\nwith {\\it ISO}, using hydrogen recombination lines.\nLater Lutz~(\\cite{lutz99}) confirmed the effect using more\nrecombination lines. Recently, Indebetouw et al.~(\\cite{inetal05})\nfound a similar flat extinction along two lines of sight:\n$l=42\\degr$ and $l=284\\degr$. The extinction was obtained at seven\nwavelengths ($1.2 - 8\\,\\mu{\\rm m}$) by combining images from the {\\it Spitzer\nSpace Telescope} with the 2MASS point-source catalog.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f6.eps}}\n\\caption\n{Observed and calculated extinction in the near-IR part\nof spectrum. The observations correspond to the average extinction\nfor two lines of sight along the Galactic plane\n(Indebetouw et al.~\\cite{inetal05})\ntransformed into magnitudes of extinction per kpc.\nThe theoretical extinction was calculated for component~(I)\nof the model used for $\\zeta$ Oph (${\\cal P}=0.9$, short dashed curve).\nTwo other curves correspond to the same component but with\nanother particle porosity.\n{\\rm The dot-dashed curve is the approximation of Cardelli et al. 
(\\cite{ccm89})\nwith $R_{\\rm V}=3.1$.}\n}\n\\label{gc}\\end{center}\\end{figure}\nWe used the average extinction given in Table~1 of\nIndebetouw et al.~(\\cite{inetal05}) and transformed\nit into magnitudes of extinction per kpc using\nthe measured value $A_{\\rm K}\/D = 0\\fm15 \\pm 0\\fm1\\,{\\rm kpc}^{-1}$\n(Indebetouw et al.~\\cite{inetal05}).\nThe observations are plotted in Fig.~\\ref{gc} together with\nthree theoretical curves.\nBecause we have little information about the UV-visual extinction and\ngas-phase abundances in these directions, we (rather arbitrarily)\napplied the model used for\n$\\zeta$ Oph (porous component (I): Be1\/pyroxene, porosity 90\\%; see\nSect.~\\ref{st_ext}). This model (short dashed curve on Fig.~\\ref{gc})\nwell explains the flat extinction at $\\lambda > 3\\,\\mu{\\rm m}$$^($\\footnote{Note\nthat porous particles from magnetite (component (III); see\nSect.~\\ref{st_ext}) cannot fit well the extinction at these wavelengths\nbecause of a bump at $\\lambda \\approx 2\\,\\mu{\\rm m}$.}$^)$\nbut the extinction in the J and H bands is too small.\nCompact particles (long-dashed curve in Fig.~\\ref{gc}) produce\neven larger extinction at these bands than the observed one.\nHowever, the extinction from such particles at longer wavelengths\ndecreases rapidly. Our preliminary analysis shows that\nparticles with a porosity of\nabout 0.6 (solid curve on Fig.~\\ref{gc})\ncan be chosen as an appropriate model.\nEvidently, a similar curve can be obtained as a combination\nof compact and very porous particles.\nQuite close extinction can be found with\nthe CCM approximation and the standard value $R_{\\rm V}=3.1$.\nWe arbitrarily extrapolated this approximation to long wavelengths\n($\\lambda^{-1} < 0.3 \\,\\mu{\\rm m}^{-1}$) where it gives too small extinction.\n\n Extinction produced by porous grains was also rather flat\nbetween 1.0 and 2.2~$\\mu{\\rm m}$ (for example,\n$A(\\lambda) \\propto \\lambda^{-1.3}$ for ${\\cal P}=0.6$) as was detected\nfor several ultracompact HII regions with $A_{\\rm V} \\ga 15^{\\rm m}$\n(Moore et al. \\cite{metal05}).\n\nIn order to calculate the dust-phase abundances\nfor models presented in Fig.~\\ref{gc}\nwe estimated the total hydrogen column density\nusing Eqs.~(3.26), (3.27) and (3.22) from Voshchinnikov~(\\cite{v02}).\nFirst, we found the column density of\natomic hydrogen $N({\\rm HI})$ from the extinction at J band and then\ntransformed $N({\\rm HI})$ into a total hydrogen column density\n$N({\\rm H})$ using the ratio of total to selective extinction\n$R_{\\rm V}=3.1$. A value of\n$N({\\rm H})\/D=2.79 \\, 10^{21}\\, {\\rm cm}^{-2}{\\rm kpc}^{-1}$\nwas obtained.\nThe calculated abundances are given in Table~\\ref{igc}, which also\ncontains the visual extinction calculated for three\nmodels. 
Note that the model of grains with porosity ${\\cal P}=0.6$\ngives the largest contribution to $A_{\\rm V}$ in comparison with\nthe two other models.\n\n\\begin{table}[htb]\n\\begin{center}\n\\caption[]{Dust-phase abundances (in ppm)\nand visual extinction for three models presented\nin Fig.~\\ref{gc}.}\\label{igc}\n\\begin{tabular}{cccc}\n\\hline\n\\noalign{\\smallskip}\nElement & ${\\cal P} = 0$ & ${\\cal P} = 0.6$ & ${\\cal P} = 0.9$\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n~C& 96 & 84 & 62 \\\\\n~O& 53 & 46 & 34 \\\\\nMg& ~8.9& ~7.7 & ~5.7 \\\\\nSi& 18 & 16 & 11 \\\\\nFe& ~8.9 & ~7.7 & ~5.7 \\\\\n\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n$A_{\\rm V}$ & 0\\fm60 & 0\\fm79 & 0\\fm61 \\\\\n\\noalign{\\smallskip}\\hline\n\\end{tabular}\\end{center}\n\\end{table}\n\nThere are useful observational data\nfor foreground stars in the field $l=284\\degr$.\nFor the stars HD~90273, HD~93205 and HD~93222,\nWegner~(\\cite{ww02}) and Barbaro et al.~(\\cite{betal04})\nestimate values of $R_{\\rm V}$ which lie between 3.4 and 4.0.\nFor HD~90273 Barbaro et al.~(\\cite{betal04}) also find\nan anomalously high gas to dust ratio\n$N({\\rm H})\/E({\\rm B}-{\\rm V})=\n1.04\\,10^{22}\\,{\\rm atoms}\\,{\\rm cm}^{-2}\\,{\\rm mag.}^{-1}$.\nAn enlargement of $R_{\\rm V}$ increases the required dust-phase\nabundances, while a decrease of the gas to dust ratio reduces them.\nAndr\\'e et al.~(\\cite{and03}) measured the interstellar gas-phase\noxygen abundances along the sight lines toward 5 early-type stars\nwith $l=285\\fdg3-287\\fdg7$ and $b=-5\\fdg5~-~+0\\fdg1$.\nThe values of $[{\\rm O}\/{\\rm H}]_{\\rm g}$ vary from\n356~ppm to 512~ppm, the average value being 443~ppm.\nThis gives a mean dust-phase\nabundance of $[{\\rm O}\/{\\rm H}]_{\\rm d} = 14$~ppm.\nHowever, the extinction curves for HD~93205 and HD~93222\npublished by Wegner~(\\cite{ww02}) have strong UV bumps and flat\nextinction in the far-UV. This means that the extinction can be mainly produced\nby carbonaceous grains.\nEvidently, the most reasonable way to solve the problem of\nabundances is a re-examination of the reference cosmic\nabundances and a detailed study of their local values.\n\n\\section{Infrared radiation}\\label{irr}\n\\subsection{Dust temperature}\\label{temp}\n\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f7.eps}}\n\\caption{Size dependence of the temperature\nfor spherical particles.\nThe particles are located at a distance of\n10$^4\\,R_\\star$ from a star with an effective temperature\n$T_\\star=2500$\\,K.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{td}\\end{center}\n\\end{figure}\nThe commonly used equilibrium temperature\nof cosmic grains is derived from a balance {\\rm between} the energy gain\ndue to absorption of the UV and visual stellar photons\nand the energy loss due to re-emission of IR photons.\nThe temperature of porous and compact particles\nof different size and porosity is shown in\nFigs.~\\ref{td} and \\ref{td-ppp} as a function of particle\nsize and porosity, respectively.\nThe results were calculated for particles located at a distance of\n10$^4\\,R_\\star$ from a star with an effective temperature\n$T_\\star=2500$\\,K. In the case of the layered spheres,\nan increase of the vacuum fraction\ncauses a decrease of the grain temperature if the amount of the\nsolid material is kept constant.
This behaviour holds for particles of all\nsizes as well as for particles located closer to the star or farther away\nand for other values of $T_\\star$. If the EMT-Mie theory is applied,\nthe temperature drops when the porosity grows up to $\\sim 0.7$ and then\nstarts to increase (see Fig.~\\ref{td}, upper panel and Fig.~\\ref{td-ppp}).\nSuch a behaviour corresponds to the results\nof Greenberg \\& Hage~(\\cite{grha91}) who found an increase of\ntemperature for large grain porosity (see Fig.~4 in their paper).\n\n\n\\begin{figure}[htb]\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f8.eps}}\n\\caption{Dependence of the dust temperature on particle porosity.\nSolid lines: calculations based on the EMT-Mie theory.\ndashed lines: calculations based on the layered-sphere theory.\nOther parameters are the same as in Fig.~\\ref{td}.\n}\\label{td-ppp}\\end{center}\\end{figure}\nAs it follows from Figs.~\\ref{td} and \\ref{td-ppp},\nthe difference in the temperature of very porous grains\ncalculated using the two models can reach $\\sim 6$\\,K or $\\sim 15$\\%\nwhile the temperature of compact (${\\cal P} = 0$)\ncomposite grains differs by less than 1\\%.\nNote that the relative difference in temperatures of $\\sim 15$\\%\nbetween particles of the two types is kept for other stellar temperatures\n(e.g., in the case of the Sun or an interstellar radiation field).\n\n\nThe intermediate porosity of grains leads to a shift of the\npeak position of IR emission to larger wavelengths\nin comparison with compact particles. This occurs independently of\nparticle structure (small or various size inclusions).\nHowever, very porous grains with Rayleigh\/non-Rayleigh inclusions\nare expected to be systematically cooler than\nparticles with Rayleigh inclusions. Such a difference can be of\ngreat importance at a lower temperature regime because it can influence\nthe growth\/destruction of mantles on grains in molecular clouds.\n\n \n\n\\subsection{Infrared features}\\label{ir_b}\n\nIt is well known that the shape of the IR dust features is a good indicator\nof the particle size and chemical composition. 
With an increase of the size,\na feature becomes wider and eventually fades away.\nFor example, in the case of compact spherical grains of astrosil,\nthe 10~$\\mu{\\rm m}$ and 18~$\\mu{\\rm m}$ features disappear when the grain radius\nexceeds $\\sim 2-3 \\,\\mu{\\rm m}$.\nObserved differences in small scale structure of the features\nare usually attributed to variations of the composition\n(e.g., changes of the ratio of magnesium\nto iron in silicates) or material state (amorphous\/crystalline).\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f9.eps}}\n\\caption{Wavelength dependence of the absorption efficiency factors\nfor spherical particles\nof radius $r_{\\rm s, \\, compact}=0.1 \\,\\mu{\\rm m}$.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{abs01}\\end{center}\n\\end{figure}\nIn Fig.~\\ref{abs01} we compare the wavelength dependence of\nthe absorption efficiency factors for particles of the same mass but different\nstructure.\n The upper panel shows results obtained with the EMT-Mie model\nfor particles with Rayleigh inclusions.\nIt can be seen that the central position and the width of the dust features\ndoes not really change.\nLarger changes occur for the layered-sphere model\n(Fig.~\\ref{abs01}, lower panel).\n In this case a growth of ${\\cal P}$ causes a shift of the center\nof the feature to longer wavelengths and its broadening.\n For particles with ${\\cal P}=0.9$, the 10~$\\mu{\\rm m}$ feature transforms into\na plateau while the 18~$\\mu{\\rm m}$ feature disappears.\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f10.eps}}\n\\caption[]\n{Emission in the disc around the star ${\\beta}~\\rm Pictoris$ in the region of silicate\n10\\,$\\mu{\\rm m}$ band.\nStars and squares are the observations of Knacke et al.~(\\cite{kn93})\nand of Telesco \\& Knacke~(\\cite{tk91}). 
The curves present the results\nof calculations for particles of radius $r_{\\rm s, \\, compact}=0.1 \\,\\mu{\\rm m}$\nas shown in Fig.~\\ref{abs01} but normalized at $\\lambda=9.6\\,\\mu{\\rm m}$.\n}\\label{bet}\\end{center}\n\\end{figure}\nWe plotted in Fig.~\\ref{bet} our data from Fig.~\\ref{abs01} in a normalized\nmanner together with observations of ${\\beta}~\\rm Pictoris$ made by\nKnacke et al.~(\\cite{kn93}) and Telesco \\& Knacke~(\\cite{tk91}).\nAs follows from Fig.~\\ref{bet}, for given optical constants of the silicate\nthe observed shape of the 10~$\\mu{\\rm m}$ feature\nis better reproduced by either compact or porous particles\nwith small size inclusions of materials.\nNote that in the case of ${\\beta}~\\rm Pictoris$ the EMT-Mie calculations\nwere earlier used by Li \\& Greenberg~(\\cite{li:gre98}) for the\nexplanation of the 10~$\\mu{\\rm m}$ emission feature and by\nVoshchinnikov \\& Kr\\\"ugel~(\\cite{vk99}) for\nthe interpretation of the positional\nand wavelength dependence of polarization.\nThe best fit was obtained for very\nporous particles: ${\\cal P} \\approx 0.95$ and 0.76, respectively.\n\n\n\\begin{figure}\\begin{center}\n\\resizebox{\\hsize}{!}{\\includegraphics{3371f11.eps}}\n\\caption{Wavelength dependence of the normalized absorption efficiency factors\nfor spherical particles of radius $r_{\\rm s, \\, compact}=2 \\,\\mu{\\rm m}$.\nUpper panel: calculations based on the EMT-Mie theory.\nLower panel: calculations based on the layered-sphere theory.\n}\\label{abs2}\\end{center}\n\\end{figure}\nFigure~\\ref{abs2} shows the normalized absorption efficiency factors\nfor spherical particles of radius $r_{\\rm s, \\, compact}=2 \\,\\mu{\\rm m}$.\nFor compact grains the 10~$\\mu{\\rm m}$ feature almost disappears.\nWhen the porosity increases the strength of the feature grows\nin the case of the Bruggeman-Mie calculations.\nThis tendency coincides with the results\nshown in Fig.~7 of Hage \\& Greenberg~(\\cite{hagr90}) who found\nthat the higher the porosity, the sharper the silicate emission {\\rm became}.\nFor the case of layered spheres the feature becomes only slightly\nstronger but its peak shifts to longer wavelengths.\n\n\nA standard ``compact'' approach to the modelling of\nthe 10~$\\mu{\\rm m}$ feature was used by\nvan Boekel et al.~(\\cite{vb03}, \\cite{vb05})\nand Przygodda et al. (\\cite{pr03}) who considered the flattening of the 10~$\\mu{\\rm m}$ feature\nas an evidence of grain growth in the discs around Herbig Ae\/Be\nstars and T Tauri stars, respectively.\nOur investigations show that the variations\nof the shape of the feature and its position and strength can\nalso be attributed to the change of porosity and relative amount\nof carbon in composite grains of small sizes.\n\n\n\\subsection{Dust opacities}\\label{opa}\n\n\\begin{table*}[htb]\n\\caption[]{Mass absorption coefficients at $\\lambda = 1$\\,mm of\ncompact and porous spheres consisting of\nAC1$^\\ast$ and (or) astrosil$^{\\ast\\ast}$.} \\label{t1}\n\\begin{center}\\begin{tabular}{cccccccccc}\n\\hline\n\\noalign{\\smallskip}\n&\\multicolumn{3}{c}{AC1 + astrosil}&\\multicolumn{3}{c}{astrosil}\n&\\multicolumn{3}{c}{AC1 } \\\\\n\\noalign{\\smallskip} \\cline{2-10} \\noalign{\\smallskip}\n${\\cal P}$ & $\\rho_{\\rm d}$&\\multicolumn{2}{c}{$\\kappa$, cm$^2$\/g}\n& $\\rho_{\\rm d}$ &\\multicolumn{2}{c}{$\\kappa$, cm$^2$\/g}\n& $\\rho_{\\rm d}$ &\\multicolumn{2}{c}{$\\kappa$, cm$^2$\/g} \\\\\n\\noalign{\\smallskip} \\cline{2-10} \\noalign{\\smallskip}\n&&Brugg.--Mie & lay. 
spheres\n&&Brugg.--Mie & lay. spheres\n&&Brugg.--Mie & lay. spheres \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n 0.00 & ~2.58 & 1.58 & 1.58 & ~3.30 & 0.310& 0.310 & ~1.85 & 4.37 & ~4.37\\\\\n 0.10 & ~2.32 & 1.89 & 1.55 & ~2.97 & 0.371& 0.334 & ~1.66 & 5.13 & ~4.60\\\\\n 0.30 & ~1.80 & 2.75 & 1.87 & ~2.31 & 0.548& 0.446 & ~1.30 & 7.09 & ~5.77\\\\\n 0.50 & ~1.29 & 3.83 & 2.57 & ~1.65 & 0.778& 0.646 & 0.925 & 9.22 & ~7.88\\\\\n 0.70 & 0.772 & 3.94 & 4.04 & 0.990 & 0.794& ~1.05 & 0.555 & 9.14 & 11.9 \\\\\n 0.90 & 0.258 & 2.45 & 8.12 & 0.330 & 0.431& ~2.20 & 0.185 & 5.94 & 21.8 \\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n\\end{tabular}\\end{center}\n $^{\\ast}$ $m(\\lambda=1 {\\rm mm})=2.93+0.276i$, $\\rho_{\\rm d}=1.85$\\,g\/cm$^3$\n\n $^{\\ast\\ast}$ $m(\\lambda=1 {\\rm mm})=3.43+0.050i$, $\\rho_{\\rm d}=3.3$\\,g\/cm$^3$\n\\end{table*}\n\nThe dust opacity or\nthe mass absorption coefficient of a grain material $\\kappa(\\lambda)$\nenters directly in the expression for\nthe dust mass of an object $M_{\\rm d}$ which is determined from\noptically thin millimeter emission\n\\begin{equation}\nM_{\\rm d} = \\frac{F_{\\rm mm}(\\lambda) D^2}{\\kappa(\\lambda) B_\\lambda(T_{\\rm d})}.\n \\label{m}\n\\end{equation}\nHere, $F_{\\rm mm}(\\lambda)$ is the observed flux,\n$D$ the distance to the object, $B_\\lambda(T_{\\rm d})$ the Planck function,\n$T_{\\rm d}$ the dust temperature.\nThe mass absorption coefficient $\\kappa(\\lambda)$ depends on\nthe particle volume $V_{\\rm total}$, the material density $\\rho_{\\rm d}$\nand the extinction cross-section $C_{\\rm ext}$ as follows:\n\\begin{equation}\n\\kappa(\\lambda) = \\frac{C_{\\rm ext}}{\\rho_{\\rm d} V_{\\rm total}} \\approx\n \\frac{3}{\\rho_{\\rm d}} \\, \\left(\\frac{2 \\pi}{\\lambda}\\right) \\,\n {\\rm Im} \\left\\{\\frac{\\varepsilon_{\\rm eff}-1}{\\varepsilon_{\\rm eff}+2} \\right\\}.\n \\label{kap}\n\\end{equation}\nAt long wavelengths the scattering can be neglected\n{\\rm ($C_{\\rm ext} \\approx C_{\\rm abs}$)} and\n$C_{\\rm abs}$ {\\rm can be evaluated} in the Rayleigh approximation.\nThen the mass absorption coefficient does not depend on the particle size\nas shown in the right part of Eq.~(\\ref{kap}).\nThe effective dielectric permittivity $\\varepsilon_{\\rm eff}$\nin Eq.~(\\ref{kap}) can be found from the Bruggeman rule (see Eq.~(\\ref{bru})) or\nthe layered-sphere rule of the EMT (see Eqs.~(7), (8) in\nVoshchinnikov et al. \\cite{vih05}).\n\n\nExtensive studies of the mass absorption coefficient dependence\non the material properties and grain shape are\nsummarized by Henning~(\\cite{h96}) who,\nin particular, {\\rm notes} that the opacities at 1~mm are considerably\nlarger for non-spherical particles {\\rm than for} spheres\n(see also Ossenkopf \\& Henning \\cite{oh94}).\nWe find that a similar effect (an increase of opacity in comparison\nwith compact spheres) is\nproduced by inclusion of a large fraction of vacuum\ninto the particles. 
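\n\nAs an illustration of how Eqs.~(\\ref{m}) and (\\ref{kap}) are used in practice, the following short Python sketch evaluates $\\kappa(1\\,{\\rm mm})$ for a porous AC1--astrosil mixture by solving a standard form of the Bruggeman rule referenced in Eq.~(\\ref{bru}) for $\\varepsilon_{\\rm eff}$ and inserting the result into the Rayleigh-limit expression of Eq.~(\\ref{kap}).\nThe optical constants and material densities are those quoted in the footnotes of Table~\\ref{t1}; the solver settings and the choice ${\\cal P}=0.5$ are illustrative assumptions and do not reproduce the exact setup behind the Table.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import root\n\n# Optical constants at lambda = 1 mm (Table 1 footnotes); eps = m^2\neps_ac1 = (2.93 + 0.276j)**2   # AC1, rho = 1.85 g/cm^3\neps_sil = (3.43 + 0.050j)**2   # astrosil, rho = 3.30 g/cm^3\neps_vac = 1.0 + 0.0j           # vacuum\n\ndef bruggeman_eps(fractions, epsilons):\n    # Solve sum_i f_i (eps_i - eps_eff)/(eps_i + 2 eps_eff) = 0 for eps_eff\n    def residual(x):\n        e = complex(x[0], x[1])\n        r = sum(f*(ei - e)/(ei + 2*e) for f, ei in zip(fractions, epsilons))\n        return [r.real, r.imag]\n    start = sum(f*ei for f, ei in zip(fractions, epsilons))  # volume-weighted guess\n    sol = root(residual, [start.real, start.imag])\n    return complex(sol.x[0], sol.x[1])\n\ndef kappa(eps_eff, rho_d, lam_cm):\n    # Mass absorption coefficient of Eq. (kap) in cm^2/g (Rayleigh limit)\n    return 3.0/rho_d * (2.0*np.pi/lam_cm) * ((eps_eff - 1.0)/(eps_eff + 2.0)).imag\n\nP = 0.5                                      # porosity (illustrative)\nf = [0.5*(1 - P), 0.5*(1 - P), P]            # equal AC1 and astrosil fractions\nrho_d = 0.5*(1 - P)*1.85 + 0.5*(1 - P)*3.30  # volume-averaged density\neps_eff = bruggeman_eps(f, [eps_ac1, eps_sil, eps_vac])\nprint(kappa(eps_eff, rho_d, lam_cm=0.1))     # to be compared with Table 1\n\\end{verbatim}\nThe resulting $\\kappa$ enters Eq.~(\\ref{m}) directly: for a given millimeter flux, distance and dust temperature, a larger $\\kappa$ translates into a proportionally smaller dust-mass estimate.\n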
The increase of opacity with porosity can be seen in Table~\\ref{t1},\nwhere the opacities at $\\lambda = 1$\\,mm are presented.\nThis Table contains the results for particles consisting of three\nmaterials (AC1, astrosil and vacuum) or\ntwo materials (AC1 or astrosil and vacuum).\nIn the first case,\nthe volume fractions of AC1 and astrosil are equal\n($V_{\\rm AC1} \/V_{\\rm total} = V_{\\rm astrosil} \/V_{\\rm total} =\n1\/2 \\, (1 - {\\cal P})$) while in the second case the volume fraction\nof solid material is $1 - {\\cal P}$.\nIt can be seen that the values of $\\kappa$ are generally larger for particles\nwith a larger fraction of vacuum. This is related\nto the decrease of the particle density $\\rho_{\\rm d}$ which is calculated\nas a volume-averaged quantity.\n As the mass of dust in an object is proportional to $\\rho_{\\rm d}$\n(see Eqs.~(\\ref{m}) and (\\ref{kap})), the assumption of porous grains\n can lead to considerably smaller mass estimates.\nNote that the opacities are larger for more absorbing\ncarbon particles. A similar effect was noted by\nQuinten et al.~(\\cite{qkhm02}) who theoretically studied the wavelength\ndependence of extinction of different carbonaceous particles.\nThey also showed that the far IR extinction was larger\nfor clusters of spheres and spheroids than for compact spheres.\nA very large enhancement of the submm opacities was found\nby Ossenkopf \\& Henning~(\\cite{oh94}) in the case of pure\ncarbon aggregates or carbon on silicate grains.\n\nEquation~(\\ref{m}) and the data from Table~\\ref{t1} show\nhow the particle porosity and structure\ncan influence estimates of the dust mass in an object.\nIn the Rayleigh--Jeans approximation, the mass ratio can be estimated as\n$$\n\\frac{M_{\\rm d} ({\\rm \\mbox{compact}})}{M_{\\rm d} ({\\rm porous})}\n= \\frac{\\kappa_{\\rm porous}(\\lambda) \\, T_{\\rm d, porous}}\n {\\kappa_{\\rm compact}(\\lambda) \\, T_{\\rm d, compact}}.\n$$\nWith grain temperatures from Fig.~\\ref{td-ppp}\nand the values of $\\kappa$ for composite grains and EMT-Mie theory\n(the third column in Table~\\ref{t1}) we can find that the ratio\n$M_{\\rm d} ({\\rm \\mbox{compact}})\/M_{\\rm d} ({\\cal P} = 0.9)$\nlies between $\\sim 1.3$ and $\\sim 1.5$. If the layered-sphere model\nis used (the fourth column in Table~\\ref{t1}) the ratio increases to\n$3.8 - 4.3$. This means that the calculated mass of an object can be reduced\nif compact grains are replaced by porous ones.\n\nThe ratio of dust masses calculated for two grain models is\n$$\n\\frac{M_{\\rm d} ({\\rm \\mbox{EMT-Mie}})}{M_{\\rm d} ({\\rm layered \\, sphere})}\n= \\frac{\\kappa_{\\rm lay \\, sphere}(\\lambda) \\, T_{\\rm d, lay \\, sphere}}\n {\\kappa_{\\rm EMT-Mie}(\\lambda) \\, T_{\\rm d, EMT-Mie}} \\approx 2.8.\n$$\nThe numerical value was obtained for particles\nconsisting of AC1, astrosil and vacuum with ${\\cal P} = 0.9$\nand the temperature ratio\n$T_{\\rm d, lay \\, sphere}\/T_{\\rm d, EMT-Mie}=0.85$\nas discussed in Sect.~\\ref{temp}.\nIf we consider particles of the same porosity but consisting of two\nmaterials, the ratio of masses will\nbe even larger (3.1 for AC1 and 4.3 for astrosil).\nThus, one can overestimate the mass of an object\nby a factor of 3 or more if the EMT-Mie model is applied,\nsince real dust grains in molecular cloud cores\nshould be very porous and should have non-Rayleigh inclusions.\nAnother case in which the effect can be important is circumstellar\ndiscs, e.g.
Takeuchi et al.~(\\cite{tcl05}) used $\\kappa = 0.3\\,{\\rm cm^2\/g}$\nat $\\lambda = 1$\\,mm for highly porous silicate grains,\nwhich is a good approximation only for particles with small size\ninclusions (see Table~\\ref{t1}).\n\n\\section{Concluding remarks}\\label{concl}\n\nWe have considered how the porosity of composite cosmic dust grains\ncan affect the optical properties that are important for the interpretation of\nobservations of interstellar, circumstellar and cometary dust.\n Two models of particle structure were used.\n Particles of the first kind had well-mixed inclusions\nsmall in comparison with the wavelength,\nwhile those of the second kind consisted of very thin,\ncyclically repeating layers.\n Earlier we showed that the optical properties of such layered particles\nare close to those of particles with small and large inclusions\n(see Voshchinnikov et al. \\cite{vih05}).\n As effective medium theories give reliable results\nfor particles with small inclusions,\ntwo very different particle structure models can be\nsimply realized and extensive computations can be performed.\n\nFor both models, we studied\nhow an increase of the volume fraction of vacuum could change\nthe extinction efficiencies at different wavelengths,\nthe temperature of dust grains, the profiles of the IR silicate bands and\nthe dust millimeter opacities.\n It is found that the models begin to differ\nsubstantially when the porosity exceeds $\\sim 0.5$.\n This difference appears as lower temperatures (Sect.~\\ref{temp}),\nshifted central peaks of the silicate bands\n(Sect.~\\ref{ir_b}) and larger millimeter opacities\n(Sect.~\\ref{opa})\nfor the layered-particle model in comparison with that based\non EMT calculations.\n The latter model also requires larger dust-phase abundances\nthan the layered model (Sect.~\\ref{st_ext})\nto produce the same interstellar extinction.\n\nThe assumption that interstellar particles have only small size inclusions\nlooks somewhat artificial\n(excluding, of course, the case of special laboratory samples).\n Therefore, we believe that the layered-sphere model, which describes well the\nlight scattering by very porous quasispherical particles with\ninclusions of different sizes,\nshould find wide application in the interpretation of various phenomena.\n In particular, this model is promising\nfor explaining the flat interstellar extinction observed\nin the near-IR part of the spectrum (Sect.~\\ref{ir_ext}) and\nthe variations of the shape of the silicate feature detected in spectra\nof T Tau and Herbig Ae\/Be stars\n(the results will be presented in a subsequent paper).\n\n\n\\acknowledgements{\nWe are grateful to Walter Wegner for the possibility to use unpublished data\nand to Bruce Draine for comments on an earlier version of the paper.\nWe are also grateful to the referee Michael Wolff and scientific editor\nAnthony Jones for useful suggestions.\nNVV acknowledges the hospitality of the Max-Planck-Institut f\\\"ur Astronomie\nwhere this work was finished.\nThe work was partly supported by grant 1088.2003.2 of the President of the\nRussian Federation for leading scientific schools.\n}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe properties and dynamics of physical systems are closely tied to their symmetries.\nOften these symmetries are known from fundamental principles.
There are also, however, systems with unknown or emergent symmetries.\nDiscovering and characterizing these symmetries is an essential component of physics research.\n\n\nBeyond their inherent interest, symmetries are also practically useful for increasing the statistical power of datasets for various analysis goals.\nFor example, a dataset can be augmented with pseudodata generated by applying symmetry transformations to existing data, thereby creating a larger training sample for machine learning tasks.\nNeural network architectures can be constructed to respect symmetries (e.g.~convolutional neural networks and translation symmetries~\\cite{6795724}), in order to improve generalization and reduce the number of model parameters.\nFurthermore, symmetries can significantly increase the size of a useful synthetic dataset created from a generative model trained on a limited set of examples~\\cite{2008.06545}.\n\n\nDeep learning is a powerful tool for identifying patterns in high-dimensional data and is therefore a promising technique for symmetry discovery.\nA variety of deep learning methods have been proposed for symmetry discovery and related tasks.\nNeural networks can parametrize the equations of motion for physical systems, which can have conserved quantities resulting from symmetries~\\cite{greydanus2019hamiltonian,cranmer2020lagrangian}.\nGeneric neural networks targeting classification tasks can encode symmetries in their hidden layers~\\cite{Barenboim:2021vzh,Krippendorf:2020gny}. %\nThis possibility can be used to actively learn symmetries by encoding a shared equivariance in hidden layers across learning tasks~\\cite{zhou2021metalearning}.\nDirectly learning symmetries can be framed as an inference problem given access to parametric symmetry transformations of the same dataset~\\cite{benton2020learning}.\nA given symmetry can be identified in data if a classifier is unable to distinguish a dataset from its symmetric counterpart~\\cite{Tombs:2021wae,Lester:2021kur,Lester:2021aks} (similar to anomaly detection methods comparing data to a reference~\\cite{Collins:2018epr,Collins:2019jip,DAgnolo:2018cun}).\nAnother class of targeted approaches can be found in the domain of automatic data augmentation.\nIf a dataset can be augmented without changing its statistical properties, then one has learned a symmetry. Significant advances in this area have used reinforcement learning~\\cite{cubuk2019autoaugment,lim2019fast}. 
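\n\nThe classifier-based criterion mentioned above can be made concrete with a short sketch: train a classifier to separate a dataset from its transformed copy, and regard the transformation as compatible with a symmetry if the classifier cannot do better than random guessing (area under the ROC curve near 0.5).\nThe snippet below is only an illustration of this logic; the toy dataset, the candidate map $h(x)=1-x$, and the network settings are placeholder choices rather than the implementations used in the works cited above.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.model_selection import train_test_split\n\nrng = np.random.default_rng(0)\nx = rng.normal(0.5, 1.0, size=(10000, 1))   # toy dataset\nh = lambda v: 1.0 - v                       # candidate symmetry map to test\n\n# Label originals 0 and transformed samples 1, then try to tell them apart\nfeatures = np.vstack([x, h(x)])\nlabels = np.concatenate([np.zeros(len(x)), np.ones(len(x))])\nx_tr, x_te, y_tr, y_te = train_test_split(features, labels, random_state=0)\n\nclf = MLPClassifier(hidden_layer_sizes=(25, 25), max_iter=500, random_state=0)\nclf.fit(x_tr, y_tr)\nauc = roc_auc_score(y_te, clf.predict_proba(x_te)[:, 1])\nprint(auc)   # AUC close to 0.5 means h is consistent with a symmetry\n\\end{verbatim}\n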
\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images\/Plots\/SchematicSymmetry2.pdf}\n \\caption{\n A schematic diagram of (top) the training setup for a usual GAN and (bottom) the SymmetryGAN variation discussed in this paper for automatically discovering symmetries.\n %\n Here, $g$ is the generator and $d$ is the discriminator.\n %\n Not represented here is the incorporation of the inertial reference dataset.\n %\n In our numerical examples, this is accomplished by directly imposing constraints on $g$.\n }\n \\label{fig:schematic}\n\\end{figure}\n\n\nAn alternative symmetry discovery approach that is flexible, fully differentiable, and simple is based on generative models~\\cite{hataya2019faster,antoniou2018data}.\nUsually, a generative model is a function that maps random numbers to structured data.\nFor example, a deep generative surrogate model can be trained such that the resulting probability density matches that of a target dataset.\nFor symmetry discovery, by contrast, the random numbers are replaced with the target dataset itself.\nIn this way, a well-trained generator designed to confound an adversary will implement a symmetry transformation. \nWe call this generative model framework for symmetry discovery \\emph{SymmetryGAN}, since it has the same basic training strategy as a generative adversarial network (GAN)~\\cite{Goodfellow:2014upx}, as shown in \\Fig{schematic}.\n\n\nIn this paper, we extend the SymmetryGAN approach and introduce it to the physics community.\nIn particular, we build a rigorous statistical framework for describing the symmetries of a dataset and construct a learning paradigm for automatically detecting generic symmetries.\nThe key idea is that symmetries of a target dataset have to be defined with respect to an \\emph{inertial} reference dataset, analogous to inertial frames in classical mechanics.\nOur deep learning setup is simpler than existing approaches and we develop an analytic understanding of the algorithm's performance in simple cases.\nThis in turn allows us to understand the dynamics of the machine learning as it trains from a random initialization to an element of the symmetry group.\n\n\n\n This rest of this paper is organized as follows.\n %\n In \\Sec{stats}, we build a rigorous statistical framework for discovering the symmetries of a dataset, contrasting it with discovering the symmetries of an individual data element.\n %\n Our machine learning approach with an inertial restriction is introduced in \\Sec{MLa} and the deep learning implementation is described in \\Sec{ML}.\n %\n Empirical studies of simple Gaussian examples, including both analytic and numerical results, are presented in \\Sec{results}.\n %\n We then apply our method to a high energy physics dataset in \\Sec{hepexample}.\n %\n In \\Sec{inference}, we discuss possible ways to go beyond symmetry discovery and towards symmetry inference, with further studies in \\App{symmetry_discovery_map}.\n %\n Our conclusions and outlook are in \\Sec{conclusions}.\n\n\n\n\\section{Statistics of Symmetries}\n\\label{sec:stats}\n\nWhat is a symmetry?\nLet $X$ be a random variable on an open set $\\O\\subseteq\\mathbb{R}^n$, and let $x$ be an instantiation of $X$.\nWhen we refer to the symmetry of an individual data element $x \\in X$, we usually mean a transformation $h:\\O\\rightarrow\\O$ such that:\n\\begin{equation}\n h(x) = x,\n\\end{equation}\ni.e.\\ $x$ is invariant to the transformation $h$.\nMore generally, we can consider functions of individual 
data elements, $f:\\O\\subseteq\\mathbb{R}^n\\rightarrow\\O'\\subseteq\\mathbb{R}^m$.\nIn that case, the function is symmetric if\n\\begin{equation}\n\\label{eq:functionsymmetry}\n f(h(x)) = f(x),\n\\end{equation}\ni.e. the output of $f$ is invariant to the transformation $h$ acting on $x$.\nOne can also consider equivariances, where the output of $f$ has well-defined transformation properties under the symmetry~\\cite{Dolan:2020qkr,serviansky2020set2graph,Bogatskiy:2020tje,Shimmin:2021pkm}.\nWhile symmetries acting on individual data elements are interesting, they are \\emph{not} the focus of this paper.\n\n\nWe are interested in the symmetries of a dataset as a whole, treated as a statistical distribution.\nLet $X$ be governed by the probability density function (PDF) $p$.\nNaively, a symmetry of the dataset $X$ is a map $g:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ such that $g$ preserves the PDF:\n\\begin{equation}\n\\label{eq:naivesymmetry}\np(X = x) = p(X = g(x)) \\, |g'(x)|,\n\\end{equation}\nwhere $|g'(x)|$ is the Jacobian determinant of $g$.\nWhile it is necessary that any candidate symmetry preserves the probability density, it is not sufficient, at least not in the usual way that physicists think about symmetries.\n\n\nConsider the simple case of $n=1$.\nLet $F$ be the cumulative distribution function (CDF) of $X$.\n$F(X)$ is itself a random variable satisfying \n\\begin{equation}\n F(X)\\sim\\mathcal{U}[0,1],\n\\end{equation}\nwhere $\\mathcal{U}(\\O)$ is the uniform random variable on $\\O$.\nConversely, $F^{-1}(\\mathcal{U}[0,1])$ is a random variable governed by the PDF $p$ (for technical details, see~\\Ref{10.2307\/2132726}).\nThe uniform distribution on the interval $[0,1]$ has many PDF-preserving maps, such as the quantile inversion map:\n\\begin{equation}\n\\widetilde{g}(x)=1-x.\n\\end{equation}\nThis map has the additional property that $\\widetilde{g}^2(x)=x$, so it appears to represent a $\\mathbb{Z}_2$ (i.e.~parity) symmetry.\nUsing the CDF map from above, every probability density $p$ admits a $\\mathbb{Z}_2$ PDF-preserving map:\n\\begin{equation}\n\\label{eq:PDFpreserveZ2}\n g=F^{-1}\\circ\\widetilde{g}\\circ F.\n\\end{equation}\n\n\nIf we were to accept \\Eq{naivesymmetry} as the definition of a symmetry, then \\emph{all} one-dimensional random variables would have a $\\mathbb{Z}_2$ symmetry, namely the one in \\Eq{PDFpreserveZ2}.\nWhile true in a technical sense, this is not what physicists (or, to our knowledge, any domain experts) think of as a symmetry of a dataset.\nThe precise definition of a symmetry must therefore be stricter than simply PDF-preserving.\nIn particular, while this $\\mathbb{Z}_2$ PDF-preserving map applies to every one-dimensional random variable, it requires a different map for each such variable.\nWhen we usually think about symmetries, we imagine common maps that can be applied to a variety of physical systems that share the same underlying symmetry structure.\n\n\n\n\nThis line of thinking suggests a sharper definition of a symmetry that makes use of a reference distribution.\nConsider two probability densities\n\\begin{equation}\n\\label{eq:twoPDFs}\np:\\mathbb{R}^n\\rightarrow\\mathbb{R}, \\quad p_I:\\mathbb{R}^n\\rightarrow\\mathbb{R}.\n\\end{equation}\nA map $g:\\mathbb{R}^n\\rightarrow\\mathbb{R}^n$ is defined to be a symmetry of $p$ \\emph{relative} to $p_I$ if it is PDF-preserving for both $p$ and $p_I$:\n\\begin{equation}\n\\label{eq:improvedsymmetry}\np(x) = p(g(x)) \\, |g'(x)|, \\quad p_I(x) = p_I(g(x)) \\, 
|g'(x)|.\n\\end{equation}\nThe reference or \\textit{inertial} density $p_I$ is the analogue of an inertial reference frame in classical mechanics.\nThis new definition of a symmetry will typically exclude quantile maps, like $\\widetilde{g}$ above, because the $\\widetilde{g}$ that works for one random variable will typically not work for another (e.g.\\ Gaussian and exponential random variables).\n\n\nWhile this new definition solves the problem of ``fake'' symmetries, it also introduces a dependence on the inertial distribution. \nJust as with inertial reference frames, however, there is often a canonical choice for $p_I$ which reduces the number of possibilities in practice.\nA natural choice for many physics datasets is to pick the uniform distribution on $\\mathbb{R}^{n}$, where $n$ is the dimension of the dataset.\nThis not a proper (i.e.\\ normalizable) probability density, though, so we discuss techniques below to use it as the inertial distribution nonetheless.\n\n\nFinally, it is instructive to relate the definitions of symmetries for datasets and functions.\nGiven the two PDFs in \\Eq{twoPDFs}, we can construct the likelihood ratio\n\\begin{equation}\n \\ell(x) \\equiv \\frac{p(x)}{p_I(x)}.\n\\end{equation}\nApplying the symmetry map $g$ as in \\Eq{improvedsymmetry}, the likelihood ratio transforms as:\n\\begin{equation}\n \\ell(g(x)) = \\frac{p(g(x))}{p_I(g(x))} = \\frac{p(x)}{p_I(x)} = \\ell(x),\n\\end{equation}\nwhere the Jacobian factor $|g'(x)|$ cancels between the numerator and denominator.\nTherefore the likelihood ratio, which is an ordinary function, is symmetric by the definition in \\Eq{functionsymmetry}.\nThis cancelling of the Jacobian factor is an intuitive way to understand why an inertial reference density is necessary to define the symmetry of a dataset.\n\n\n\n\\section{Machine Learning with Inertial Restrictions}\n\\label{sec:MLa}\n\nThe SymmetryGAN paradigm for discovering symmetries in a dataset involves simultaneously learning two functions:\n\\begin{align}\n g:\\mathbb{R}^n &\\rightarrow\\mathbb{R}^n,\\\\\n d:\\mathbb{R}^n &\\rightarrow[0,1].\n\\end{align}\nThe function $g$ is a \\emph{generator} that represents the symmetry map.%\n\\footnote{Here, we are using the machine learning meaning of a ``generator'', which differs from the generator of a symmetry group, though they are closely related.}\nThe function $d$ is a \\emph{discriminator} that tries to distinguish the input data $\\{x_i\\}$ from the transformed data $\\{g(x_i)\\}$.\nWhen the discriminator cannot distinguish the original data from the transformed data, then $g$ will be a symmetry.\nThe technical details of this approach are provided in \\Sec{ML} using the framework of adversarial networks.\n\n\nAs described in \\Sec{stats}, it is not sufficient to require that $g$ preserves the PDF of the input data; it also has to preserve the PDF of the inertial density.\nThere are several methods to implement an inertial restriction into the machine learning strategy.\n\\begin{itemize}\n \\item \\emph{Simultaneous discrimination:}\n %\n In this method, the discriminator $d$ is applied both to the input dataset and to data drawn from the inertial density $p_I$.\n %\n The training procedure penalizes any map $g$ that does not fool $d$ for both datasets.\n %\n In practice, it might be advantageous to use two separate discriminators $d$ and $d_I$ for this approach.\n %\n \\item \\emph{Two stage selection:}\n %\n Here, one first identifies all PDF-preserving maps $g$.\n %\n Then one \\textit{post hoc} 
selects the ones that also preserve the inertial density.\n %\n \\item \\emph{Upfront restriction:}\n %\n If the PDF-preserving maps of $p_I$ are already known, then one could restrict the set of maps $g$ at the outset.\n %\n This allows one to perform an unconstrained optimization on the restricted search space.\n %\n\\end{itemize}\n\n\n\nEach of these methods has advantages and disadvantages.\nThe first two options require sampling from the inertial density $p_I$.\nThis is advantageous in cases where the symmetries of the inertial density are not known analytically.\nWhen $p_I$ is uniform on $\\mathbb{R}^n$ or another unbounded domain, though, these approaches are not feasible.%\n\\footnote{One could try to leverage approximate strategies, such as cutting off the support for $p_I$ a few standard deviations away from the mean of $p$. Still, one can run into edge effects if there is a mismatch between the domain and range of $g$.}\nThe second option is computationally wasteful, as the space of PDF-preserving maps is generally much larger than the space of symmetry maps.\nWe focus on the third option: restricting the set of functions $g$ to be automatically PDF-preserving for $p_I$.\nThis in turn requires a way to parametrize all such $g$, or at least a large subset of them.\n\nFor all of the studies in this paper, we further focus on the case where the inertial distribution $p_I$ is uniform on $\\mathbb{R}^n$.\nFor any open set $\\O\\subseteq\\mathbb{R}^n$, a differentiable function $g:\\O\\to \\O$ preserves the PDF of the uniform distribution $\\mathcal{U}(\\O)$ if and only if $g$ is an equiareal map.%\n\\footnote{By carefully taking suitable limits, these ideas go through even if $\\mathcal{U}(\\O)$ is an improper prior. The important takeaway is that uniform distributions are preserved by equiareal maps.}\nTo see this, note that the PDF of $X\\sim\\mathcal{U}(\\O)$ is $p(X = x) = 1\/\\operatorname{Vol}(\\O)$.\nHence, the PDF-preserving condition $p = p\\circ g\\cdot |g'|$ is met if and only if $|g'| = 1$.\nA map is equiareal if and only if its Jacobian determinant is $1$, which proves our claim.\nTherefore, our search space to discover symmetries of physics datasets will be the space of equiareal maps of appropriate dimension.\nOf course, there are interesting physics symmetries that do not preserve uniform distributions on $\\mathbb{R}^n$; these would require an alternative approach.\n\n\nThe set of equiareal maps for $n > 1$ is not well characterized.\nFor example, even for $n = 2$, not all equiareal maps are linear.\nA simple example of a non-linear area-preserving map is the H\\'{e}non map~\\cite{10.2307\/43635985}: $g(x,y)=(x,y-x^2)$.\nThis makes the space of equiareal maps difficult to directly encode into the learning.\nWhile the general set of equiareal maps is difficult to parametrize, the set of area preserving linear maps on $\\mathbb R^n$ is well understood:\n\\begin{multline*}\n \\mathbb{A} SL^\\pm_n(\\mathbb{R}) = \\{g: \\mathbb{R}^n\\to\\mathbb{R}^n \\; | \\; g(x) = Mx + V,\\\\ M\\in \\mathbb{R}^{n\\times n}, \\det M = \\pm 1, V\\in \\mathbb{R}^n\\}.\n\\end{multline*}\nThis is a subgroup of the general affine group $\\operatorname{Aff}_n(\\mathbb{R})$, and it can be characterized as a topological group of dimension $n(n+1) - 1$.\nThese maps even have complete parametrizations such as the \\textit{Iwasawa decomposition}~\\cite{10.2307\/1969548} which significantly aid the symmetry discovery process.\n\n\nNot all symmetries are linear, however, and if one chooses 
$\\mathbb{A} SL^\\pm_n(\\mathbb{R})$ as the search space, one cannot discover non-linear maps.\nEven so, the subset of symmetries discoverable within $\\mathbb{A} SL^\\pm_n(\\mathbb{R})$ is rich enough, and the benefits of having a known parameterization valuable enough, that we focus on linear symmetries in this paper and leave the study of non-linear symmetries to future work.\n\n\n\\section{Deep Learning Implementation}\n\\label{sec:ML}\n\n\nTo implement the SymmetryGAN procedure, we modify the learning setup of a GAN~\\cite{Goodfellow:2014upx}.\nFor a typical GAN, a generator function $g$ surjects a latent space onto a data space.%\n\\footnote{While all the GANs discussed here are (approximately) bijective, GANs in general need not be. Symmetry discovery requires the generator to be bijective, so one may want to leverage nomalizing flows~\\cite{10.5555\/3045118.3045281,Kobyzev2020} in future work.}\nThen, a discriminator distinguishes generated examples from target examples.\n\n\nFor a SymmetryGAN, the latent probability density is the \\emph{same} as the target probability density, as illustrated in \\Fig{schematic}.\nThe generator $g$ and discriminator $d$ are parametrized as neural networks.\nThey are then trained simultaneously to optimize the binary cross entropy loss functional:\n\\begin{align}\n\\label{eq:numericloss}\n L[g,d]=-\\frac1N\\sum_{x\\in\\{x_i\\}_{i=1}^N}\\Big[\\log\\big(d(x)\\big) + \\log\\big(1-d(g(x))\\big)\\Big]\\,.\n\\end{align}\nThis differs from the usual binary cross entropy in that the same samples appear in the first and second terms.\nA similar structure appears in neural resampling~\\cite{Nachman:2020fff} and in step 2 of the \\textsc{OmniFold} algorithm~\\cite{Andreassen:2019cjw}.\nFollowing \\Sec{MLa}, we assume that the generator $g$ already preserves the inertial distribution. \n\n\nThe behavior of \\Eq{numericloss} can be understood analytically by considering the limit of infinite data:\n\\begin{align}\\nonumber\n L[g,d]&=-\\int \\Big[\\log\\big(d(x)\\big) \\, p(x)\\\\\\label{eq:loss}\n &\\qquad+\\log\\big(1-d(g(x))\\big)\\, p(g(x))\\, |g'(x)|\\Big]\\dd x\\,,\n\\end{align}\nwhere the Jacobian factor $|g'(x)|$ is now made manifest.\nFor a fixed $g$, the optimal $d$ is the usual result from binary classification (see e.g.\\ \\Ref{hastie01statisticallearning,sugiyama_suzuki_kanamori_2012}):\n\\begin{align}\n\\label{eq:optimalf}\n d_*=\\frac{p(x)}{p(x)+p(g(x)) \\, |g'(x)|}\\,,\n\\end{align}\nwhich is the ratio of the probability density of the first term in \\Eq{loss} to the sum of the densities of both terms.\nInserting $d_*$ into \\Eq{loss} and optimizing using the Euler-Lagrange equation:\n\\begin{equation}\n\\frac{\\delta L[g,g']}{\\delta g}=\\frac{\\partial L}{\\partial g}-\\frac{\\dd}{\\dd x} \\frac{\\partial L}{\\partial g'} = 0,\n\\end{equation}\none can show that the optimal $g$ satisfies\n\\begin{equation}\np(x)=p(g_*(x)) \\, |g_{*}'(x)|,\n\\end{equation}\ni.e.\\ $g$ is PDF-preserving as in \\Eq{naivesymmetry}.\nFor such a $g$, we have that $d_* = \\frac12$, the loss is maximized at a value of $2 \\log 2$, and the discriminator is maximally confounded. 
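\n\nA minimal \\textsc{TensorFlow} sketch of this setup is shown below for one-dimensional data, with the generator restricted to the linear family $g(x) = b + c\\,x$ used in \\Sec{results} and the discriminator architecture described there (two hidden layers of 25 ReLU nodes with a sigmoid output).\nThe learning rates, number of steps, initialization of $(b,c)$, and the small constants inside the logarithms are illustrative choices, not necessarily those used for the results reported here.\n\\begin{verbatim}\nimport tensorflow as tf\n\nx = tf.random.normal([128, 1], mean=0.5, stddev=1.0)  # toy 1D Gaussian data\n\n# Generator restricted to the linear family g(x) = b + c*x\nb = tf.Variable(0.3)\nc = tf.Variable(-0.8)\ndef g(v):\n    return b + c * v\n\n# Discriminator: two hidden layers of 25 ReLU nodes, sigmoid output\nd = tf.keras.Sequential([\n    tf.keras.layers.Dense(25, activation='relu'),\n    tf.keras.layers.Dense(25, activation='relu'),\n    tf.keras.layers.Dense(1, activation='sigmoid'),\n])\n\nopt_d = tf.keras.optimizers.Adam(1e-3)\nopt_g = tf.keras.optimizers.Adam(1e-3)\n\nfor step in range(2000):\n    with tf.GradientTape(persistent=True) as tape:\n        # Binary cross entropy loss L[g,d]; the same samples appear in both terms\n        loss = -tf.reduce_mean(tf.math.log(d(x) + 1e-8)\n                               + tf.math.log(1.0 - d(g(x)) + 1e-8))\n    # Discriminator descends the loss; generator ascends it\n    opt_d.apply_gradients(zip(tape.gradient(loss, d.trainable_variables),\n                              d.trainable_variables))\n    grads_bc = tape.gradient(loss, [b, c])\n    opt_g.apply_gradients(zip([-gr for gr in grads_bc], [b, c]))\n    del tape\n\nprint(b.numpy(), c.numpy())  # expect roughly (b, c) = (0, 1) or (1, -1)\n\\end{verbatim}\n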
The generator tries to maximize the loss with respect to $g$ and the discriminator tries to minimize the loss with respect to $d$.\n\n\nThe SymmetryGAN approach has the potential to find any symmetry representable by $g(x)$.\nTo target a particular symmetry subgroup, $G \\leq \\mathbb{A} SL_n^\\pm(\\mathbb{R})$, we can add a term to the loss function.\nFor example, to discover a cyclic symmetry group, $G = \\mathbb{Z}_q, q\\in\\mathbb N$, the loss function can be augmented with a mean squared error term:\n\\begin{align}\n\\label{eq:cyclicloss}\n L[g,d] = L_\\text{BCE}[g,d]-\\frac\\alpha N\\sum_{x\\in\\{x_i\\}_{i=1}^N}(g^q(x) - x)^2,\n\\end{align}\n %\n %\nwhere $L_\\text{BCE}$ is the binary cross entropy loss in \\Eq{numericloss}, $g^q$ is $g$ composed with itself $q$ times, and $\\alpha>0$ is a weighting hyperparameter.\nA SymmetryGAN with this loss function will discover the largest subgroup of $G$ that is a symmetry of the dataset.\n\n\n\\section{Empirical Gaussian Experiments}\n\\label{sec:results}\n\nIn this section, we study the SymmetryGAN approach both analytically and numerically in a variety of simple Gaussian examples.\nFor the empirical studies here and in \\Sec{hepexample}, all neural networks are implemented using \\textsc{Keras}~\\cite{keras} with the \\textsc{Tensorflow} backend~\\cite{tensorflow} and optimized with \\textsc{Adam}~\\cite{adam}.\nThe generator function $g$ is parametrized as a linear function, with constraints that vary by example and are described further below.\nThe discriminator function $d$ is parametrized with two hidden layers, using 25 nodes per layer.\nRectified Linear Unit (ReLU) activation functions are used for the intermediate layers and a sigmoid function is used for the last layer.\nFor the empirical studies, $128$ events are generated for each example.\n\n\\subsection{One-Dimensional Gaussian}\n\\label{sec:1d_example}\n\nOur first example involves data drawn from a one-dimensional Gaussian distribution with a $\\mathbb{Z}_2$ reflection symmetry.\nData are distributed according to the probability distribution $\\mathcal N(0.5, 1.0),$ i.e.\\ a Gaussian with $\\mu = 0.5$ and $\\sigma^2 = 1.0$.\nThis distribution has precisely two symmetries, both linear:\n\\begin{equation}\n\\label{eq:1D_minima}\ng(x) = x, \\qquad g(x) = 1-x.\n\\end{equation}\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images\/Plots\/Z2analytic.pdf}\n \\caption{\n %\n The analytic loss landscape in the slope ($c$) vs.\\ intercept ($b$) space for the one-dimensional Gaussian example.\n %\n The two maxima are indicated by stars.}\n \\label{fig:Z2analytic}\n\\end{figure}\n\n\nImplicitly, we are taking the inertial distribution to be uniform on $\\mathbb{R}$.\nAs stated earlier, the PDF-preserving maps of $\\mathcal{U}(\\mathbb{R})$ are equiareal.\nIn one dimension, the only equiareal maps are linear.\nLinear maps in one dimension are defined by two numbers, so the generator function can be parametrized as\n\\begin{equation}\n \\label{eq:linear_form}\n g(x) = b + c \\, x.
\n\\end{equation}\nIn \\Fig{Z2analytic}, we show the analytically computed loss from \\Eq{loss} as a function of $b$ and $c$.\nIn this figure, the discriminator $d$ is taken to be the analytic optimum in \\Eq{optimalf}.\nThere are two maxima in the loss landscape, one corresponding to each of the linear symmetries from \\Eq{1D_minima}.\nHere, and in most subsequent examples below, we have shifted the output such that maximum loss value is $0$.\n\n\nAnother interesting feature of the loss landscape is the deep minimum at $c=0$ that divides the space into two parts.\nThis gives rise to the prediction that, under gradient descent, the neural network will find $g(x)= 1-x$ when $c$ is initialized negative and find $g(x) = x$ when $c$ is initialized positive.\nIn the edge case when $c$ is initialized to precisely zero, the generator is degenerate and no longer even bijective and the outcome is indeterminate, but the likelihood of sampling $c$ to be precisely zero is, of course, zero.\nFor the rest of the paper, we ignore such edge cases.\nThere are no such features in the loss landscape as a function of $b$, suggesting that there should be little dependence on the initial value of $b$.\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/b_fc_f.pdf}\n \\label{fig:Z2numeric_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/c_ic_f.pdf}\n \\label{fig:Z2numeric_ii}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/b_ib_f.pdf}\n \\label{fig:Z2numeric_iii}}\n \\caption{\n %\n The empirical symmetry discovery process for the one-dimensional Gaussian example.\n %\n The initial parameters have a subscript $i$ and the final parameters have a subscript $f$.\n %\n (i) Final slope ($c_f)$ vs.\\ final intercept ($b_f$), showing that the network finds the two maxima.\n %\n (ii) Final slope ($c_f)$ vs.\\ initial slope ($c_i$), showing the phase transition at $c_i = 0$.\n %\n (iii) Final intercept ($b_f)$ vs.\\ initial intercept ($b_i$), showing the independence on $b_i$.}\n \\label{fig:Z2numeric}\n\\end{figure*}\n\n\n These predictions are tested empirically in \\Fig{Z2numeric}, where the initialized parameters are $(b_i,c_i)\\sim \\mathcal{U}([-5, 5]^2)$ and the learned parameters are $(b_f,c_f)$.\n %\n In \\Fig{Z2numeric_i}, there are distinct clusters at $(b_f, c_f) = (0, 1)$ and $(1, -1)$,\n showing that the SymmetryGAN correctly finds both symmetries of the distribution and nothing else.\n %\n In \\Fig{Z2numeric_ii}, there is a demonstration of the loss barrier in slope space; if the initial slope is positive, the final slope is $+1$, whereas if the initial slope is negative, the final slope is $-1$.\n %\n Finally, \\Fig{Z2numeric_iii} shows the absence of a loss barrier in intercept space; the final intercepts are scattered between $0$ and $1$ independent of the initialized intercept. 
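\n\nThe analytic landscape in \\Fig{Z2analytic} that underlies these predictions can be reproduced numerically by inserting the optimal discriminator of \\Eq{optimalf} into \\Eq{loss} and integrating over $x$ for a grid of $(b, c)$ values.\nThe sketch below does this for the one-dimensional Gaussian example; the grid ranges, integration window, and regularization constant are illustrative choices.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import norm\n\np = norm(loc=0.5, scale=1.0).pdf      # data density of the 1D example\nx = np.linspace(-30.0, 30.0, 6001)    # integration grid\neps = 1e-300                          # guards against log(0) in the far tails\n\ndef analytic_loss(b, c):\n    # Loss with the optimal discriminator d* = p / (p + (p o g)|g'|)\n    q = p(b + c*x) * abs(c)           # p(g(x)) |g'(x)| for g(x) = b + c*x\n    d_star = p(x) / (p(x) + q)\n    integrand = np.log(d_star + eps)*p(x) + np.log(1.0 - d_star + eps)*q\n    return -np.trapz(integrand, x)\n\nbs = np.linspace(-5.0, 5.0, 101)\ncs = np.linspace(-5.0, 5.0, 101)\nlandscape = np.array([[analytic_loss(b, c) for b in bs] for c in cs])\nprint(landscape.max())                # approaches 2 log 2 at the two symmetries\n\\end{verbatim}\nShifting the resulting array by its maximum reproduces the normalization used in \\Fig{Z2analytic}, with the two symmetries $(b,c) = (0,1)$ and $(1,-1)$ appearing as maxima separated by the deep minimum at $c = 0$.\n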
\n %\n We discuss further the \\emph{symmetry discovery map} from initialized to learned parameters in \\Sec{symmetry_discovery_map} and \\App{symmetry_discovery_map}.\n \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Images\/Plots\/restrictedZ2.pdf}\n \\caption{\n The analytic loss landscape for the restricted generator $g(x) = 1 + cx$, with two local maxima at $c = -1$ and $c = 0.5$.}\n \\label{fig:Z2restricted}\n\\end{figure}\n \nIn the above example, the parameterization of $g$ was sufficiently flexible that the SymmetryGAN could find both symmetries and the loss landscape had no other maxima.\nIf the space is incompletely parameterized, though, then local maxima can manifest as false symmetries.\nFor example, suppose instead of a two parameter $g$ as above, $g$ were parameterized as $g(x) = 1 + c\\, x$.\nThe corresponding analytic loss landscape is shown in \\Fig{Z2restricted}.\nA SymmetryGAN initialized with a negative slope correctly finds the only symmetry of this form, $g(x) = 1 - x$, but a neural network initialised with positive slope is unable to cross over the loss barrier at $c = 0$ and instead settles at the locally loss maximizing $g(x) = 1 + 0.5 \\,x$.\nWhile our investigations of $\\mathbb{A} SL_n^\\pm(\\mathbb{R})$ suggest that this does not happen with the full parametrization, the topology of the set of equiareal maps is not known and therefore obstructions like the one illustrated here are possible.\nIt is always possible to check if a solution is a symmetry, however.\nSpecifically, one can apply the learned function to the data and train a \\textit{post hoc} discriminator to ensure that its performance is equivalent to random guessing.\nFor an analytic symmetry, we know that at the point of loss maximisation $p = p\\circ g\\cdot |g'|$, and consequently $d = \\frac{p}{p + p\\circ g\\cdot |g'|} = \\frac12$.\nHence, at the global (symmetry) maxima, $L = - \\frac1N\\sum_{x_i}\\qty[\\log d + \\log (d\\circ g)] = 2\\log2$.\nOn the other hand, there is no way for the neural network to get stuck at non-symmetry local maxima with $L = 2 \\log 2$.\nHence, the true symmetries can be distinguished from local optima by checking the value of the loss.\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/SO2symmAnalytic.pdf} \\label{fig:SO2_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/4O2asymmAnalytic.pdf}\n \\label{fig:SO2_iii}}\n %\n \\caption{\n The analytic loss landscapes overlaid with empirically discovered symmetries for the two-dimensional Gaussian examples with the generator restriction in \\Eq{rotation}.\n %\n (i) The Gaussian $N_{1,1}$ with uniform covariance, which has loss maxima on the unit circle $c^2 + s^2 = 1$.\n %\n(ii) The Gaussian $N_{1,2}$ whose covariance matrix has non-equal diagonal elements, which only has symmetries at $c = \\pm 1$ and $s = 0$.\n }\n \\label{fig:SO2}\n\\end{figure*}\n\n\\subsection{Two-Dimensional Gaussian}\n\\label{sec:2d_example}\n\nNext, we consider cases of two-dimensional Gaussian random variables.\nThese examples offer much richer symmetry groups for study as well as a greater scope for variations.\nWe take the inertial distribution to be uniform on $\\mathbb{R}^2$.\n\n\nWe start with the standard normal distribution in two dimensions,\n\\begin{equation}\nN_{1,1}\\equiv\\mathcal{N}\\qty(\\vec{0},\\mathbbm{1}_2),\n\\end{equation}\nwhere $\\mathbbm{1}_n$ is the $n\\times n$ identity matrix.\nThis distribution has 
as its linear symmetries all rotations about the origin and all reflections about lines through the origin, which constitute the group $O(2)$.\nFor further exploration, we consider a two-dimensional Gaussian with covariance not proportional to the identity,\n\\begin{equation}\nN_{1,2} \\equiv \\mathcal{N}\\qty(\\vec{0},\\mqty[1&0\\\\0&2]).\n\\end{equation}\nThe symmetry group of this distribution is quite complicated and described below.\nAmong other features, it contains the Klein 4--group, $V_4 = \\qty{\\mathbbm{1}, -\\mathbbm{1}, \\sigma_3, -\\sigma_3}$, for Pauli matrix $\\sigma_3$.\n\n\n\nThe linear search space that preserves $\\mathbb{R}^2$, the general affine group in two dimensions, $\\operatorname{Aff}_2(\\mathbb{R}) = \\mathbb{A} GL_2(\\mathbb{R})$, has six real parameters.\nBefore exploring the entire space, we first examine the subspace:\n\\begin{align}\n\\label{eq:rotation}\ng(X) = \\mqty[c&s\\\\-s&c]\\,X,\n\\end{align}\nfor $c, s\\in \\mathbb{R}^\\times$, where $\\mathbb{R}^\\times$ is the set of non-zero real numbers.\nWhile this is only a rotation if $c^2+s^2=1$, we want to test if a SymmetryGAN can discover this relationship starting from this more general representation.\nThe symmetries represented by \\Eq{rotation} are a subgroup of $GL_2(\\mathbb{R})$: $SO(2)\\times \\mathbb{R}^+ = \\ev{\\theta, r|\\theta\\in [0, 2\\pi), r\\in\\mathbb{R}^+}$, where $\\mathbb{R}^+$ is the set of positive real numbers.\nFor the $ N_{1,1}$ Gaussian, this means looking for the $r = 1$ subgroup, which is indicated by the red circle in the loss landscape in \\Fig{SO2_i}.\nTo test the SymmetryGAN, we sample the parameters $c$ and $s$ uniformly at random from $[-1, 1]^2$, and the learned $c$ and $s$ values correspond to the expected $SO(2)$ unit circle, also shown in \\Fig{SO2_i}.\nWe repeat this exercise for the $ N_{1,2}$ Gaussian in \\Fig{SO2_iii}, where the SymmetryGAN discovers the $\\mathbb{Z}_2$ subgroup of $V_4$ generated by a rotation by $\\pi$.\n\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/C2MSE.pdf}\n \\label{fig:MSE_i}\n }\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/C3MSE.pdf}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/C7MSE.pdf}}\\\\\n\n \\caption{The analytic loss landscapes overlaid with empirically discovered symmetries for the $N_{1,1}$ example with a cyclic-enforcing term added to the loss, to be compared to \\Fig{SO2_i}.\n %\n The cases studied are (i) $\\mathbb{Z}_2$, (ii) $\\mathbb{Z}_3$, and (iii) $\\mathbb{Z}_7$.}\n \\label{fig:MSE}\n\\end{figure*}\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6O2d-tsymm.pdf}} $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2symmRU.pdf}} $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2symmAB.pdf}}\\\\\n \\caption{\n %\n Slices through the analytic loss landscape together with empirically discovered symmetries for $\\mathcal N_{1,1}$ with the full $\\mathbb{A} GL_2(\\mathbb{R})$ search space.\n %\n (i) The determinant-rotation angle space. The maxima are indicated by vertical red lines.\n %\n (ii) The dilatation-shear space. The maximum is indicated by a red star.\n %\n (iii) The affine translation space. 
The maximum is indicated by a red star at the origin.}\n \\label{fig:AGL2symm}\n\\end{figure*}\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2asymmDU.pdf}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2asymmTR.pdf}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.31\\textwidth]{Images\/Plots\/6GL2asymmAB.pdf}}\\\\\n \n \\caption{\n %\n Similar to \\Fig{AGL2symm} but for the $N_{1,2}$ distribution.\n %\n (i) The determinant-sheer space. The maxima are indicated by two red stars.\n %\n (ii) The dilatation-rotation angle space. The maxima are indicated by four red stars.\n %\n (iii) The affine translation space. The maximum is indicated by a red star at the origin.}\n \\label{fig:AGL2asymm}\n\\end{figure*}\n\n\n\nThis two-dimensional example allows us to test the approach in \\Eq{cyclicloss} for finding $\\mathbb{Z}_q$ subgroups of the full symmetry group.\nRestricting our attention to the $N_{1, 1}$ example and the $SO(2)\\times \\mathbb{R}^+$ subgroup in \\Eq{rotation}, we add the cyclic-enforcing mean squared error term to the loss with $\\alpha=0.1$.\nResults are shown in \\Fig{MSE} for $q = 2$, $3$, and $7$, where the analytic loss optima and empirically found symmetries are broken into discretely many solutions, with the number corresponding to the $q^{\\textrm{th}}$ roots of unity, as expected.\n\nWe now consider the general affine group, $\\operatorname{Aff}_2(\\mathbb{R})$.\nIn two dimensions, the elements of this group can be represented as a matrix with 6 parameters:\n\\begin{itemize}\n \\item $d\\in\\mathbb{R}^\\times$, the determinant;\n \\item $\\theta\\in [0, 2\\pi)$, the angle of rotation;\n \\item $r\\in \\mathbb{R}^+$, the dilatation;\n \\item $u\\in \\mathbb{R}$, the shear in the $x$-direction; and\n \\item $(a, b)\\in \\mathbb{R}^2$ the overall affine shift.\n\\end{itemize}\nBy Iwasawa's decomposition~\\cite{10.2307\/1969548}, the full transformation can be written as\n\\begin{align}\n&g(X) =\\sqrt{|d|}\\,\\mqty[1&0\\\\0&-1]^{\\delta}\\mqty[c_\\theta&s_\\theta\\\\-s_\\theta&c_\\theta]\\mqty[r&0\\\\0&\\frac{1}{r}]\\mqty[1&u\\\\0&1]\\,X+\\mqty[a\\\\b]\\,,\n\\end{align}\nwhere $\\delta=\\frac{1 - \\operatorname{sgn}(d)}{2}$ and $c_\\theta=\\cos(\\theta)$ and $s_\\theta=\\sin(\\theta)$.\n\n\n\nFor the distribution $N_{1,1}$, the symmetry group is $O(2)$, described by the parameters $d = \\pm 1, \\theta\\in[0, 2\\pi), r = 1$, and $u = a = b = 0$.\nVisualizing this space is difficult, but multiple slices through the analytic loss landscape are presented in \\Fig{AGL2symm}.\nThe neural network is trained over all six parameters of the Iwasawa decomposition of $\\operatorname{Aff}_2(\\mathbb{R})$.\nThe empirically discovered symmetries, shown as yellow dots in \\Fig{AGL2symm}, are two-parameter slices of the discovered symmetry group, where slices are chosen such that the parameters not under study are closest to $d = r = 1$, $\\theta = a = b = 0$.\nThe empirical data agree well with the predictions.\n\n\n\n\n\nThe same analysis of $N_{1,2}$ is more complex because the corresponding symmetry group is more complicated than for $N_{1,1}$.\nWhen $r = 1$ and $u = 0$, the symmetries are the $V_4$ we saw earlier ($\\theta = 0,\\pi$ and $d = \\pm 1$).\nBy varying $r$ and $u$, however, one can in fact undo the symmetry-breaking induced by the non-identity covariance, thereby restoring the rotational symmetry.\nFor example, when $r = \\sqrt{2}$, $N_{1,2}$ is transformed into a 
Gaussian with covariance $\\mathrm{diag}[2,1]$, therefore $r = \\sqrt 2$ and $\\theta = \\frac\\pi2, \\frac{3\\pi}2$ constitutes a symmetry.\nIt is difficult to describe the whole symmetry group in closed form, or even to visualise it because it does not live in any single planar slice of $\\mathbb{A} GL_2(\\mathbb{R})$.\nAs shown for various parameter slices in \\Fig{AGL2asymm}, though, the empirical results agree well with the analytic predictions.\n\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\n \\includegraphics[height=0.4\\textwidth]{Images\/Plots\/1d_bimodalplot.pdf}\n \\label{fig:otherdistributions_1Di}\n }\n \\subfloat[]{\n \\includegraphics[height=0.4\\textwidth]{Images\/Plots\/1d_bimodalsymm.pdf}\n \\label{fig:otherdistributions_1Dii}\n }\n \\caption{\n Empirical distribution (i) and empirically discovered symmetries overlaid on the analytic loss landscape (ii) for a one-dimensional bimodal distribution inspired by \\Ref{fisher2018boltzmann}.\n }\n \\label{fig:otherdistributions_1D}\n\\end{figure*}\n\n\\begin{figure*}\n \\subfloat[]{\\includegraphics[height=0.27\\textwidth]{Images\/Plots\/2d_circularplot.pdf}}\\subfloat[]{ \\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2doctagonalrotations.pdf}}\\subfloat[]{ \\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2doctagonalreflections.pdf}}\\\\\n \\subfloat[]{\\includegraphics[height=0.27\\textwidth]{Images\/Plots\/2d_squareplot.pdf}}\\subfloat[]{\\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2dsquarerotations.pdf}}\\subfloat[]{\\includegraphics[height=0.28\\textwidth]{Images\/Plots\/2dsquarereflections.pdf}}\n \\caption{\n Empirical distributions (left column) and empirically discovered rotations (middle column) and reflections (right column) overlaid on the analytic loss landscape for two two-dimensional Gaussian mixture models inspired by \\Ref{fisher2018boltzmann}.\n %\n The studied examples are (i,ii,iii) a two-dimensional octagonal distribution, and (iv,v,vi) a two-dimensional 5$\\times$5 distribution. 
Note that antipodal points on (iii) and (vi) represent the same reflection.}\n \\label{fig:otherdistributions_2D}\n\\end{figure*}\n\n\n\\subsection{Gaussian Mixtures}\n\n\nAs our last set of simple examples, we apply the SymmetryGAN approach to three Gaussian mixture models, inspired by the examples in \\Ref{fisher2018boltzmann}.\nThe first is a one-dimensional bimodal probability distribution:\n\\begin{align}\np(x) = \\frac12\\mathcal N (-1, 1) + \\frac12 \\mathcal N (1, 1),\n\\end{align}\nwhich respects the $\\mathbb{Z}_2$ symmetry group $g(x) = \\pm x$.\nThe empirical distribution for this example is shown in \\Fig{otherdistributions_1Di}.\nStarting from the generator for linear transformations in \\Eq{linear_form}, SymmetryGAN finds the predicted symmetries with great accuracy, as shown in \\Fig{otherdistributions_1Dii}.\n\n\n\n\n\nWe next consider two two-dimensional Gaussian mixtures.\nThe octagonal distribution,\n\\begin{align}\np(x) = \\frac18\\sum_{i = 1}^8\\mathcal N\\left(\\cos\\frac{2\\pi i}{8}, 0.1\\right)\\times \\mathcal N\\left(\\sin\\frac{2\\pi i}{8}, 0.1\\right),\n\\end{align}\nhas the dihedral symmetry group of an octagon $D_{8}$.\nThe two-dimensional $5\\times 5$ square distribution,\n\\begin{align}\np(x) = \\frac1{25} \\sum_{i = 1}^5\\sum_{j = 1}^5\\mathcal N (i - 2 , 0.1)\\times\\mathcal N (j-2, 0.1),\n\\end{align}\nhas the symmetry group of a square $D_4$.\nWe use the generator \\begin{equation}\n \\label{eq:O2}\n g(X) = \\mqty[c&s\\\\-s&(-1)^\\delta c]X,\n\\end{equation}\nwhich can discover the entire symmetry subgroup (rotations and reflections) in $O(2)$.\nData sampled from these distributions are shown in the left column of \\Fig{otherdistributions_2D}.\nIn the middle and right columns of \\Fig{otherdistributions_2D}, we see that SymmetryGAN finds the expected rotations and reflections, respectively.\n\n\n\\section{Particle Physics Example}\n\\label{sec:hepexample}\n\nWe now turn to an application of SymmetryGANs in particle physics.\nHere, we are interested in whether this approach can recover well-known azimuthal symmetries in collider physics and possibly identify symmetries that are not immediately obvious.\n\n\\subsection{Dataset and Preprocessing}\n\nThis case study is based on dijet events.\nJets are collimated sprays of particles produced from the fragmentation of quarks and gluons, and pairs of jets are one of the most common configurations encountered at the LHC.\nWith a suitable jet clustering algorithm, each jet has a well-defined momentum, and we can search for symmetries of the jet momentum distributions.\n\nThe dataset we use is the background dijet sample from the LHC Olympics anomaly detection challenge~\\cite{gregor_kasieczka_2019_4536377,Kasieczka:2021xcg}.\nThese events are generated using \\texttt{Pythia} 8.219~\\cite{Sjostrand:2006za,Sjostrand:2007gs} with detector simulation provided by \\texttt{Delphes} 3.4.1~\\cite{deFavereau:2013fsa,Mertens:2015kba,Selvaggi:2014mya}.\nThe reconstructed particle-like objects in each event are clustered into\n$R=1$ anti-$k_T$~\\cite{Cacciari:2008gp} jets using \\texttt{FastJet} 3.3.0~\\cite{Cacciari:2011ma,Cacciari:2005hq}.\nAll events are required to satisfy a single $p_T>1.2$~TeV jet trigger, and our analysis is based on the leading two jets in each event, where leading refers to the ones with the largest transverse momenta ($p_T^2 = {p_x^2 + p_y^2}$).\n\n\nEach event is represented as a four-dimensional vector:\n\\begin{equation}\nX = (p_{1x},p_{1y},p_{2x},p_{2y}),\n\\end{equation}\nwhere
$p_1$ refers to the momentum of the leading jet, $p_2$ represents the momentum of the subleading jet, and $x$ and $y$ are the Cartesian coordinates in the transverse plane.\nWe focus on the transverse plane because the jets are typically back-to-back in this plane as a result of momentum conservation.\nThe longitudinal momentum of the parton-parton interaction is not known and so there is no corresponding conservation law for $p_z$.%\n\\footnote{In principle, we could use SymmetryGAN to confirm the absence of a symmetry in $p_z$.}\n\nSince we have a four-dimensional input space, a natural search space for symmetries is $SO(4)$, the group of all rotations on $\\mathbb{R}^4$. Before exploring the whole candidate symmetry space, we first consider an $SO(2)\\times SO(2)$ subspace where the two leading jets are independently rotated.\n\n\\subsection{$SO(2) \\times SO(2)$ Subspace}\n\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[height=0.45\\textwidth]{Images\/Plots\/LHCO.pdf}\n \\label{fig:LHCO_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[height=0.45\\textwidth]{Images\/Plots\/LHCO_avg.pdf}\n \\label{fig:LHCO_ii}}\n \\caption{\n %\n (i) Empirically discovered symmetries in the LHC Olympics dijet dataset.\n %\n The final values of $\\theta_1$ and $\\theta_2$ from the SymmetryGAN are plotted over the line $\\theta_1 = \\theta_2$.\n %\n (ii) The map between initial and final symmetry parameters.\n %\n The final rotation angle is the average of the initialized rotation angles, offset by $\\pi$ if the angle between the initialized angles is reflex. \n }\n \\label{fig:LHCO}\n\\end{figure*} \n\nBecause of momentum conservation, we expect that only those rotations that simultaneously rotate both jets by the same angle will be symmetries.\nWe start from a generic $SO(2)\\times SO(2)$ group element:\n\\begin{equation}\n g_{\\theta_1, \\theta_2}\\mqty[p_{1x}\\\\ p_{1y}\\\\p_{2x}\\\\p_{2y}] = \\mqty[\\cos\\theta_1&\\sin\\theta_1&0&0\\\\-\\sin\\theta_1&\\cos\\theta_1&0&0\\\\0&0&\\cos\\theta_2&\\sin\\theta_2\\\\0&0&-\\sin\\theta_2&\\cos\\theta_2]\\mqty[p_{1x}\\\\ p_{1y}\\\\p_{2x}\\\\p_{2y}],\n\\end{equation}\nwhere $(\\theta_1, \\theta_2) \\in [0, 2\\pi)^2$.\nWe expect the symmetries to correspond to the subgroup $\\qty{g_{\\theta_1, \\theta_2}|\\theta_1 = \\theta_2}\\cong SO(2)$.\nThis prediction is borne out in \\Fig{LHCO_i}.\n\n\nWe can also study the training dynamics of the SymmetryGAN.\nMore information about this procedure is given in \\App{symmetry_discovery_map}, but the idea is to find a symmetry discovery map $\\Omega: SO(2)\\times SO(2) \\to SO(2)$, $(\\theta_{1i}, \\theta_{2i})\\mapsto\\theta_{f},$ that describes how the initial parameters map to the learned ones.\nWe propose the map given by\n\\begin{equation}\n\\begin{split}\n \\Omega(\\theta_1, \\theta_2) &= \\begin{cases}\\frac{\\theta_1 + \\theta_2}{2}& |\\theta_1 - \\theta_2| < \\pi\\,,\\\\\n \\frac{\\theta_1 + \\theta_2}{2} - \\pi & |\\theta_1 - \\theta_2| > \\pi\\,,\n \\end{cases}\n \\end{split}\n\\end{equation}\nwhere there is only one output angle even though the output space is two-dimensional.\nThis map posits that the final angle will bisect the smaller angle between $\\theta_1$ and $\\theta_2$, which is validated by the empirical results shown in \\Fig{LHCO_ii}.\n\n\n\\subsection{$SO(4)$ Search Space}\n\nWe now turn to the four-dimensional rotation group.\n$SO(4)$ is a six parameter group, specified by $\\qty{\\theta_i}_{i=1}^6,$ which parametrize the six independent rotations:\n\\begin{align}\nR_1\\colon 
p_{1x}&\\leadsto p_{1y},&\nR_2\\colon p_{1x}&\\leadsto p_{2x},\\\\\nR_3\\colon p_{1x}&\\leadsto p_{2y},&\nR_4\\colon p_{1y}&\\leadsto p_{2x},\\\\\nR_5\\colon p_{1y}&\\leadsto p_{2y},&\nR_6\\colon p_{2x}&\\leadsto p_{2y},\n\\end{align}\nwhere the notation $R: a \\leadsto b$ means\n\\begin{align}\n R(a) &= a\\cos\\theta + b\\sin\\theta,\\\\\n R(b) &= b\\cos\\theta - a\\sin\\theta.\n\\end{align}\nOne way to describe a generic generator $g_{\\vb*\\theta}$ is\nby\n\\begin{equation}\n g_{\\vb*\\theta}(X) = R_1 R_2 R_3 R_4 R_5 R_6 \\, X.\n\\end{equation}\n\n\\begin{figure*}[p]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/LHCO_Comparison1.pdf} \\label{fig:LHCOComparison_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/LHCO_Comparison2.pdf}\n \\label{fig:LHCOComparison_ii}}\n \\caption{\n %\n Two dimensional projection of (i) the original LHC Olympics dijet dataset and (ii) its transformation by one of the generators discovered by the SymmetryGAN.\n %\n Here, we plot the momenta of the two leading jets in the transverse plane.}\n \\label{fig:LHCO_Comparison}\n\\end{figure*}\n\nIt is not easy to visualize a six-dimensional space, and the symmetries discovered by SymmetryGAN\ndo not lie in any single $2$-plane or even $3$-plane.\nTherefore, we need alternative methods to verify that the maps discovered by the neural network are indeed symmetries.\n\n\nOne verification strategy is to visually inspect $X$ and $g_{\\vb*\\theta}(X)$ to see if the spectra look the same.\nIn \\Fig{LHCO_Comparison}, we show a projection of the distribution of $X$ and one instance of $g_{\\vb*\\theta}(X)$, which suggests that the found $g_{\\vb*\\theta}$ is indeed a symmetry.\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/KL_rand1.pdf} \\label{fig:KLrand_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/KL_rand2.pdf}\n \\label{fig:KLrand_ii}}\n %\n \\caption{\n %\n %\nAn example of the jet azimuthal angle distributions, (i)$\\widetilde{\\phi}_{1}$ and (ii)$\\widetilde{\\phi}_{2}$, of the LHC Olympics dijet data rotated by a randomly selected rotation in $SO(4)$.\nThe distribution is not uniform, so a random rotation is not a symmetry.}\n \\label{fig:KL_rand}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/KL_symm1.pdf} \\label{fig:KLsymm_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/KL_symm2.pdf}\n \\label{fig:KLsymm_ii}}\n %\n \\caption{\n %\n %\n The same as \\Fig{KL_rand}, but for a symmetry in $SO(4)$.\n %\n The distribution is uniform, so this rotation is a candidate symmetry.\n }\n \\label{fig:KL_symm}\n\\end{figure*}\n\nAnother verification strategy is to test if the discovered symmetries preserve special projections of the dataset.\nEach of the two jets has an azimuthal angle $\\phi_{j} = \\text{arctan2}(p_{jy}, p_{jx})$ for $j = 1,2$ that is uniformly distributed over $[-\\pi, \\pi)$, where $\\text{arctan2}$ is the two argument arctangent function, which returns the principal value of the polar angle $\\theta\\in (-\\pi, \\pi]$.\nSymbolically, the data can be represented as\n\\begin{equation}\n X = \\mqty[p_{1x}\\\\ p_{1y}\\\\p_{2x}\\\\p_{2y}] = \\mqty[p_{1T}\\cos\\phi_{i1}\\\\p_{1T}\\sin\\phi_{1}\\\\p_{2T}\\cos\\phi_{2}\\\\p_{2T}\\sin\\phi_{2}],\\qquad \\phi_{j}\\sim\\mathcal{U}[-\\pi, \\pi)\\,,\n\\end{equation}\nwhere $p_{jT}$ is 
the transverse momentum of each jet (which is approximately the same for both jets since they are roughly back to back).\nIf one applies an arbitrary rotation, there is no reason the new azimuthal angles,\n\\begin{align}\n \\widetilde{\\phi}_{1} &= \\text{arctan2}(g_{\\vb*\\theta}(X)_2, g_{\\vb*\\theta}(X)_1), \\\\\n \\widetilde{\\phi}_{2} &= \\text{arctan2}(g_{\\vb*\\theta}(X)_4, g_{\\vb*\\theta}(X)_3),\n\\end{align}\nshould be uniformly distributed anymore, as \\Fig{KL_rand} demonstrates.\nIf one of the symmetry rotations discovered by the neural network is applied to $X$, however, $\\widetilde{\\phi}_{j}$ must remain uniformly distributed, as shown in \\Fig{KL_symm}.\n\n\n\\begin{figure*}[t]\n \\centering\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/KL_Div1.pdf} \\label{fig:KLdiv_i}}\n $\\quad$\n \\subfloat[]{\\includegraphics[width=0.45\\textwidth]{Images\/Plots\/KL_DIv2.pdf}\n \\label{fig:KLdiv_ii}}\n %\n \\caption{\n %\n %\n The KL divergence between the jet azimuthal angle distribution before and after a random rotation or a symmetry rotation, for the (i) leading jet and (ii) subleading jet.\n %\n The KL divergence between two samples drawn from $\\mathcal U[-\\pi, \\pi)$ is shown for comparison.\n }\n \\label{fig:KL_div}\n\\end{figure*}\n\nThis effect can be quantified by computing the Kullback-Leibler (KL) divergence of the two $\\widetilde{\\phi}_{j}$ distributions against that of $\\phi_{j}$.\nIn \\Fig{KL_div}, we see that the KL divergence of the symmetries is much smaller than the KL divergence of the random rotations.\nAlso plotted on the same figure is the KL divergence of two samples drawn from $\\mathcal{U}[-\\pi, \\pi)$, which represents the irreducible effect from considering a finite dataset.\nThis would be the KL divergence of $\\widetilde{\\phi}_{j}$ obtained from applying an ideal analytic symmetry to $X$, against $\\phi_{j}$.\nIt is instructive to consider the means of the histograms.\nThe KL divergence of randomly selected elements of $SO(4)$ has means of $0.37$ ($0.34$) for the leading (subleading) jet, while the KL divergence of symmetries in $SO(4)$ has respective means $0.0058$ ($0.0090$).\nThe irreducible statistical noise has a mean of $0.0010$.\n\nClearly, the symmetries reconstruct the distribution much better than randomly selected elements of $SO(4)$, and are in fact quite close to the irreducible KL divergence due to finite sample size.\nNote that the x-axis of \\Fig{KL_div} is logarithmic, which magnifies the region near zero, so the difference between the symmetry histogram and the statistical noise histogram is smaller than it might appear.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{Images\/Plots\/LHCOLosses.pdf}\n \\caption{\n %\n %\n The loss of random rotations in $SO(4)$ compared to the loss of rotations learned by SymmetryGAN, overlaid with the analytic loss of a symmetry, $2\\log2$.}\n \\label{fig:LHCOLosses}\n\\end{figure}\n\nA final method to independently verify that the rotations SymmetryGAN finds are symmetries of the LHC Olympics data is by computing the loss function.\nAs discussed at the end \\Sec{1d_example}, when $g$ represents a symmetry and $d$ is an ideal discriminator, the binary cross entropy loss is $2\\log2$.\nBy training a \\textit{post hoc} classifier, we can therefore compute the loss of a specific symmetry generator.%\n\\footnote{In principle, one could look at the value of the loss after training the discriminator. 
In practice, a post-hoc classifier yields more reliable behavior; see related discussion in \\Ref{Diefenbacher:2020rna}.}\nIn \\Fig{LHCOLosses}, we compare the loss of randomly sampled rotations from $SO(4)$ to the loss of rotations discovered by SymmetryGAN.\nThe latter is quite close to the analytic optimum, $2\\log 2$.\n\n\nFrom these tests, we conclude that SymmetryGAN has discovered symmetries of the LHC Olympics dataset.\nAs discussed further in \\Sec{inference} below, though, discovering symmetries is different from inferring the structure of found subgroup from the six-dimensional search space.\nMimicking the study from \\Fig{MSE_i}, we can study its $\\mathbb{Z}_2$ subgroups, through the loss function in \\Eq{cyclicloss} with $q=2$.\nThe backbone of this subgroup is expected to be the reflections $p_{1k}\\leftrightarrow p_{2k}$ (because both jets have approximately the same momenta) and $p_{jx}\\leftrightarrow p_{jy}$ (because $\\sin\\phi$ and $\\cos\\phi$ look the same upon drawing sufficiently many samples of $\\phi$).\nThe learning process reveals a much larger group, though.\nThere is in fact a continuous group of $\\mathbb{Z}_2$ symmetries, which combine an overall azimuthal rotation and one of the aforementioned backbone reflections.\nIn retrospect, these $\\mathbb{Z}_2$ symmetries should have been expected, since they are compositions of well-known symmetry transformations.\nThis example highlights the need to go beyond symmetry discovery and towards symmetry inference.\n\n\n\\section{Towards Symmetry Inference}\n\\label{sec:inference}\n\nThe examples in \\Secs{results}{hepexample} highlight the potential power of SymmetryGAN for discovering symmetries using deep learning.\nDespite the many maps discovered by the neural network, though, it is difficult to infer, for example, the precise Lie subgroup of $SO(4)$ respected by the LHC Olympics data.\nThis highlights a limitation of this approach and the distinction between ``symmetry discovery'' and ``symmetry inference''.\nThough SymmetryGAN can identify points on the Lie group manifold, there is no simple way to infer precisely which Lie group has been discovered.\nIn this section, we mention three potential methods to assist in the process of symmetry inference.\n\n\\subsection{Finding Discrete Subgroups}\n\\label{sec:discrete_subgroups}\n\nOne way to better understand the structure of the learned symmetries is to look for discrete subgroups.\nAs already shown in \\Fig{MSE} and mentioned in the particle physics case, we can identify discrete $\\mathbb{Z}_q$ symmetry transformations by augmenting the loss with \\Eq{cyclicloss}.\nBy forcing the symmetries to take a particular form, we can infer the presence (or absence) of such a subgroup.\n\n\nIt is interesting to consider possible modifications to \\Eq{cyclicloss} to handle non-Abelian discrete symmetries.\nThe goal would be to learn multiple symmetries simultaneously that satisfy known group theoretic relations.\nFor example in the Abelian case, a loss term like \n\\begin{equation}\n\\label{eq:abeliansymm}\n -\\frac\\alpha N\\sum_{x\\in\\{x_i\\}_{i=1}^N}(g_1 \\circ g_2(x) - g_2 \\circ g_1(x))^2\n\\end{equation}\ncould be used to identify any two symmetries $g_1$ and $g_2$ that commute.\nWe leave a study of these possibilities to future work.\n\n\n\\subsection{Group Composition}\n\nBy running SymmetryGAN a few times, one may discover a few points on the symmetry manifold.\nBy composing these discovered symmetries together, one can rapidly increase the number of known points 
on the manifold because the discovered symmetries are elements of a group, by construction, so their composition is still an element of the group.\n\n\nThis notion is quite powerful.\nThe ergodicity of the orbits of group elements is a richly studied and complex area of mathematics (see e.g.~\\Ref{bams\/1183548783}).\nMany groups of physical interest are locally connected, compact, and have additional structure.\nIn that context, it is likely that the full symmetry group is generated by $\\qty{r_1,\\dots, r_\\nu}$, where $r_i$ is randomly drawn from the group and $\\nu$ is the product of the representation dimension and the number of connected components.\n\nFor example, consider the group $U(1)\\cong SO(2)$, which has $\\nu=1$.\nAlmost any element of $U(1), e^{i\\theta}$, has rotation angle which is an irrational multiple of $\\pi, \\frac\\theta\\pi\\in\\mathbb{R}\\setminus\\mathbb{Q}$.\nWe can therefore approximate any element $e^{i\\phi}\\in U(1)$ by repeated applications of $e^{i\\theta}$:\n\\begin{equation}\n \\forall e^{i\\phi}\\in U(1)\\;\\forall\\epsilon > 0\\;\\exists n\\in \\mathbb{N}\\; \\norm{e^{i\\phi} - e^{in\\theta}} < \\epsilon\\,.\n\\end{equation}\nIn other words, the subgroup generated by $e^{i\\theta}$ is dense in $U(1)$.\n\n\nIn practice, the symmetries discovered by SymmetryGAN will be not exact due to numerical considerations.\nSince the network learns approximate symmetries with some associated error, each composition compounds this error.\nThus, there are practical limits on the number of compositions that can be carried out with numeric data.\n\n\n\\subsection{The Symmetry Discovery Map}\n\\label{sec:symmetry_discovery_map}\n\nSo far, we have initialized a SymmetryGAN with uniformly distributed values of certain parameters, and then trained it to return the values of those parameters that constitute a symmetry.\nWe can define a \\textit{symmetry discovery map}, which connects the initialized parameters of $g$ to the parameters of the learned function:\n\\begin{equation}\n\\Omega: \\mathbb{R}^k \\to \\mathbb{R}^k,\n\\end{equation}\nwhere $k$ is the dimension of the parameter space.\nThis is a powerful object not only for characterizing the learning dynamics but also to assist in the process of symmetry discovery and inference.\n\n\nThere are at least two distinct reasons why knowledge of this symmetry discovery map is useful.\nFirst, the map is of theoretical interest.\nWe discussed in \\Sec{1d_example} the importance of understanding the topology of the symmetry group. 
\nThe symmetry discovery map induces a \\emph{deformation retract} from the search space to the symmetry space.\nEvery deformation retract is a homotopy equivalence, and by the Eilenberg-Steenrod axiom of homotopy equivalence~\\cite{Eilenberg117}, the homology groups of the symmetry group can be constructed from the homology groups of the search space.\nEven in low dimensions, the topology of the symmetry group can be non-trivial (cf.~\\Sec{2d_example} for an example in 2D).\nThe topology of $GL_n(\\mathbb{R})$, however, has been studied for over half a century, and the homotopy and homology groups of several non-trivial subgroups of $\\operatorname{Aff}_n(\\mathbb{R})$ have been fully determined~\\cite{SCHLICHTING20171}.\nHence, if the symmetry discovery map were known, one could leverage the full scope of algebraic topology and the known results for the linear groups to understand the topology of the symmetry group.\n\n\nSecond, this map has practical value.\nEvery time a SymmetryGAN is trained, it must relearn how to move the initialized values of $g$ to the final values.\nIntuitively, nearby initial values should map to nearby final values, so learning the symmetry discovery map should enable a more efficient exploration of the symmetry group.\nIn practice, this can be accomplished by augmenting the loss function in \\Eq{numericloss}.\nLet $g(x|c)$ be the symmetry generator, with the parameters $c$ made explicit.\nLet $\\Omega(c)$ be a neural network representing the symmetry discovery map.\nSampling parameters from the space of parameters $\\mathbb{R}^k$ and data points from $X$, we can optimize the following loss:\n\\begin{align}\n\\label{eq:numericloss_parameters}\n L[\\Omega,d]&=-\\sum_{c \\in \\{c_a\\}} \\sum_{x\\in\\{x_i\\}} \\Big[\\log\\big(d(x)\\big) \\\\\n & \\nonumber \\qquad \\qquad \\qquad \\qquad + \\log\\big(1-d(g(x|\\Omega(c)))\\big)\\Big]\\,.\n\\end{align}\nNote that this loss is now a functional of $\\Omega$ instead of $g$.\nIf $\\Omega(c)$ can be initialized to the identity function, then gradient descent acting on $\\Omega(c)$ is (asymptotically) the same as gradient descent acting on the original parameters.\nThus, as long as $\\Omega(c)$ has a sufficiently flexible parametrization, the learned $\\Omega(c)$ will be a good approximation to the symmetry discovery map learned by the original SymmetryGAN.\n\n\nWe defer a full exploration of the symmetry discovery map to future work.\nPreliminary analytic and numerical studies of the symmetry discovery map are shown in \\App{symmetry_discovery_map}.\n\n\n\\section{Conclusions and Outlook}\n\\label{sec:conclusions}\n\n\nIn this paper, we provided a rigorous statistical definition of the term ``symmetry'' in the context of probability densities.\nThis is highly relevant for the field of high energy collider physics where the key objects of study are scattering cross sections.\nWe proposed SymmetryGAN as a novel, flexible, and fully differentiable deep learning approach to symmetry discovery.\nSymmetryGAN showed promising results when applied to Gaussian datasets as well as to dijet events from the LHC, conforming with our analytic predictions and providing new insights in some cases.\n\n\nA key takeaway lesson is that the symmetry of a probability density only makes sense when compared to an inertial density.\nFor our studies, we focused exclusively on the inertial density corresponding to the uniform distribution on (an open subset of) $\\mathbb{R}^n$, since Euclidean symmetries are ubiquitous in physics.\nFurthermore, we 
only considered area preserving linear maps on $\\mathbb{R}^n$, a simple yet rich group of symmetries that maintain this inertial density.\nMoving forward, there are many opportunities to further develop the concepts introduced in this paper.\nAs a straightforward extension, non-linear equiareal maps over $\\mathbb{R}^n$ could be added to the linear parameterizations we explored, as could Lorentz-like symmetries.\nIn more complex cases where there is no obvious notion of an inertial density, one could study the relative symmetries between two different datasets.\nIt would also be interesting to discover approximate symmetries and rigorously quantify the degree of symmetry breaking.\nThis is relevant in cases where the complete symmetry group is obscured by experimental acceptances and efficiencies.\n\n\nA key open question is how to go beyond symmetry discovery and towards symmetry inference.\nWe showed how one can introduce loss function modifications to emphasize the discovery of discrete subgroups.\nOne could imagine extending this strategy to continuous subgroups to gain a better handle on group theoretic structures.\nThe symmetry discovery map is a potentially powerful tool for symmetry inference, since it in principle allows the entire symmetry group to be discovered in a single training.\nIn practice, though, we found learning the symmetry discovery map to be particularly challenging.\nWe hope future algorithmic and implementation developments will enable more effective strategies for symmetry discovery and inference, in particle physics and beyond.\n\nThe code for this paper can be found in this \\href{https:\/\/github.com\/hep-lbdl\/symmetrydiscovery}{\\underline{GitHub repository}}.\n\n\\section*{Acknowledgments}\n\nKD and BN are supported by the U.S. Department of Energy, Office of Science under contract DE-AC02-05CH11231.\nJT is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, \\url{http:\/\/iaifi.org\/}), and by the U.S. DOE Office of High Energy Physics under grant number DE-SC0012567.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\n\tA quantum-chromodynamic potential model was proposed by us\\cite{GRR}\nin 1982, which not only yielded results for the $c\\bar{c}$ and $b\\bar{b}$\nenergy levels and their spin splittings in good agreement with the existing\nexperimental data but its predictions were also confirmed by\nlater experiments at the Cornell Electron Storage Ring\\cite{CESR}.\nAn essential feature of our model was the inclusion of the\none-loop radiative corrections to the quark-antiquark potential, which\nhad been derived by us in an earlier investigation\\cite{GR}. Subsequently,\nthe model was improved by using relativisitic kinematics\\cite{GRR2}\nand a nonsingular form of the quarkonium potential\\cite{GRS}.\nAs shown by us, in addition to the energy levels of $c\\bar{c}$ and $b\\bar{b}$,\nour model also yields results in good agreement with the experimental data\nfor the leptonic and E1 transition widths. 
It was further shown by\nZhang, Sebastian, and Grotch\\cite{zhang} that the M1 transition\nwidths for $c\\bar{c}$ and $b\\bar{b}$ obtained from our model\nare in better agreement with the experimental data than those predicted\nusing other potential models.\n\n\tRecently the mass of the ${}^1P_1$ state of charmonium has\nbeen determined by the E760 collaboration\\cite{E760} in $p\\bar{p}$\nannihilations at Fermilab, and the splitting between the center of\ngravity of the ${}^3P_J$ states and the ${}^1P_1$ state, denoted as\n$\\Delta M_P$, is found to be approximately $-0.9$~MeV. This experimental\nresult has created much interest since it provides a new test for the\npotential models for heavy quarkonia.\n\n\tIf the spin-dependent forces in the quarkonium potential could be\ntreated perturbatively, the $\\Delta M_P$ splitting would arise solely\nfrom the spin-spin (color hyperfine) interaction. However, the spin-dependent\nforces are known to be quite large and, as observed by Lichtenberg and Potting\n\\cite{lichten}, the contributions of the spin-orbit and tensor interactions to\n$\\Delta M_P$ cannot be ignored in a nonperturbative treatment. We shall\nanalyze this complex situation with the use of our model which\navoids the use of an illegitimate perturbative treatment, and provide an\nexplanation for the observed splittings of the charmonium P~states.\n\n\tSeveral authors\\cite{halzen,chen,grotch} have recently shown that\na theoretical value for $\\Delta M_P$ in close agreement with the\nexperimental value can be readily obtained from the spin-spin\ninteraction terms in the quarkonium potential. However,\nsince they have employed an illegitimate perturbative treatment, the\nsignificance of this simple interpretation remains an open question.\n\n\tOnly a quarkonium model which is in good overall agreement with\nthe experimental data can be taken seriously. Our model for heavy quarkonia\nsatisfies this requirement.\n\n\\section{$\\symbol{'143}\\bar{\\symbol{'143}}$ SPECTRUM}\n\n\tOur model is based on the Hamiltonian\n\\begin{equation}\nH=H_0+V_p+V_c , \\label{hamiltonian}\n\\end{equation}\nwhere\n\\begin{equation}\nH_0=2(m^2+{\\bf p}^2)^{1\/2}\n\\end{equation}\nis the relativistic kinetic energy term, and $V_p$ and $V_c$ are nonsingular\nquasistatic perturbative and confining potentials, which are\ngiven in the Appendix.\nWe found a trial wave function introduced by Jacobs, Olsson, and Suchyta\n\\cite{jacobs} particularly suitable for obtaining the quarkonium energy levels\nand wave functions.\n\n\tOur results for the energy-level splittings as well as the individual\nenergy levels of $c\\bar{c}$ are given in Tables~\\ref{splittings}\nand~\\ref{levels}.\nFor experimental data we have generally relied on the Particle Data Group\n\\cite{PDG}, but for the $\\eta_c$ state we have used the new result announced by\nthe E760 collaboration \\cite{appel}. The two sets\nof theoretical results in these tables correspond to the scalar-exchange and\nthe scalar-vector-exchange forms of the confining\npotential, given by\n\\begin{mathletters}\n\\begin{equation}\nV_c=V_S ,\n\\end{equation}\nand\n\\begin{equation}\nV_c=(1-B)V_S+BV_V ,\n\\label{vconfine}\n\\end{equation}\n\\end{mathletters}\nrespectively. The results obtained with the scalar-exchange confining\npotential are\nunsatisfactory, while the scalar-vector-exchange results\nare in surprisingly close agreement with the\nexperimental data, including the observed mass of the ${}^1P_1$ state\nand the $\\Delta M_P$ splitting. 
The scalar-vector mixing parameter $B$\nis found to be about $\\frac{1}{4}$.\n\n\tIn Table~\\ref{mp}, we display the contributions to $\\Delta M_P$ from\nthe various types of terms in the Hamiltonian (\\ref{hamiltonian}) with\nthe confining potential (\\ref{vconfine}).\nThe table shows comparable contributions to $\\Delta M_P$\nfrom several sources, which brings out the complexity of this splitting when\nspin-dependent potential terms are included in the unperturbed Hamiltonian.\nThe $\\Delta M_P$ splitting, therefore, does not provide a direct test of the\nspin-spin interaction in heavy quarkonia.\n\n\tIn Tables~IV and~V, we give the results for the leptonic and E1 transition\nwidths corresponding to the scalar-vector-exchange confining potential by using\nthe formulae\n\\begin{equation}\n\\Gamma_{ee}({}^3S_1 \\to e^+e^-) = \\frac{16\\pi\\alpha^2\ne_Q^2}{M^2(Q\\overline{Q})}\n\t\\left| \\Psi(0)\\right|^2 \\left( 1-\\frac{16\\alpha_s}{3\\pi}\\right),\n\\end{equation}\nand\n\\begin{eqnarray}\n\\Gamma_{E1}({}^3S_1 \\to {}^3P_J) &=& \\frac{4}{9}\\frac{2J+1}{3}\\alpha e_Q^2\n\tk_J^3 |r_{fi}|^2,\\nonumber \\\\\n\\Gamma_{E1}({}^3P_J \\to {}^3S_1) &=&\\frac{4}{9}\\alpha e_Q^2\n\tk_J^3 |r_{fi}|^2,\\\\\n\\Gamma_{E1}({}^1P_1 \\to {}^1S_0) &=&\\frac{4}{9}\\alpha e_Q^2\n\tk_J^3 |r_{fi}|^2.\\nonumber\n\\end{eqnarray}\nThe photon energies for the E1 transition widths have been\nobtained from the energy difference of the initial and the\nfinal $c\\bar{c}$ states by taking into account the recoil correction.\nOur results are\nin good agreement with the available experimental data \\cite{PDG}, and\nour prediction for the E1 transition width of $1{}^1P_1\\to 1{}^1S_0$ is\n341.8 keV.\n\n\\section{CONCLUSION}\n\tWe conclude with explanatory remarks concerning some features of our\nquarkonium potential.\n\n\\subsection{Renormalization scheme}\n\n\tWe have used the Gupta-Radford (GR) renormalization scheme \\cite{GR2}\nfor the one-loop radiative corrections to the quarkonium potential\nrather than the modified minimal-subtraction ($\\overline{\\rm MS}$)\nscheme. The GR scheme is a simplified momentum-space subtraction\nscheme, and the parameter $\\mu$ can be interpreted as representing the\nmomentum scale of the physical process.\nThis scheme also has the desirable feature that\nit satisfies the decoupling theorem\\cite{appelquist}. On the other\nhand, in the $\\overline{\\rm MS}$ scheme $\\mu$ appears as a mathematical\nparameter, and in this scheme decoupling-theorem-violating terms are\nsimply ignored.\n\n\tThe one-loop radiative corrections in the GR scheme can be converted\ninto those in the $\\overline{\\rm MS}$ scheme by means of the relation\n\\cite{GR2}\n\\begin{equation}\n\\alpha_s=\\bar{\\alpha_s}\\left[ 1+\\frac{\\bar{\\alpha_s}}{4\\pi}\\left(\n\t\\frac{49}{3}-\\frac{10}{9}n_l+\\frac{2}{3}\\sum_{n_h}\n\t\\ln\\frac{m^2}{\\mu^2} \\right)\\right]\\, , \\label{renorm}\n\\end{equation}\nwhere $\\bar{\\alpha_s}$ refers to the $\\overline{\\rm MS}$ scheme, and $n_l$\nand $n_h$ are the numbers of light and heavy quark flavors. 
If we drop\nthe decoupling-theorem-violating terms that appear in the $\\overline{\\rm MS}$\nscheme, we can put $n_l=n_f$ and $n_h=0$, and (\\ref{renorm}) reduces to\n\\begin{equation}\n\\alpha_s=\\bar{\\alpha_s}\\left[ 1+\\frac{\\bar{\\alpha_s}}{4\\pi}\\left(\n\t\\frac{49}{3}-\\frac{10}{9}n_f \\right)\\right]\\, .\n\\end{equation}\n\n\\subsection{Quasistatic potential}\n\n\tIn an earlier investigation \\cite{GRR2}, we arrived at the surprising\nconclusion that while the quasistatic form of the quarkonium potential yields\nresults in good agreement with the experimental data, this is not the case\nfor the momentum-dependent form. This conclusion has also been confirmed by\nthe\nrecent investigations of Gara {\\it et al.} \\cite{gara} and Lucha {\\it et al.}\n\\cite{lucha}.\n\n\tIt appears to us that the success of the quasistatic potential is related\nto the phenomenon of quark confinement. Since a rigorous treatment of\nquark confinement does not exist at this time, we shall only offer a\nplausible argument. It was argued earlier \\cite{GR3} with the use\nof a renormalization-group-improved quantum-chromodynamic treatment\nthat quark confinement can be understood as a consequence of the fact\nthat quarks and antiquarks are unable to exchange low-momentum gluons.\nMoreover, since for the quark-antiquark scattering in the center-of-mass\nsystem\n\\begin{equation}\n{\\bf p}^2=\\frac{1}{4}{\\bf k}^2+\\frac{1}{4}{\\bf s}^2,\n\\end{equation}\nwhere\n\\begin{equation}\n{\\bf k}={\\bf p'}-{\\bf p},\\qquad {\\bf s}={\\bf p'}+{\\bf p},\n\\end{equation}\nit follows that if ${\\bf k}^2$ is allowed to take only large values,\n${\\bf s}^2$ can be treated as small. This may be regarded as a justification\nfor the quasistatic approximation in which terms of second and higher\norders in ${\\bf s}$ are ignored.\n\n\tOur quarkonium perturbative and confining potentials are not\nonly quasistatic but also nonsingular. In the momentum space, these\npotentials are obtained by first expanding in powers of\n${\\bf p}^2\/(m^2+{\\bf p}^2)$, and then approximating ${\\bf p}^2$ as $\\frac{1}{4}{\\bf k}^2$.\nThe perturbative potential in powers of ${\\bf p}^2\/(m^2+{\\bf p}^2)$ includes, among\nothers, terms of the form\n\\begin{equation}\nf({\\bf p}^2)=\\frac{a+b\\;{\\bf S}_1\\cdot{\\bf S}_2}{m^2+{\\bf p}^2}\\ ,\n\\label{zero}\n\\end{equation}\nwhich becomes in the quasistatic approximation\n\\begin{equation}\nf({\\bf k}^2)=\\frac{a+b\\;{\\bf S}_1\\cdot{\\bf S}_2}{m^2+\\frac{1}{4}{\\bf k}^2}\\ .\n\\label{quasizero}\n\\end{equation}\nIt has been observed by Grotch, Sebastian, and Zhang \\cite{grotch} that while\nthe\ncontribution of $f({\\bf p}^2)$ vanishes for the P states due to the\nvanishing of the wave function at the origin, $f({\\bf k}^2)$ yields a small\nbut nonvanishing contribution for these states. Consequently, for P and higher\nangular-momentum states it would be more accurate to drop\nterms of the form (\\ref{zero}) than to convert them into the approximate form\n(\\ref{quasizero}). 
We agree with the observation of Grotch {\\it et al.}\nAccordingly, in the treatment of states with $l\\neq 0$ we shall\ndrop terms of the form (\\ref{quasizero}) in the momentum-space potentials\nand the corresponding terms of the form\n\\begin{equation}\nf({\\bf r})=\\frac{a+b\\;{\\bf S}_1\\cdot{\\bf S}_2}{\\pi r}e^{-2mr}\n\\label{coordzero}\n\\end{equation}\nin the coordinate-space potentials.\n\n\\subsection{Confining potential}\n\n\tIn our theoretical treatment, our aim has been to avoid phenomenology\nexcept in the choice of the long-range confining potential, which\ncannot be derived sufficiently accurately by any known theoretical\ntechnique. It is indeed remarkable that the results obtained from our\nfield-theoretical perturbative potential supplemented with a phenomenological\nconfining potential are in\nexcellent over-all agreement with the experimental data including the\n$\\Delta M_P$ splitting. It should be noted that we have neglected\neffect of coupling of the energy levels to virtual decay\nchannels and possibly other small effects.\nSuch effects presumably have also been taken into account in\nour phenomenological confining potential.\n\n\\acknowledgments\n\tThis work was supported in part by the U.S. Department of Energy\nunder Grant No. DE-FG02-85ER40209 and the National Science Foundation\nunder Grant No. PHY-93-07980. W.~W.~R. would like to acknowledge conversations\nwith R.~Lewis and G.~A.~Smith regarding the results of the E760\ncollaboration.\n\n\\cleardoublepage\n\\widetext\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjkln b/data_all_eng_slimpj/shuffled/split2/finalzzjkln new file mode 100644 index 0000000000000000000000000000000000000000..de1aa11fd26b946cbf501584a3cb03c77005066d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjkln @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:introduction}\n\nThe 20th century has seen the development of two pillars of modern theoretical physics: quantum field theory (QFT) and general relativity (GR). The standard model of particle physics, which successfully describes the quantum properties of the strong and electroweak interactions, is based on the former framework. However, its na\\\"{i}ve application to the latter yields a QFT of gravity prone to (perturbatively) non-renormalizable ultraviolet (UV) divergences~\\cite{tHooft:1974toh,Goroff:1985th,Goroff:1985sz}.\n\nDespite remarkable progress in a number of directions, the difficulties of formulating a complete theory of quantum gravity have led a considerable portion of the community to shift the focus on general, possibly model-independent lessons that could shed light on the nature of gravity at all scales. Many of these proposals, commonly dubbed ``swampland conjectures''~\\cite{Vafa:2005ui} in the context of string theory, rest on considerations on black-hole physics, which often can lie entirely in the semi-classical regime where one expects low-energy effective field theory (EFT) to be a reliable description.\n\nOn the other hand, some aspects of these conjectures arose from and are tied to string theory and, in particular, its spacetime-supersymmetric incarnations. 
These settings are much better understood, since quantum corrections are often under quantitative control~\\cite{Berglund:2005dm, Gonzalo:2018guu, Marchesano:2019ifh, Blumenhagen:2019vgj, Baume:2019sry, Palti:2020qlc, Marchesano:2021gyv, Lee:2018urn, Lee:2018spm, Lee:2019tst, Lee:2019xtm} and can sometimes even be computed exactly~\\cite{Klaewer:2020lfg, Klaewer:2021vkr}. In certain settings, such as $\\mathcal{N} = 2$ Calabi-Yau flux compactifications, one can even classify general families of models at once~\\cite{Grimm:2018ohb, Grimm:2019wtx, Grimm:2019ixq, Gendler:2020dfp, Grimm:2020ouv, Bastian:2021eom}. In some string models with high-energy supersymmetry breaking, a variety of swampland proposals were verified~\\cite{Basile:2020mpt, Basile:2021mkd}, although a construction of realistic and (meta-) stable vacua is still an open problem~\\cite{Kachru:2003aw, Balasubramanian:2005zx, Koerber:2007xk, Danielsson:2009ff, Moritz:2017xto, Kallosh:2018nrk, Bena:2018fqc, Gautason:2018gln, Cordova:2018dbb, Blaback:2019zig, Hamada:2019ack, Gautason:2019jwq, Cribiori:2019clo, Andriot:2019wrs, Shukla:2019dqd, Shukla:2019akv, Cordova:2019cvf, Andriot:2020wpp, Andriot:2020vlg,Farakos:2020wfc,Gao:2020xqh,Bena:2020qpa,Bena:2020xrh,Crino:2020qwk,Dine:2020vmr,Basiouris:2020jgp,Cribiori:2020use,Hebecker:2020ejb,Andriot:2021rdy,DeLuca:2021pej,Cicoli:2021dhg,Cribiori:2021djm}. At any rate, it is paramount to understand the consequences of general consistency conditions outside of the specific contexts arising from (supersymmetric) string compactifications, although parametric control is most likely going to be problematic~\\cite{Dine:1985he,Bena:2018fqc,Gao:2020xqh,Bena:2020xrh,Dine:2020vmr} due to unknown or uncalculable corrections. To wit, efforts to test swampland proposals have almost entirely focused on supersymmetric settings, and in particular on stringy constructions. In order to shed light on whether they only encode a ``string lamppost principle''~\\cite{Montero:2020icj} or they hold in more generality, it is important to extend their exploration to a broader class of quantum gravity models. On the other hand, the more stringent and well-grounded swampland proposals, such as the ``no global symmetries''~\\cite{Misner:1957mt, Polchinski:2003bq, Banks:2010zn, Harlow:2018tng} and weak gravity~\\cite{Arkani-Hamed:2006emk} conjectures, could help guide the search for asymptotic safety, which is currently faced with the daunting prospect of navigating ever-larger theory spaces~\\cite{Benedetti:2010nr, Knorr:2021slg}.\n\nA concrete problem that can be phenomenologically relevant in the near future is constraining the possible values of the Wilson coefficients of a curvature expansion of the gravitational EFT. In particular, one can expect that generic detectable leading-order effects of quantum gravity be encoded in the coefficients of the quadratic curvature invariants, which we shall discuss in detail in the following, or in some specific non-local form factors~\\cite{Belgacem:2017cqo,Knorr:2018kog}. Some efforts in this direction have been made using the S-matrix bootstrap~\\cite{Guerrieri:2021ivu, Caron-Huot:2021rmr}, finding compelling agreement with the parameter space allowed by string theory. 
In the EFT framework, the problem has also been investigated via positivity bounds~\\cite{deRham:2017zjm, deRham:2017xox, deRham:2018qqo, DeRham:2018bgz, Alberte:2019lnd, Alberte:2019xfh, Alberte:2019zhd, Alberte:2020bdz, Alberte:2020jsk, deRham:2021fpu}.\n\nIn this paper we approach this issue from a novel direction, studying the constraints coming from swampland conjectures together with the consistency conditions required by the existence of a UV fixed point of the gravitational renormalization group (RG) flow\\footnote{See also~\\cite{deAlwis:2019aud} for related discussions on the weak gravity conjecture in the context of asymptotically safe gravity.}. The latter scenario has been termed ``asymptotic safety'', in analogy with asymptotic freedom as a particular case. This idea, originally due to Weinberg~\\cite{1976W}, would imply that the Wilson coefficients of the IR effective action stem from a UV-complete RG trajectory, which in turn would be determined by a finite number of relevant deformations from the fixed point. Recently, this area of research has witnessed considerable development of the theoretical framework to investigate RG flows beyond perturbation theory~\\cite{Dupuis:2020fhh} and finding evidence for the existence of the Reuter fixed point in a variety of different approximation schemes~\\cite{Souma:1999at,Lauscher:2002sq,Litim:2003vp,Codello:2006in,Machado:2007ea,Benedetti:2009rx,Dietz:2012ic,Dona:2013qba,Eichhorn:2013xr,Dona:2014pla,Christiansen:2014raa,Falls:2014tra,Christiansen:2015rva,Meibohm:2015twa,Oda:2015sma,Dona:2015tnf,Biemans:2016rvp,Eichhorn:2016esv, Dietz:2016gzg, Falls:2016msz,Gies:2016con,Biemans:2017zca,Christiansen:2017cxa,Hamada:2017rvn,Platania:2017djo,Falls:2017lst,deBrito:2018jxt,Eichhorn:2018nda,Eichhorn:2019yzm,deBrito:2019umw,Knorr:2021slg} (see also~\\cite{Donoghue:2019clr,Bonanno:2020bil} for critical assessments on the status of the field and its open questions). Possible implications of asymptotically safe gravity in astrophysics and cosmology (see~\\cite{Bonanno:2017pkg,Platania:2020lqb} for reviews) have been investigated using simplified models~\\cite{Bonanno:2006eu,Falls:2012nd,torres15,Koch:2015nva,Bonanno:2015fga,Bonanno:2016rpx,Kofinas:2016lcz,Falls:2016wsa,Bonanno:2016dyv,Bonanno:2017gji,Bonanno:2017kta,Bonanno:2017zen,Bonanno:2018gck,Liu:2018hno,Majhi:2018uao,Anagnostopoulos:2018jdq,Adeifeoba:2018ydh,Pawlowski:2018swz,Gubitosi:2018gsl,Platania:2019qvo,Platania:2019kyx,Bonanno:2019ilz,Held:2019xde} and more elaborate computations~\\cite{Bosma:2019aiu}, leading to the tentative conclusions that black-hole and cosmological singularities could be resolved by quantum effects, and that the nearly scale-invariant cosmological power spectrum could arise naturally from a nearly scale-invariant asymptotically safe regime.\n\nIn this paper we shall propose a concrete method to extract the allowed region of IR parameters from the RG flow of asymptotically safe trajectories. \nIn particular, we shall focus on the simpler case of the one-loop approximation in quadratic gravity~\\cite{Codello:2006in,Niedermaier:2009zz, Niedermaier:2010zz} in order to test our construction and provide a proof of principle of our idea. We will show that the IR limit of asymptotically safe trajectories falls inside the region allowed by the weak gravity conjecture and electromagnetic duality, and display a non-trivial intersection with the one allowed by the de Sitter and trans-Planckian censorship bounds.\n\nThe contents of this paper are organized as follows. 
In sect.~\\ref{sec:swampland} we provide a brief overview of swampland conjectures, focusing on the weak gravity conjecture, the de Sitter conjecture and the trans-Planckian censorship conjecture, since they entail the most relevant bounds for our subsequent analysis. In sect.~\\ref{sec:one-loop} we describe in detail the one-loop approxima\\-tion to quadratic gravity that we employ as testing grounds, and our method of extracting the physical IR Wilson coefficients. The resulting effective action turns out to contain non-local form factors. In sect.~\\ref{sec:results} we collect our results: in sect.~\\ref{sec:IR_space} we present the allowed region of parameter space that we found, which spans a plane in the three-dimensional space of dimensionless IR parameters, and in sect.~\\ref{sec:wgc_constraints} and sect.~\\ref{sec:dsc_constraints} we study the constraints stemming from the swampland conjectures discussed in sect.~\\ref{sec:wgc_intro} and sect.~\\ref{sec:dsc_tcc_intro} respectively. In sect.~\\ref{sec:intersections} we discuss and display the intersection of all regions. We conclude with a summary and some perspectives in sect.~\\ref{sec:conclusions}.\n\n\\section{An overview of swampland conjectures}\\label{sec:swampland}\n\nAs we have anticipated in the introduction, swampland conjectures are proposals that ought to rule out EFTs of gravity that do not admit UV completions~\\cite{Vafa:2005ui}. These conjectures are generally motivated in part by purely low-energy considerations, stemming from black-hole physics or inflation, but they also arise from detailed investigations of string-theoretic settings, where generally one has more control over corrections and patterns can be corrobo\\-rated across families of EFTs. The latter approach has led some to describe a ``lamppost'' effect~\\cite{Montero:2020icj}, whereby only settings that are somewhat under control can be investigated and thus it is unclear to which extent the resulting conclusions can be generalized. Furthermore, while at least minimal supersymmetry is generally unbroken in order to retain computational control, recent considerations~\\cite{Cribiori:2021gbf,Castellano:2021yye} point to a tension between low-energy supersymme\\-try breaking\\footnote{Nevertheless, scenarios with high-energy supersymmetry breaking have been investigated in the context of the swampland~\\cite{Bonnefoy:2018tcp,Basile:2020mpt,Basile:2021mkd}. See~\\cite{Mourad:2017rrl, Basile:2021vxh, Mourad:2021lma} for recent reviews.} and the consistency of the EFT. As we have discussed in the preceding section, one of the motivations behind this work is indeed to go beyond the usual settings, seeking lessons for other approaches to quantum gravity.\n\nSince its inception, the swampland program aims to describe the boundary between the landscape of consistent gravitational EFTs with a growing number of proposed criteria\\footnote{See~\\cite{Palti:2019pca, vanBeest:2021lhn} for reviews.}, numerous relations among which~\\cite{Andriot:2020lea, Lanza:2020qmt} point to a deeper underlying principle. In particular, connections between the distance conjecture~\\cite{Ooguri:2006in, Ooguri:2018wrx} and string dualities suggest that an organizing principle for these consistency criteria in the IR be related to a non-perturbative UV formulation of quantum gravity. Furthermore, as we shall see in the following, swampland considerations have provided intriguing clues toward a number of phenomenological puzzles~\\cite{Grana:2021zvf}. 
\n\nIn this paper we shall focus on some conjectures which can provide bounds for the Wilson coefficients of the gravitational EFT. In particular,\n\n\\begin{itemize}\n \\item The \\emph{weak gravity conjecture} (WGC)~\\cite{Arkani-Hamed:2006emk} relates the mass and charge of light states and black holes;\n \\item The \\emph{de Sitter conjecture} (dSC)~\\cite{Obied:2018sgi}, along with its refined versions~\\cite{Ooguri:2018wrx,Garg:2018reu,Andriot:2018mav}, constrains the behavior of scalar potentials and their derivatives, leading to an obstruction to the existence of de Sitter vacua that is $\\mathcal{O}(1)$ in Planck units;\n \\item The \\emph{trans-Planckian censorship conjecture} (TCC)~\\cite{Bedroya:2019snp, Brandenberger:2021pzy} constrains sub-Planckian cosmological perturbations to remain sub-Planckian across inflation, and leads to bounds on the lifetime of metastable de Sitter configurations as well as on the $\\mathcal{O}(1)$ parameter that appears in the dSC, at least in asymptotic regions of field space.\n\\end{itemize}\n\nIn light of the latter consideration, for the purposes of this paper in the following we shall investigate the consequences of the TCC on Starobinsky-like inflationary potentials as a special case of the dSC. Indeed, we shall restrict ourselves to the asymptotic region of field space corresponding to small curvatures in Planck units, where the TCC could provide a dSC bound with a specific $\\mathcal{O}(1)$, as we shall see below.\n\n\\subsection{Weak gravity conjecture and black holes}\\label{sec:wgc_intro}\n\nLet us begin reviewing some features of the (electric) WGC, referring the reader to~\\cite{Palti:2019pca, vanBeest:2021lhn} for more details. In its most basic form, it states that in a consistent EFT of gravity coupled to a $U(1)$ gauge field there exists a state whose mass $m$ is lower than its charge $q$ in Planck units. In four dimensions, the bound for charged particles reads\n\\begin{eqaed}\\label{eq:wgc_basic}\n \\frac{m}{M_\\text{Pl}} \\leq \\mathcal{O}(1) \\, q \\, ,\n\\end{eqaed}\nwhere the model-dependent $\\mathcal{O}(1)$ constant is $\\frac{1}{\\sqrt{2}}$ in Einstein-Maxwell theory.\n\nAmong various motivations and evidence gathered in the literature, the WGC is grounded in black-hole physics from the requirement that charged, extremal black holes be able to decay, lest protected by a symmetry (such as supersymmetry, in the case of BPS-saturated states). The rationale behind this lies in avoiding remnants while keeping the black hole from violating the extremality bound, since a violation of either would presumably lead to consistency issues potentially within the EFT regime~\\cite{Giddings:1992hh, Susskind:1995da, Arkani-Hamed:2006emk}. For charged black holes of mass $M$ and charge $Q$, this requirement translates into\n\\begin{eqaed}\\label{eq:wgc_bhs}\n \\frac{M}{Q} \\geq \\left(\\frac{M}{Q}\\right)_\\text{extremal} \\, ,\n\\end{eqaed}\nwhere the latter is generally an $\\mathcal{O}(1)$ constant\\footnote{For Einstein-Maxwell theory, the extremality bound reads $\\frac{M}{Q} \\geq \\sqrt{2} \\, M_\\text{Pl}$.}. However, higher-curvature corrections could potentially spoil this condition even for macroscopic black holes, provided they are sufficiently close to extremality. 
Writing the leading quartic corrections according to~\\cite{Kats:2006xp}\n\\begin{eqaed}\\label{eq:eft_corr}\n \\Delta \\mathcal{L} = c_1 \\, R^2 + c_2 \\, R_{\\mu \\nu}R^{\\mu \\nu} + c_3 \\, R_{\\mu \\nu \\rho \\sigma} R^{\\mu \\nu \\rho \\sigma} \\, ,\n\\end{eqaed}\nthe resulting bounds for the corresponding Wilson coefficients $c_i$ comprise a family of inequalities for linear combinations of the $c_i$, parametrized by the extremality parameter of the black hole~\\cite{Arkani-Hamed:2006emk,Kats:2006xp,Cheung:2018cwt,Charles:2017dbr,Hamada:2018dde,Charles:2019qqt}. The extremality bound in general now takes the form\n\\begin{eqaed}\\label{eq:wgc_bhs_hd}\n \\frac{M}{Q} \\geq \\left(\\frac{M}{Q}\\right)_\\text{extremal} \\left(1 - \\frac{\\Delta}{M^2} \\right) \\, ,\n\\end{eqaed}\nwhere the linear combination $\\Delta$ of Wilson coefficients is to be non-negative in order for the WGC to hold, and is proportional to the coefficient $c_2 + 4c_3$ of the Weyl-squared term~\\cite{Kats:2006xp,Charles:2019qqt}.\n\nThe leading order contributions to $\\Delta$ comprise not only the Wilson coefficients in the effective action of eq.~\\eqref{eq:eft_corr}, but also the Wilson coefficients that involve the $U(1)$ gauge field. It has been recently shown~\\cite{Cano:2021tfs} that, \\emph{assuming invariance under electromagnetic duality}, higher-curvature corrections up to sextic order can be written in terms of purely gravitational terms, up to field redefinitions. Let us stress that our aim is to intersect swampland bounds with the constraints provided by asymptotic safety, and the technical obstacles to compute its consequences for quartic electromagnetic couplings in gravity, which would entail involved FRG computations along the lines of~\\cite{Knorr:2021slg}, compel us to focus on the duality-invariant scenario of~\\cite{Cano:2021tfs}, which at any rate appears intriguing on its own\\footnote{Another instance of the interplay between duality and the WGC has been studied in~\\cite{Loges:2020trf}.}. Moreover, the electromagnetic couplings do not run under the RG flow at one loop because of tree-level duality~\\cite{Charles:2017dbr,Charles:2019qqt}. This has been used to argue that the low-energy behavior of the correction $\\Delta$ to the extremality bound is dominated by the Weyl anomaly coefficients that drive the running of the $c_i$, and in particular of the Weyl-squared Wilson coefficient $c_2 + 4c_3$, as we shall see in the following. However, in the present case we would like to constrain the physical parameters built out of the Wilson coefficients and the Planck scale, in order to compare the resulting bounds with the constraints of asymptotic safety. We shall describe the procedure in detail in the following section.\n\n\\subsection{de Sitter and trans-Planckian censorship}\\label{sec:dsc_tcc_intro}\n\nLet us now move on to discuss the dSC and the TCC. The former quantifies an obstruction to the existence of de Sitter vacua, in the form of a bound for the (field-space gradient of the) scalar potential $V(\\phi)$. Indeed, since in this setting de Sitter vacua would arise as positive-energy critical points of $V$, a natural bound that would prevent these takes the form\n\\begin{eqaed}\\label{eq:dsc_basic}\n M_\\text{Pl}\\, \\abs{\\nabla V} \\geq c V \n\\end{eqaed}\nfor field ranges\n\\begin{eqaed}\\label{eq:field_range}\n \\Delta \\phi \\lesssim f \\, M_\\text{Pl} \\, ,\n\\end{eqaed}\nwhere $c \\, , \\, f > 0$ are (\\emph{a priori} model-dependent) constants. 
Within the EFT framework, their natural values are $\\mathcal{O}(1)$, indicating that the obstruction is tied to the expected cutoff of the EFT. However, one is readily confronted with a tension between the bound in eq.~\\eqref{eq:dsc_basic} and slow-roll inflation~\\cite{Garg:2018reu, Kinney:2018nny}\\footnote{See also~\\cite{Rudelius:2019cfh,Rudelius:2021oaz,Chojnacki:2021fag,Jonas:2021xkx} for discussions on eternal inflation and the swampland.}, leading to refinements involving the Hessian matrix of the potential~\\cite{Garg:2018reu,Ooguri:2018wrx}. In particular, whenever the bound of eq.~\\eqref{eq:dsc_basic} would be violated, the matrix\n\\begin{eqaed}\\label{eq:ref_dsc}\n M_\\text{Pl}^2\\, \\text{Hess}(V) + \\, c' \\, V\n\\end{eqaed}\nwould be negative semidefinite, with $c' > 0$ another $\\mathcal{O}(1)$ constant. Further refinements were proposed in~\\cite{Andriot:2018mav}, but in our setting we shall find that the first bound of eq.~\\eqref{eq:dsc_basic} is sufficient, since it encompasses eq.~\\eqref{eq:ref_dsc} in the regions of parameter space that we are concerned with.\n\nOn the other hand, the TCC surmises that sub-Planckian quantum fluctuations in the early universe at initial time $t_i$ never grow macroscopic at a final time $t_f$. In particular, they ought to never cross the Hubble horizon and freeze. This requirement can be formulated, in terms of the scale factor $a(t)$ and the corresponding Hubble parameter $H$, by~\\cite{Bedroya:2019snp, Brandenberger:2021pzy}\n\\begin{eqaed}\\label{eq:tcc_basic}\n \\frac{a(t_f)}{a(t_i)} \\lesssim \\frac{M_\\text{Pl}}{H(t_f)} \\, ,\n\\end{eqaed}\nagain up to an $\\mathcal{O}(1)$ constant. An intriguing consequence of eq.~\\eqref{eq:tcc_basic} is that de Sitter configurations are not prohibited, but they are metastable with a lifetime $T$ bounded by\n\\begin{eqaed}\\label{eq:tcc_life}\n T \\lesssim \\frac{1}{H} \\, \\log \\frac{M_\\text{Pl}}{H} \\, ,\n\\end{eqaed}\nof the order of a trillion years. This results points to a possible resolution of the coincidence problem in this setting.\n\nThe most relevant consequence of the TCC for the purposes of this paper is that, in the presence of a scalar potential, it leads to a bound of the form of eq.~\\eqref{eq:dsc_basic} with\n\\begin{eqaed}\\label{eq:tcc_c1}\n c = \\frac{2}{\\sqrt{(d-1)(d-2)}}\n\\end{eqaed}\nin $d$ spacetime dimensions, at least in asymptotic regimes of field space. In the present setting, the scalar potential arises from the quadratic curvature terms, and the corresponding asymptotic regime for gravitational field fluctuations is that of small curvatures~\\cite{Lust:2019zwm}. This regime is mapped to a neighbourhood of the origin in the inflaton description. For generic curvatures, one expects that both the purely gravitational description and the inflaton description be modified, including the geometry of field space. Nevertheless, since our current setup does not allow for precise quantitative bounds, we shall henceforth take eq.~\\eqref{eq:tcc_c1} simply as a reference point around which to study the more general bound of eq.~\\eqref{eq:dsc_basic}. Let us also remark that this value appears in a number of related swampland bounds~\\cite{Andriot:2020lea} and is well-behaved under dimensional reduction~\\cite{Rudelius:2021oaz}, and thus it may play a more prominent role in the story. 
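For reference, the numbers entering these statements are easily made explicit. The short sketch below (purely illustrative and not part of any derivation: the Hubble rate, Planck mass and unit conversions appearing in it are standard reference values inserted by hand) evaluates the $d=4$ value of $c$ in eq.~\\eqref{eq:tcc_c1} and the lifetime bound of eq.~\\eqref{eq:tcc_life} for the present-day Hubble rate, reproducing the trillion-year figure quoted above.
\\begin{verbatim}
import numpy as np

d = 4
c_TCC = 2/np.sqrt((d - 1)*(d - 2))      # = 2/sqrt(6) ~ 0.82

hbar   = 6.582e-25                      # GeV s (reference value)
M_Pl   = 1.221e19                       # GeV, non-reduced Planck mass, M_Pl^2 = 1/G
H0     = 70/3.086e19                    # s^-1, for 70 km/s/Mpc (reference value)
H0_GeV = H0*hbar                        # ~ 1.5e-42 GeV

T_bound = np.log(M_Pl/H0_GeV)/H0        # de Sitter lifetime bound, in seconds
print(f"c(d=4) = {c_TCC:.3f}")
print(f"T <~ {T_bound/3.154e7:.2e} yr") # ~ 2e12 yr, of order a trillion years
\\end{verbatim}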
At any rate, it would be interesting to explore the more direct implications of eq.~\\eqref{eq:tcc_basic} studying cosmological solutions or exploring the considerations of~\\cite{Bedroya:2019tba, Brandenberger:2021pzy} within our setup.\n\n\\section{One-loop RG flow in quadratic gravity}\\label{sec:one-loop}\n\nLet us now discuss the concrete setting in which we shall compute the possible values of the Wilson coefficients of the effective gravitational action. In this work we focus on the quadratic truncation\\footnote{Let us remark that here ``quadratic'' refers to the order in the curvatures. In terms of derivatives, the action in eq.~\\eqref{eq:quadratic_lagrangian} is quartic.}, in the one-loop approximation. In Euclidean signature, the Lagrangian pertaining to the full quadratic truncation reads\n\\begin{eqaed}\\label{eq:quadratic_lagrangian}\n \\mathcal{L}=\\frac{2\\Lambda-R}{16\\pi G}+\\frac{1}{2\\lambda}\\,C^2-\\frac{\\omega}{3\\lambda}R^2+\\frac{\\theta}{\\lambda}E \\, ,\n\\end{eqaed}\nwhere $C^2 \\equiv C_{\\mu\\nu\\rho\\sigma}C^{\\mu\\nu\\rho\\sigma}$ is the square of the Weyl tensor, $E$ is the Gauss-Bonnet density and and the Wilson coefficients\n\\begin{eqaed}\\label{eq:wilson_coeffs}\n g_C \\equiv \\frac{1}{2\\lambda} \\, , \\qquad\n g_R \\equiv - \\, \\frac{\\omega}{3\\lambda}\n\\end{eqaed}\ncan be related to the $c_i$ coefficients in eq.~\\eqref{eq:eft_corr}. Indeed, since\n\\begin{eqaed}\\label{eq:weyl_gb}\n C^2&=R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma}-2R_{\\mu\\nu}R^{\\mu\\nu}+\\frac{R^2}{3}\\,,\\\\\n E&=R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma}-4R_{\\mu\\nu}R^{\\mu\\nu}+R^2\\,,\n\\end{eqaed}\nthe $c_i$ are related to the couplings in eq.~\\eqref{eq:quadratic_lagrangian} according to\n\\begin{eqaed}\\label{eq:couplings_relation}\n c_1&=\\frac{1}{6\\lambda}-\\frac{\\omega}{3\\lambda}+\\frac{\\theta}{\\lambda} \\, ,\\\\\n c_2&=-\\frac{1}{\\lambda}-\\frac{4\\theta}{\\lambda} \\,,\\\\\n c_3&=\\frac{1}{2\\lambda}+\\frac{\\theta}{\\lambda} \\,.\n\\end{eqaed}\nWhile this setup holds in general spacetime dimensions $d$, we now restrict to $d = 4$. 
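As a cross-check of eq.~\\eqref{eq:couplings_relation}, the short symbolic sketch below (an illustrative aid, not part of the original computation) expands the quadratic terms of eq.~\\eqref{eq:quadratic_lagrangian}, with $C^2$ and $E$ written as in eq.~\\eqref{eq:weyl_gb}, on the basis $(R^2, R_{\\mu\\nu}R^{\\mu\\nu}, R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma})$ and recovers the coefficients $c_i$ of eq.~\\eqref{eq:eft_corr}. In particular, the combination $c_2+4c_3$ that controls the correction to the extremality bound discussed in sect.~\\ref{sec:wgc_intro} reduces to $1\/\\lambda$, that is, twice the Weyl-squared coupling $g_C$.
\\begin{verbatim}
import sympy as sp

lam, omega, theta = sp.symbols('lambda omega theta', positive=True)
R2, Ric2, Riem2 = sp.symbols('R2 Ric2 Riem2')  # R^2, R_mn R^mn, R_mnrs R^mnrs

C2 = Riem2 - 2*Ric2 + R2/3      # Weyl tensor squared
E  = Riem2 - 4*Ric2 + R2        # Gauss-Bonnet density

quad = sp.expand(C2/(2*lam) - omega/(3*lam)*R2 + theta/lam*E)
c1, c2, c3 = quad.coeff(R2), quad.coeff(Ric2), quad.coeff(Riem2)

assert sp.simplify(c1 - (1/(6*lam) - omega/(3*lam) + theta/lam)) == 0
assert sp.simplify(c2 - (-1/lam - 4*theta/lam)) == 0
assert sp.simplify(c3 - (1/(2*lam) + theta/lam)) == 0
print(sp.simplify(c2 + 4*c3))   # -> 1/lambda = 2 g_C
\\end{verbatim}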
The one-loop beta functions of the couplings of eq.~\\eqref{eq:quadratic_lagrangian} read~\\cite{Codello:2006in,Niedermaier:2009zz, Niedermaier:2010zz}\n\\begin{eqaed}\\label{eq:betas}\n\\beta_{\\widetilde{\\Lambda}} & =-\\,2 \\widetilde{\\Lambda}+\\frac{1}{(4 \\pi)^{2}}\\left[\\frac{1+20 \\omega^{2}}{256 \\pi \\widetilde{G} \\omega^{2}} \\lambda^{2}+\\frac{1+86 \\omega+40 \\omega^{2}}{12 \\omega} \\lambda \\widetilde{\\Lambda}\\right] \\\\\n& \\quad -\\frac{1+10 \\omega^{2}}{64 \\pi^{2} \\omega} \\lambda +\\frac{2 \\widetilde{G}}{\\pi}-\\frac{83+70 \\omega+8 \\omega^{2}}{18 \\pi} \\widetilde{G} \\widetilde{\\Lambda} \\, , \\\\\n\\beta_{\\widetilde{G}} & =2 \\widetilde{G}-\\frac{1}{(4 \\pi)^{2}} \\frac{3+26 \\omega-40 \\omega^{2}}{12 \\omega} \\lambda \\widetilde{G}-\\frac{83+70 \\omega+8 \\omega^{2}}{18 \\pi} \\widetilde{G}^{2} \\, ,\n\\\\\n\\beta_{\\lambda} & =-\\frac{1}{(4 \\pi)^{2}} \\frac{133}{10} \\lambda^{2} \\, , \\\\\n\\beta_{\\omega} & =-\\frac{1}{(4 \\pi)^{2}} \\frac{25+1098 \\,\\omega+200 \\,\\omega^{2}}{60} \\lambda \\, , \\\\\n\\beta_{\\theta} & =\\frac{1}{(4 \\pi)^{2}} \\frac{7\\,(56-171\\, \\theta)}{90} \\lambda\n\\end{eqaed}\nwhere $\\widetilde{G}_k=G_k\\,k^2$ and $\\widetilde{\\Lambda}_k=\\Lambda_k\\,k^{-2}$\nare the dimensionless Newton coupling and cosmological constant respectively, and we have suppressed the subscript $k$ in eq.~\\eqref{eq:betas} for the sake of clarity.\n\nIn our setting, the flow of the (classically) marginal couplings $\\lambda$, $\\omega$ and $\\theta$ is decoupled from that of the Einstein-Hilbert couplings. Out of the UV fixed points\n\\begin{eqaed}\\label{eq:fixed_points}\n \\lambda_\\ast = 0 \\, , \\qquad \\omega_\\ast = \\omega_\\pm \\equiv \\frac{-549 \\pm 7\\sqrt{6049}}{200} \\, , \\qquad \\theta_\\ast = \\frac{56}{171} \\, ,\n\\end{eqaed}\nUV completeness selects $\\omega_\\ast = \\omega_+ \\approx - \\, 0.023$~\\cite{Codello:2008vh}, which the solutions approach as the RG time\\footnote{Note that, since we are interested in the IR regime, our convention for the RG time is such that $t \\to +\\infty$ in the IR.} $t \\equiv \\log\\frac{k_0}{k} \\to -\\infty$, as is apparent from fig.~\\ref{fig:flowlambdaomega}. Let us remark that this fixed point is asymptotically safe, \\emph{i.e.} at least one coupling is not asymptotically free~\\cite{Codello:2008vh, Niedermaier:2009zz, Niedermaier:2010zz, Groh:2011vn}. Indeed, the critical exponents of $G$ and $\\Lambda$ are 2 and 4\\footnote{Although 2 and 4 are not the canonical mass dimensions of $G$ and $\\Lambda$, they are the canonical dimension of the couplings $1\/G$ and $\\Lambda\/G$ that multiply the operators $\\sqrt{-g}$ and $\\sqrt{-g} R$. This occurs because the transformation between these couplings is non-singular, as explained in~\\cite{Percacci:2007sz}. On the other hand, at the Gaussian fixed point the transformation between the couplings is singular, and the dimensions change accordingly.}, while in the IR they become the canonical -2 and 2 respectively~\\cite{Litim:2012vz}. The fact that all couplings are attracted to the fixed point in the UV~\\cite{Codello:2008vh} is instead an artifact of the one-loop approximation. 
Indeed, more sophisticated FRG computations yield a fixed point with a three-dimensional critical surface~\\cite{Benedetti:2009rx}.\n\nThe flow can be solved analytically in terms of the deformations $\\delta \\lambda \\, , \\, \\delta \\omega$ from the UV fixed point, and yields the closed-form solution\n\\begin{eqaed}\\label{eq:marginal_flow}\n \\lambda(t) & = \\frac{\\delta \\lambda}{1 - \\frac{133}{160\\pi^2} \\, \\delta \\lambda \\, t} \\,,\n \\\\\n \\omega(t) & = \\frac{\\omega_- - \\, \\omega_+ \\, \\left(1 + \\frac{\\Delta}{\\delta \\omega}\\right) \\left(1 - \\, \\frac{133}{160\\pi^2} \\, \\delta \\lambda \\, t \\right)^{\\frac{7\\sqrt{6049}}{399}}}{1 - \\, \\left(1 + \\frac{\\Delta}{\\delta \\omega}\\right) \\left(1 - \\, \\frac{133}{160\\pi^2} \\, \\delta \\lambda \\, t \\right)^{\\frac{7\\sqrt{6049}}{399}}} \\,,\n \\\\\n \\theta(t) & = \\frac{56}{171} + \\frac{\\delta \\theta}{1 - \\frac{133}{160\\pi^2} \\, \\delta \\lambda \\, t} \\,,\n\\end{eqaed}\nwhere $\\Delta \\equiv \\omega_+ - \\omega_-$. The vector field generating this flow is displayed in fig.~\\ref{fig:flowlambdaomega} in the $(\\omega,\\lambda)$ subspace and in fig.~\\ref{fig:flowMC} with various 3D plots. Let us observe that the UV completeness of the trajectory requires $\\delta \\lambda > 0$, and that the IR flow ends at $t = t_\\text{IR} \\equiv \\frac{160\\pi^2}{133 \\delta \\lambda}$. However, since $\\delta \\lambda \\ll 1$ this RG time is parametrically large, and one can reliably extract the perturbative IR behavior for the Wilson coefficients. Furthermore, reaching a physical IR regime with $\\widetilde{G} \\to 0^+$ requires that $\\delta \\widetilde{G} \\, , \\, \\delta \\omega < 0$, in order that their flows remain between the UV and IR fixed-point values avoiding runaway. The flow of the relevant deformations from the fixed point is shown in Fig.~\\ref{fig:flow-reldeformations}.\n\n\\begin{figure}\n\\centering\\includegraphics[scale=0.7]{\"Images\/FlowLO\".pdf}\n\\caption{Flow of one-loop quadratic gravity in the $(\\omega,\\lambda)$ subspace. The arrows point toward the IR. In this setting, two non-trivial fixed points are present, and the Reuter fixed point is the one with the smaller absolute value~\\cite{Codello:2008vh}. \\label{fig:flowlambdaomega}}\n\\end{figure}\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[scale=0.4]{\"Images\/Flow3D\".pdf}\\\\\\belowbaseline[0pt]{\n\\includegraphics[scale=0.45]{\"Images\/3Dplot-marg-4\".pdf}}~$\\qquad$\\belowbaseline[0pt]{\\includegraphics[scale=0.47]{\"Images\/3Dplot-marg-2\".pdf}}\n\\caption{Flow of the classically marginal couplings $(\\omega,\\lambda,\\theta)$ in one-loop quadratic gravity. The arrows point towards the IR, and different viewpoints are shown to better visualize the flow. The color coding of the arrows is identical to that of fig.~\\ref{fig:flowlambdaomega}.\\label{fig:flowMC}}\n\\end{figure}\n\n\\begin{figure}\n\\centering\\includegraphics[scale=0.68]{\"Images\/FlowRDomlam\".pdf}\\includegraphics[scale=0.68]{\"Images\/FlowRDglambda\".pdf}\n\\caption{Flow of the relevant deformations from the fixed point. The left panel depicts the flow in the $(\\omega,\\lambda)$ subspace. The right panel depicts the flow in the $(G, \\Lambda)$ subspace where the classically marginal couplings have been set to the UV fixed point. 
The arrows indicate the flow from the UV to the IR.\\label{fig:flow-reldeformations}}\n\\end{figure}\n\nSubstituting the expressions of eq.~\\eqref{eq:marginal_flow} in eq.~\\eqref{eq:betas}, one can then solve the remaining flow equations numerically, varying the initial conditions, or, equivalently, the deformations $\\delta \\omega, \\delta \\lambda, \\delta \\widetilde{G} , \\delta \\widetilde{\\Lambda}$ from the UV fixed point. The RG flow then drives the couplings to the weakly coupled IR, where the running couplings $g_C$ and $g_R$, defined in eq.~\\eqref{eq:wilson_coeffs}, behave logarithmically (linearly in $t$) as $t \\to t_\\text{IR}^-$. This result is consistent with perturbative computations, and the resulting asymptotic expressions read\n\\begin{eqaed}\\label{eq:log_running}\n g_C(t) & \\sim \\frac{1}{2\\delta \\lambda} - \\frac{133}{320\\pi^2} \\, t \\, , \\\\\n g_R(t) & \\sim - \\, \\frac{\\omega_-}{3\\delta \\lambda} + \\frac{133}{480\\pi^2} \\, \\omega_- \\, t \\, .\n\\end{eqaed}\nIn order to extract the physical IR parameters, we shall identify the (square of the) RG scale $k^2$ with the covariant Laplacian\/d'Alembertian $\\Box$. In order to eliminate the arbitrary reference scale $k_0$ that defines the initial condition for the RG flow, one can express every quantity in units of the IR Planck mass\\footnote{Notice that our convention for the Planck mass differs from the more widespread ``reduced'' Planck mass $\\widehat{M}_\\text{Pl}^{-2} = 8\\pi G$.} $M_\\text{Pl}^{-2} = G$. To this end, since $e^{2t} \\, \\widetilde{G}(t)$ tends to a constant $\\widetilde{G}_0$ in the IR, one can evaluate the running Wilson coefficients of eq.~\\eqref{eq:log_running} by replacing $t \\to - \\, \\frac{1}{2} \\log \\widetilde{G}(t)$, so that\n\\begin{eqaed}\\label{eq:t_planck_sub}\n \\log \\frac{M_\\text{Pl}}{k} & = \\log \\frac{M_\\text{Pl}}{k_0} + t \\\\\n & \\sim - \\, \\frac{1}{2} \\log \\widetilde{G}(t) \\\\\n & \\sim - \\, \\frac{1}{2} \\log \\widetilde{G}_0 + t \\, .\n\\end{eqaed}\nSince $\\widetilde{G}_0$ can be extracted from the numerical solution of eqs.~\\eqref{eq:betas}, identifying \n\\begin{eqaed}\\label{eq:form_factor_sub}\n \\log \\frac{k^2}{M_\\text{Pl}^2} \\longrightarrow \\log \\frac{\\Box}{M_\\text{Pl}^2} \\, ,\n\\end{eqaed}\naccording to the preceding considerations, one can reconstruct an effective action of the form\n\\begin{eqaed}\\label{eq:form_factor_eft}\n \\Gamma = \\int d^4x \\sqrt{g} \\, \\left(\\frac{2\\Lambda - R}{16\\pi G} + g_C \\, C^2 + g_R \\, R^2 + b_C \\, C \\log \\frac{\\Box}{M_\\text{Pl}^2} \\, C + b_R \\, R \\log \\frac{\\Box}{M_\\text{Pl}^2} \\, R \\right) \\, , \n\\end{eqaed}\nwhere Weyl-tensor contraction is understood. The appearance of non-local form factors resonates with the considerations in~\\cite{Knorr:2019atm, Draper:2020bop, Draper:2020knh}. While we shall neglect them in the following, the presence of form factors of this type seems largely consistent with preceding results~\\cite{Riegert:1984kt, Deser:1996na, Erdmenger:1996yc, Erdmenger:1997gy, Deser:1999zv, Bautista:2017enk} (see also~\\cite{Donoghue:2015nba} for a discussion of logarithmic form factors). 
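For completeness, at the level of our one-loop approximation the coefficients of the logarithmic form factors can be tentatively read off from eq.~\\eqref{eq:log_running}: trading $t$ for $\\log \\frac{\\Box}{M_\\text{Pl}^2}$ via eq.~\\eqref{eq:t_planck_sub} and eq.~\\eqref{eq:form_factor_sub}, the $t$-dependent parts of $g_C$ and $g_R$ translate into\n\\begin{eqaed}\n b_C = \\frac{133}{640\\pi^2} \\approx 0.021 \\, , \\qquad\n b_R = - \\, \\frac{133}{960\\pi^2} \\, \\omega_- \\approx 0.077 \\, ,\n\\end{eqaed}\nwhile the $t$-independent parts contribute to the local couplings $g_C$ and $g_R$. 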
Note that, despite their behavior at low energies, one expects a resummation of such non-local form factors to yield a result that is both well-defined and subleading in the IR compared to the local terms~\\cite{Draper:2020bop}\\footnote{One exception could be a non-local form factor of the type $\\sim 1\/\\Box$, as discussed in~\\cite{Belgacem:2017cqo,Knorr:2018kog}.}. Furthermore, they do not contribute to the scalar potential that we shall discuss in Section~\\ref{sec:dsc_constraints} in the context of the dSC. Notwithstanding the importance of form factors in establishing a non-local behavior of gravity, we would like to understand which values of the three dimensionless combinations\n\\begin{eqaed}\\label{eq:IR_params}\n G\\Lambda \\, , \\quad g_C \\, , \\quad g_R\n\\end{eqaed}\nare allowed starting from any initial condition, \\emph{i.e.}, any perturbation of the asymptotically safe UV fixed point along UV-attractive directions. To this end, we have evaluated numerically these combinations in the IR, implementing the substitution of eq.~\\eqref{eq:t_planck_sub}. The following plots highlighting the swampland constraints, the IR limits of asymptotically safe RG trajectories, as well as the final intersection between the allowed regions, will pertain to the $(G\\Lambda,g_C,g_R)$ theory space.\n\nTo conclude this section, let us collect a few words of caution regarding the one-loop approximation. In general, in the context of gravity, one expects it to be only reliable in the IR, despite the appearance of a UV fixed point outside of the perturbative regime. The methods of the functional RG have been employed, both in earlier~\\cite{Benedetti:2009rx,Benedetti:2009gn} and recent~\\cite{Knorr:2021slg} efforts, to obtain non-perturbative flow equations in the quadratic truncation, but applying our method to extract the allowed region of parameter space in the IR entails highly involved and unstable numerical analysis. In order to circumvent these obstacles, and address the problem in a more quantitative fashion, a natural first step would entail performing novel FRG computations. The simplest relevant setting would include the most general quadratic truncation coupling the electromagnetic field to gravity, which, while daunting, appears feasible via the methods that have been very recently introduced in~\\cite{Knorr:2021slg} to study the purely gravitational sector. In light of these (and other related) issues, in this work we have focused on the one-loop approximation as a proof of principle, with the hope of uncovering some instructive general lessons from the results that we are now about to present.\n\nFinally, let us stress that, although quadratic truncations of the gravitational action are typically associated with a loss of physical unitarity~\\cite{Stelle:1977ry}, the Stelle ghost could be a truncation artifact~\\cite{Platania:2020knd}. Integrating out quantum fluctuations could lead to well-behaved, unitary scattering amplitudes~\\cite{Draper:2020bop,Draper:2020knh}, as explicit computations seem to suggest~\\cite{Bonanno:2021squ}. 
This issue was also discussed within the setting explored in this paper in~\\cite{Niedermaier:2009zz, Niedermaier:2010zz}.\n\n\\section{Results}\\label{sec:results}\n\nLet us now describe in detail our results on the allowed values of physical IR parameters that we have obtained from the calculations outlined in the preceding section, along with the swampland constraints that we have discussed in sect.~\\ref{sec:swampland}.\n\n\\subsection{Infrared limit of asymptotically safe RG trajectories in one-loop quadratic gravity}\\label{sec:IR_space}\n\nIn order to uncover the space of physical parameters appearing in the (local sector of the) effective action of eq.~\\eqref{eq:form_factor_eft}, we have sampled the space of allowed deformations $(\\delta \\omega, \\delta \\lambda, \\delta \\widetilde{G} , \\delta \\widetilde{\\Lambda})$ from the UV fixed point, and extracted the resulting IR values of the parameters in eq.~\\eqref{eq:IR_params} evaluating the flow of $\\widetilde{G}$ and $\\widetilde{\\Lambda}$ for a suitably large RG time $t \\approx 30$, exploiting the rapid convergence of the combination in eq.~\\eqref{eq:t_planck_sub}. The resulting values for $\\widetilde{G}\\widetilde{\\Lambda} = G\\Lambda$, or equivalently $\\Lambda\/M_\\text{Pl}^2$, span a wide range of values, of the order of $10^5$ for the region of initial deformations that we have explored. Moreover, the closed-form flow that one obtains setting the classically marginal couplings to their fixed-point values spans the whole real axis~\\cite{Codello:2008vh}. We are thus led to conclude that the allowed (IR) values of $G\\Lambda$ are unrestricted. On the other hand, the values of $g_C$ and $g_R$ appear to lie on the line\n\\begin{eqaed}\\label{eq:marginal_line}\n g_R \\approx - \\, 0.74655 + 3.64447 \\, g_C \\, ,\n\\end{eqaed}\nas depicted in fig.~\\ref{fig:IR2d-fit} and fig.~\\ref{fig:IR3d}. This result appears to be very robust upon increase of the sample size, and in particular for $10^6$ points the covariance matrix of the fit is of the order $\\mathcal{O}(10^{-8})$. Let us observe that, neglecting the intercept term, eq.~\\eqref{eq:marginal_line} follows from eq.~\\eqref{eq:log_running} as $t \\to t_\\text{IR}^-$, whereby $g_R \\sim - \\, 2\\omega_-\/3 \\, g_C$. Since we instead evaluate the IR couplings at a fixed, albeit sufficiently large, RG time, it is tempting to speculate that the intercept term in eq.~\\eqref{eq:marginal_line} is a correction arising from RG trajectories that approach the IR more slowly. Therefore, at least within the scope of our approximations, the presence of a UV fixed point appears to constrain the allowed physical coefficients in eq.~\\eqref{eq:IR_params} to a specific plane, and we shall now compare this result to the constraints arising from the swampland conjectures that we have discussed in sect.~\\ref{sec:swampland}. \n\n\\begin{figure}[t!]\n\\centering\\includegraphics[scale=0.5]{\"Images\/LineEqs\".pdf}\n\\caption{The line of equation $g_R = - \\, 0.74655 + 3.64447 \\, g_C$ fitting the IR values obtained from the flow of eq.~\\eqref{eq:betas} sampling UV initial conditions. 
The covariance matrix evaluates to $\\mathcal{O}(10^{-8})$ with~$10^6$ data points.}\\label{fig:IR2d-fit}\n\\end{figure}\n\n\\begin{figure}[t!]\n$\\hspace{-0.2cm}$\\includegraphics[scale=0.53]{\"Images\/Plot3D-1\".pdf}$\\hspace{-0.5cm}$\\includegraphics[scale=0.53]{\"Images\/Plot3D-2\".pdf}\\\\\n\\centering\\includegraphics[scale=0.55]{\"Images\/Plot3D-3\".pdf}\n\\caption{Plots depicting the IR endpoints of asymptotically safe RG trajectories. The points lie on the plane of equation $g_R = - \\, 0.74655 + 3.64447 \\, g_C$. The values of $G\\Lambda$ span a vast range, which, together with the closed-form flow of~\\cite{Codello:2008vh} depicted in fig.~\\ref{fig:flow-reldeformations}, leads us to infer that they are unrestricted.}\\label{fig:IR3d}\n\\end{figure}\n\n\\subsection{Constraints on quadratic gravity from WGC}\\label{sec:wgc_constraints}\n\nAs we have discussed in sect.~\\ref{sec:wgc_intro}, the WGC entails positivity bounds for the Wilson coefficients of the higher-derivative corrections to Einstein gravity. Since these bounds involve charged particles and black holes, higher-derivative couplings of a $U(1)$ gauge field ought to be included, although the resulting RG flow is extremely involved technically and has not been computed hitherto. On the other hand, the considerations of~\\cite{Charles:2017dbr, Charles:2019qqt, Cano:2021tfs}, based on electromagnetic duality, show that one can still make use of our results to constrain higher-derivative corrections in a \\emph{duality-invariant} scenario using the WGC. To this end, expressing the higher-curvature corrections in terms of the $c_i$ coefficients of eq.~\\eqref{eq:eft_corr}, the (family of) positivity bound(s) of~\\cite{Cheung:2018cwt} reads\n\\begin{eqaed}\\label{eq:positivity_bound}\n (1-\\xi)^2\\,c_0+20\\xi c_3-5\\xi(1-\\xi)(2c_3)>0 \\, ,\n\\end{eqaed}\nwhere $\\xi\\equiv\\sqrt{1-Q^2\/M^2}$ is the extremality parameter of Reissner-Nordstr\\\"{o}m black holes with mass $M$ and charge $Q$, with $0<\\xi<1\/2$ for black holes with positive specific heat, and\n\\begin{eqaed}\\label{eq:c0_coeff}\n c_0 \\equiv c_2 + 4 \\, c_3 \\, .\n\\end{eqaed}\nIn terms of the couplings in eq.~\\eqref{eq:quadratic_lagrangian}, the bound of eq.~\\eqref{eq:positivity_bound} takes the simpler form\n\\begin{eqaed}\\label{eq:simpler_wgc_bound}\n \\frac{1}{\\lambda}(10\\, \\theta \\, (\\xi +1) \\xi +6 \\xi ^2+3 \\xi +1)>0 \\, ,\n\\end{eqaed}\nwhich holds for~$\\lambda>0$ provided that~$\\xi>0$ (which is always satisfied by the extremality parameter) and that~$\\theta>0$. As we have discussed in sect.~\\ref{sec:one-loop}, the latter condition is fulfilled if $\\delta \\theta > 0$, since $\\delta \\lambda > 0$ in eq.~\\eqref{eq:marginal_flow}. Hence, the (duality-invariant) WGC constrains~$\\lambda > 0$, which is already ensured by the analysis of the preceding section and thus does not entail additional conditions. Let us observe that, although $\\theta$ encodes the coupling of the Gauss-Bonnet invariant, it contributes to the entropy of a black hole~\\cite{Myers:1988ze,Myers:1998gt,Clunan:2004tb} even in four dimensions, where it is topological. It does not contribute to the bound in the limit $\\xi \\ll 1$, since the resulting bound also describes the positivity of the extremality ratio~\\cite{Kats:2006xp,Charles:2019qqt}.\n\n\\subsection{Constraints on quadratic gravity from dS and TC conjectures}\\label{sec:dsc_constraints}\n\nLet us now discuss the constraints arising from the dSC and the TCC. 
As we have anticipated in sect.~\\ref{sec:dsc_tcc_intro}, we shall focus on the bounds that the dSC and the TCC entail for the scalar potential that arises from the higher-derivative corrections of eq.~\\eqref{eq:form_factor_eft}. In order to extract the potential proper, we shall concern ourselves with the local sector of the theory, neglecting the form factors and the Weyl term, which vanishes on cosmological backgrounds.\n\nFollowing the standard procedure to obtain inflaton potentials from $F(R)$ Lagrangians (see, \\emph{e.g.},~\\cite{inflationaris,Platania:2019qvo}), one begins from $R^2$ gravity with a cosmological constant,\n\\begin{eqaed}\\label{eq:our_f}\n F(R)=\\frac{1}{16\\pi G}\\left(R-2\\Lambda + \\frac{R^2}{6m^2}\\right) \\, ,\n\\end{eqaed}\nwhere the coupling $g_R$ is related to the inflaton mass according to\n\\begin{eqaed}\\label{eq:grm2}\n g_R = - \\, \\frac{M_\\text{Pl}^2}{(8\\pi)\\cdot 12m^2} \\, .\n\\end{eqaed}\nOne then arrives at the inflaton potential\n\\begin{eqaed}\\label{eq:scalar_pot}\n V(\\phi)=\\frac{M_\\text{Pl}^2}{8\\pi}\\,e^{-2\\sqrt{\\frac{2}{3}}\\frac{\\phi}{M_\\text{Pl}}} \\left(\\frac{3 m^2}{4}\\left(e^{\\sqrt{\\frac{2}{3}}\\frac{\\phi}{M_\\text{Pl}}}-1\\right)^2+\\Lambda \\right) \\, .\n\\end{eqaed}\nIn order to retain compatibility with the EFT, we shall consider field values in the range of eq.~\\eqref{eq:field_range} with $\\phi \\ll M_\\text{Pl}$, since this regime corresponds to small curvatures. Indeed, the procedure to obtain inflationary potentials from quadratic gravity yields $\\phi = \\sqrt{\\frac{3}{2}} \\, M_\\text{Pl} \\log\\left( 1 + \\mathcal{O}(M_\\text{Pl}^{-2})\\right)$~\\cite{inflationaris,Platania:2019qvo}. One can then study the dSC and TCC constraints of eqs.~\\eqref{eq:dsc_basic},~\\eqref{eq:ref_dsc} and~\\eqref{eq:tcc_basic} numerically, varying the $\\mathcal{O}(1)$ constants $c$ and $f$ and imposing that the bounds be satisfied for all $\\phi$ in the range allowed by eq.~\\eqref{eq:field_range}. The resulting regions are highlighted in fig.~\\ref{fig:TCC1} and in fig.~\\ref{fig:TCC2}, where each panel corresponds to a particular value of $f$ and consists of two plots which display the bounds in the $(m^2,\\Lambda)$ (left panels) and $(g_R,G\\Lambda)$ (right panels) planes. Due to the inverse relation in eq.~\\eqref{eq:grm2} between $m^2\/M_{\\text{Pl}}^2$ and $g_R$, the linear bounds in the~$(m^2, \\Lambda)$ plane translate into hyperbolas in the~$(g_R,G\\Lambda)$ plane. Note that whether the dimensionless minimum $\\phi_{\\mathrm{min}}\/M_{\\mathrm{Pl}}$, which exists for $\\Lambda\/m^2 >-3\/4$, falls inside the interval $(-f,+f)$ depends on the ratio $\\Lambda\/m^2$. In particular, the minimum falls outside the interval for $f<|\\phi_\\mathrm{min}\/M_\\mathrm{Pl}|$. Consequently, even if $V(\\phi_\\text{min})$ were to be positive for some $\\Lambda$ and $m^2$, this would not necessarily violate the dSC\/TCC bounds for fixed values of $f$ and $c$. As a result, the bounds displayed in fig.~\\ref{fig:TCC1} and in fig.~\\ref{fig:TCC2} are non-trivially affected by eq.~\\eqref{eq:field_range}, and by the specific values of $f$ and $c$. For instance, it is worth noticing that smaller values of $f$ entail smaller field intervals where the dSC\/TCC bounds are to be satisfied. Thus, the bounds are less stringent, and the allowed region larger. Similarly, the bound is more restrictive for higher values of $c$. In particular, fig.~\\ref{fig:TCC2} depicts the bounds derived from $c = \\sqrt{2\/3}$, the value pertaining to the TCC. 
While our analysis cannot probe the TCC in the large-excursion regime, where it differs substantially from the dSC, the additional considerations of~\\cite{Andriot:2020lea, Rudelius:2021oaz} point to a deeper role of this value of $c$, which could manifest itself, in the low-curvature regime at hand, in further investigations of swampland bounds and\/or dimensional reduction.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[scale=0.65]{\"Images\/TCCf01-m2\".pdf}$\\,\\,$\\includegraphics[scale=0.65]{\"Images\/TCCf01-gr\".pdf}\\\\\n\\centering\\includegraphics[scale=0.65]{\"Images\/TCCf05-m2\".pdf}$\\,\\,$\\includegraphics[scale=0.65]{\"Images\/TCCf05-gr\".pdf}\\\\\n\\centering\\includegraphics[scale=0.65]{\"Images\/TCCf1-m2\".pdf}$\\,\\,$\\includegraphics[scale=0.65]{\"Images\/TCCf1-gr\".pdf}\n\\caption{dSC\/TCC constraints for $f=0.1$ (top panel), $f=0.5$ (central panel) and $f=1$ (bottom panel), and various values of $c$. Due to the inverse relation in eq.~\\eqref{eq:grm2} between the inflaton mass in Planck units $m^2\/M_{\\text{Pl}}^2$ and the coupling $g_R$, the linear bounds in the~$(m^2, \\Lambda)$ plane translate into hyperbolas in the~$(g_R,G\\Lambda)$ plane. \\label{fig:TCC1}}\n\\end{figure}\n\\begin{figure}[t!]\n\\centering\\includegraphics[scale=0.65]{\"Images\/TCCc23-m2\".pdf}$\\,\\,$\\includegraphics[scale=0.65]{\"Images\/TCCc23-gr\".pdf}\n\\caption{TCC constraints, corresponding to $c=\\sqrt{2\/3}$, for various values of $f$. The bounds are not qualitatively different from the dSC bounds displayed in fig.~\\ref{fig:TCC1}.\\label{fig:TCC2}}\n\\end{figure}\n\n\\subsection{Intersections of allowed regions: compatibility of asymptotic safety with dS, TC and WG conjectures}\\label{sec:intersections}\n\nWe are now ready to collect the results that we have discussed in the preceding sections, and to visualize the intersection between the different allowed regions. Within (an extrapolation of the) one-loop approximation, asymptotic safety of the RG flow constrains the physical IR parameters of eq.~\\eqref{eq:IR_params} to lie on the plane of eq.~\\eqref{eq:marginal_line}. On the other hand, while the (duality-invariant) WGC does not entail any additional constraint, the dSC\/TCC conditions for the inflaton potential place constraints on the cosmological constant and the inflaton mass. One can plot the intersections for any values of the $\\mathcal{O}(1)$ constants $c$ and $f$, and fig.~\\ref{fig:intersections3D}, the main result of our work, refers to the representative choice $c=f=1$. Albeit difficult to visualize, there is in general a region of the asymptotically safe plane that appears compatible with the swampland constraints that we have investigated. One can straightforwardly verify that the same conclusion is reached for different values of $f$ and $c$. Our findings, based on the quadratic one-loop approximation, thus point at a non-trivial compatibility between the conditions for UV-completeness dictated by asymptotic safety and some of the most relevant swampland conjectures. Consequently, they also point at the possibility, partially supported by~\\cite{Basile:2020dzh,Basile:2021krk}, of a connection between the frameworks of asymptotic safety and string theory~\\cite{deAlwis:2019aud}. 
Within this picture, field-theoretical asymptotic safety would serve as a ``pivot'' for the RG flow from string theory to low-energy gravity, in the sense that below a certain scale the flow of string theory toward the IR closely approaches a field-theoretical trajectory controlled by a UV fixed point.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[scale=0.5]{\"Images\/Intersections-3D\".pdf}\\includegraphics[scale=0.5]{\"Images\/Intersections-3D-2\".pdf}\\\\\\includegraphics[scale=0.5]{\"Images\/Intersections-3D-3\".pdf}\n\\caption{Intersections between the regions allowed by asymptotic safety, the WGC and the dSC\/TCC for the representative values $c=f=1$. The WGC bound corresponds to the yellow region, while the dSC\/TCC bound corresponds to the blue region. The green region depicts the space of IR parameters spanned by asymptotically safe trajectories, which lies within the region allowed by the WGC.}\\label{fig:intersections3D}\n\\end{figure}\n\n\\section{Conclusions}\\label{sec:conclusions}\n\nIn this paper we have analyzed the intersection of consistency conditions for Wilson coefficients of gravitational EFTs, combining the constraints of asymptotic safety and swampland conjectures. In particular, in sect.~\\ref{sec:one-loop} we have employed a systematic method to extract the hypersurface of allowed IR parameters stemming from UV-complete RG trajectories in gravitational theories by randomly sampling its relevant deformations, and we have applied this technique to the flow equations stemming from one-loop quadratic gravity~\\cite{Codello:2008vh}. Although this approximation is expected to be reliable only in the IR, the resulting RG flow exhibits a UV non-Gaussian fixed point, consistently with more refined functional RG computations~\\cite{Percacci:2017fkn, Reuter:2019byg, Pawlowski:2020qer}. However, the dimension of its critical surface is larger than what is suggested by the functional RG~\\cite{Benedetti:2009rx,Benedetti:2009gn,Falls:2020qhj,Knorr:2021slg}. As we have discussed in detail in sect.~\\ref{sec:results}, our findings suggest that the requirement that the RG flow be asymptotically safe constrains the physical parameters to lie on a plane, which we have determined, within our one-loop framework, to a precision of $\\mathcal{O}(10^{-8})$. The values of the cosmological constant in Planck units seem not to be restricted by these considerations, while the classically marginal couplings lie on a line. \n\nIn sect.~\\ref{sec:wgc_constraints} and sect.~\\ref{sec:dsc_constraints} we have investigated the constraints on the Wilson coefficients arising from the weak gravity conjecture (WGC), the de Sitter conjecture (dSC) and the trans-Planckian censorship conjecture (TCC). In particular, the WGC does not entail additional bounds and is compatible with the UV-complete RG trajectories, while the dSC\/TCC bounds are more restrictive. To wit, the Starobinsky-like scalar potential stemming from the (local sector of the) effective action involves the ratio of the cosmological constant to the (squared) inflaton mass, and therefore the corresponding bounds place constraints on the dimensionless ratios $\\Lambda\/M_\\text{Pl}^2$ and $m\/M_\\text{Pl}$. These bounds depend on some dimensionless $\\mathcal{O}(1)$ constants, which we have varied to some extent in our analysis, and generally trace out a region in the plane allowed by asymptotic safety. 
While we expect that the qualitative results be unaffected by improving the truncation scheme, at least to some extent, it would be interesting to investigate the quantitative deviations in this respect.\n\nWhile in this work we have focused on the local sector of the effective action, our computation also yields the coefficients of non-local logarithmic form factors, akin to those arising from non-local heat kernel computations~\\cite{Barvinsky:1987uw, Barvinsky:1990up, Barvinsky:1990uq, Barvinsky:1993en, Avramidi:1990ap, Codello:2012kq}. It would be interesting to explore their consequences and their role within asymptotically safe gravity~\\cite{Knorr:2019atm, Draper:2020bop, Draper:2020knh} and their connection to massive matter fields~\\cite{Ohta:2020bsc}.\n\nAll in all, our results suggest that swampland constraints can be compatible with restrictions coming from UV completeness of the RG flow, but in a non-trivial fashion: the allowed parameter space is restricted to a non-trivial intersection. In retrospect, one could have expected this result on the grounds that some swampland criteria purport to be necessary conditions for UV completeness that cannot be derived from purely field-theoretical considerations, and thus they could constrain further the parameter space compatible with asymptotic safety. On the other hand, the one-loop approximation that we have studied already features the appearance of non-local form factors. In general, non-locality at the level of the effective action is a feature of any standard (local) QFT, and thus it is in principle unrelated to possible fundamental non-localities in the bare (fixed-point) action. Precisely how the notion of (non-)locality is realized in quantum gravity is an open and intriguing question, partly related to the problem of observables~\\cite{Donnelly:2015hta, Rejzner:2016yuy, Donnelly:2016rvo, Klitgaard:2017ebu, Rudelius:2021azq}. However, a number of semi-classical considerations~\\cite{Giddings:1992hh, Susskind:1993if, Almheiri:2012rt, Giddings:2012gc, Dvali:2014ila, Keltner:2015xda, Mann:2015luq} point to the breaking of the familiar concept of locality microscopically. Whether asymptotically safe gravity is realized by a bare action polynomial in derivatives (and thus ``local'' in some sense) is not established yet. Should fundamental non-locality turn out to emerge as a feature of asymptotic safety, this would strengthen its potential connections with the frameworks of non-local gravity~\\cite{Modesto:2011kw, Modesto:2017sdr, Buoninfante:2018mre, Buoninfante:2018xiw} and string theory~\\cite{Giddings:2006vu}.\n\nDue to the nature of our approximations, this work constitutes only a first step toward determining whether the asymptotic safety scenario is compatible with the peculiar behavior and UV\/IR mixing that gravity could exhibit already at the semi-classical level due to black holes (see~\\cite{Buoninfante:2021ijy} for a very recent discussion on their validity and limitations), or with general indications from string theory. In particular, a possible connection between asymptotically safe gravity and string theory has been conjectured in~\\cite{deAlwis:2019aud}, and it is tempting to speculate that it could explain our findings. 
Computations combining the functional renormalization group techniques~\\cite{Dupuis:2020fhh} with symmetries of string theory~\\cite{Veneziano:1991ek, Meissner:1991zj, Meissner:1996sa, Hohm:2015doa, Hohm:2019ccp, Hohm:2019jgu} have provided preliminary evidence in favour of this scenario~\\cite{Basile:2020dzh,Basile:2021krk}. This possibility extends to the more general notion of ``effective asymptotic safety''~\\cite{Held:2020kze}, and swampland bounds could further constrain which RG flows controlled by the ``effective'' fixed point are closely approached by the RG flow arising from the proper UV completion in the IR.\n\nMost prominently, the absence of continuous global symmetries\\footnote{The fate of global discrete symmetries has been investigated in the context of asymptotically safe gravity in~\\cite{Eichhorn:2020sbo, Ali:2020znq}.} is supported by a variety of arguments from black-hole physics, string theory and holography~\\cite{Misner:1957mt, Banks:1988yz, Kallosh:1995hi, Polchinski:2003bq, Banks:2010zn, Harlow:2018tng, McNamara:2019rup}, and it would be interesting to explore this foundational issue further in the direction that we have outlined in this paper.\n\n\\section*{Acknowledgements}\n\nThe authors would like to thank F. Saueressig for insightful discussions and B. Knorr for feedback on the manuscript. The authors also thank B. Holdom for spotting a typo.\n\nThe work of I.B. is supported by the Fonds de la Recherche Scientifique - FNRS under Grants No. F.4503.20 (\"HighSpinSymm\") and T.0022.19 (\"Fundamental issues in extended gravitational theories\"). A.P. acknowledges support by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities.\n\n\\bibliographystyle{JHEP}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{s:intro}\n\\protect\\setcounter{equation}{0}\n\n\n\\subsection{}\nThe quantum systems studied in this paper are obtained by coupling a\ncertain number (finite or infinite) of $N$-body systems. A\n(standard) $N$-body system consists of a fixed number $N$ of\nparticles which interact through $k$-body forces which preserve $N$\n(arbitrary $1\\leq k \\leq N$). The many-body type interactions\ninclude forces which allow the system to make transitions between\nstates with different numbers of particles. These transitions are\nrealized by creation-annihilation processes as in quantum field\ntheory.\n\nThe Hamiltonians we want to analyze are rather complex objects and\nstandard Hilbert space techniques seem to us inefficient in this\nsituation. Our approach is based on the observation that the\n$C^*$-algebra $\\mathscr{C}$ generated by a class of physically interesting\nHamiltonians often has a quite simple structure which allows one to\ndescribe its quotient with respect to the ideal of compact operators\nin rather explicit terms \\cite{GI1,GI2}. From this one can deduce\ncertain important spectral properties of the Hamiltonians. We refer\nto $\\mathscr{C}$ as the \\emph{Hamiltonian algebra} (or $C^*$-algebra of\nHamiltonians) of the system. \n\nThe main difficulty in this algebraic approach is to isolate the\ncorrect $C^*$-algebra. 
This is especially problematic in the present\nsituation, since it is not a priori clear how to define the\ncouplings between the various $N$-body systems except in very special\nsituations. It is rather remarkable that the $C^*$-algebra\ngenerated by a small class of elementary and natural Hamiltonians\nwill finally prove to be a fruitful choice. These elementary\nHamiltonians are analogs of the Pauli-Fierz Hamiltonians.\n\n\nThe purpose of the preliminary Section \\ref{s:euclid} is to present\nthis approach in the simplest but physically important case when the\nconfiguration spaces of the $N$-body systems are Euclidean\nspaces. We start with a fundamental example, the standard $N$-body\ncase. Then we describe the many-body formalism in the Euclidean case\nand we state our main results on the spectral analysis of the\ncorresponding Hamiltonians.\n\nThere is one substantial simplification in the Euclidean case: each\nsubspace has a canonical supplement, the subspace orthogonal to\nit. This plays a role in the way we present the framework in Section\n\\ref{s:euclid}. However, the main constructions and results do not\ndepend on the existence of a supplement, but to see this requires\nmore sophisticated tools from the theory of crossed product\n$C^*$-algebras and Hilbert $C^*$-modules which are not apparent in\nthis introductory part. In the rest of the paper we consider\nmany-body type couplings of systems whose configuration space is an\narbitrary abelian locally compact group. One of the simplest\nnontrivial physically interesting cases covered by this framework is\nthat when the configuration spaces of the $N$-body systems are\ndiscrete groups, e.g. discretizations $\\mathbb{Z}^D$ of $\\mathbb{R}^D$.\n\n\\subsection{}\nWe now summarize the content of the paper. Section \\ref{s:euclid}\nstarts with a short presentation of the standard \\mbox{$N$-body}\nformalism, the rest of the section being devoted to a rather\ndetailed description of our framework and main results in the case\nwhen the configuration spaces of the $N$-body subsystems are\nEuclidean spaces. These results are proven in a more general and\nnatural setting in the rest of the paper. In Section \\ref{s:grad}\nwe recall some facts concerning $C^*$-algebras graded by a\nsemilattice $\\mathcal{S}$ (we take here into account the results of Athina\nMageira's thesis \\cite{Ma}) and then we present some results on\n$\\mathcal{S}$-graded Hilbert $C^*$-modules. This notion, due to Georges\nSkandalis \\cite{Sk}, proved to be very natural and useful in our\ncontext: thanks to it many results can be expressed in a simple and\nsystematic way, thus giving a new and interesting perspective to the\nsubject (this is discussed in more detail in \\cite{DG4}). The heart\nof the paper is Section \\ref{s:grass}, where we define the many-body\nHamiltonian algebra $\\mathscr{C}$ in a general setting and prove that it is\nnaturally graded by a certain semilattice $\\mathcal{S}$. In Section\n\\ref{s:id} we give alternative descriptions of the components of\n$\\mathscr{C}$ which are important for the affiliation criteria presented in\nSection \\ref{s:af}, where we point out a large class of self-adjoint\noperators affiliated to the many-body algebra. The $\\mathcal{S}$-graded\nstructure of $\\mathscr{C}$ then gives an HVZ type description of the\nessential spectrum for all these operators. The main result of\nSection \\ref{s:mou} is the proof of the Mourre estimate for\nnonrelativistic many-body Hamiltonians. 
Finally, an Appendix is\ndevoted to the question of generation of some classes of\n$C^*$-algebras by \"elementary\" Hamiltonians.\n\n\n\\subsection{Notations} \n\\label{ss:inotations}\nWe recall some notations and terminology. If $\\mathcal{E},\\mathcal{F}$ are normed\nspaces then $L(\\mathcal{E},\\mathcal{F})$ is the space of bounded operators\n$\\mathcal{E}\\rightarrow\\mathcal{F}$ and $K(\\mathcal{E},\\mathcal{F})$ the subspace consisting of compact\noperators. If $\\mathcal{G}$ is a third normed space and $(e,f)\\mapsto ef$ is\na bilinear map $\\mathcal{E}\\times\\mathcal{F}\\to\\mathcal{G}$ then $\\mathcal{E}\\mathcal{F}$ is the linear\nsubspace of $\\mathcal{G}$ generated by the elements $ef$ with\n$e\\in\\mathcal{E},f\\in\\mathcal{F}$ and $\\mathcal{E}\\cdot\\mathcal{F}$ is its closure. If $\\mathcal{E}=\\mathcal{F}$\nthen we set $\\mathcal{E}^2=\\mathcal{E}\\cdot\\mathcal{E}$. Two unusual abbreviations are\nconvenient: by \\emph{lspan} and \\emph{clspan} we mean ``linear\nspan'' and ``closed linear span'' respectively. If $\\mathcal{A}_i$ are\nsubspaces of a normed space then $\\sum^\\mathrm{c}_i\\mathcal{A}_i$ is the clspan of\n$\\cup_i\\mathcal{A}_i$. If $X$ is a locally compact topological space then\n$\\cc_{\\mathrm{o}}(X)$ is the space of continuous complex functions which tend to\nzero at infinity and $\\cc_{\\mathrm{c}}(X)$ the subspace of functions with compact\nsupport.\n\nBy \\emph{ideal} in a $C^*$-algebra we mean a closed self-adjoint\nideal. A $*$-homomorphism between two \\mbox{$C^*$-algebras} will be\ncalled \\emph{morphism}. We write $\\mathscr{A}\\simeq\\mathscr{B}$ if the\n$C^*$-algebras $\\mathscr{A},\\mathscr{B}$ are isomorphic and $\\mathscr{A}\\cong\\mathscr{B}$ if they\nare canonically isomorphic (the isomorphism should be clear from the\ncontext).\n\n\nA self-adjoint operator $H$ on a Hilbert space $\\mathcal{H}$ is\n\\emph{affiliated} to a $C^*$-algebra $\\mathscr{A}$ of operators on $\\mathcal{H}$ if\n$(H+i)^{-1}\\in\\mathscr{A}$; then $\\varphi(H)\\in\\mathscr{A}$ for all\n$\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R})$. If $\\mathscr{A}$ is the closed linear span of the\nelements $\\varphi(H)A$ with $\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R})$ and $A\\in\\mathscr{A}$, we\nsay that $H$ is \\emph{strictly affiliated to $\\mathscr{A}$}. The\n$C^*$-algebra generated by a set $\\mathscr{E}$ of self-adjoint operators is\nthe smallest $C^*$-algebra such that each $H\\in\\mathscr{E}$ is affiliated to\nit.\n\nWe now recall the definition of $\\mathcal{S}$-graded $C^*$-algebras\nfollowing \\cite{Ma2}. Here $\\mathcal{S}$ is a \\emph{semilattice}, i.e. a set\nequipped with an order relation $\\leq$ such that the lower bound\n$\\sigma\\wedge\\tau$ of each couple of elements $\\sigma,\\tau$ exists.\nWe say that a subset $\\mathcal{T}$ of $\\mathcal{S}$ is a \\emph{sub-semilattice} of\n$\\mathcal{S}$ if $\\sigma,\\tau\\in\\mathcal{T}\\Rightarrow\\sigma\\wedge\\tau\\in\\mathcal{T}$. The\nset $\\mathscr{S}$ of all closed subgroups of a locally compact abelian group\nis a semilattice for the order relation given by set inclusion. 
The\nsemilattices which are of main interest for us are (inductive limits\nof) sub-semilattices of $\\mathscr{S}$.\n\nA $C^*$-algebra $\\mathscr{A}$ is called \\emph{$\\mathcal{S}$-graded} if a linearly\nindependent family $\\{\\mathscr{A}(\\sigma)\\}_{\\sigma\\in\\mathcal{S}}$ of\n$C^*$-subalgebras of $\\mathscr{A}$ has been given such that\n$\\sum^\\mathrm{c}_{\\sigma\\in\\mathcal{S}}\\mathscr{A}(\\sigma)=\\mathscr{A}$ and\n$\\mathscr{A}(\\sigma)\\mathscr{A}(\\tau)\\subset\\mathscr{A}(\\sigma\\wedge\\tau)$ for all\n$\\sigma,\\tau$. The algebras $\\mathscr{A}(\\sigma)$ are the \\emph{components\n of $\\mathscr{A}$}. It is useful to note that some of the algebras\n$\\mathscr{A}(\\sigma)$ could be zero. If $\\mathcal{T}$ is a sub-semilattice of\n$\\mathcal{S}$ and $\\mathscr{A}(\\sigma)=\\{0\\}$ for $\\sigma\\notin\\mathcal{T}$ we say that $\\mathscr{A}$\nis \\emph{supported} by $\\mathcal{T}$; then $\\mathscr{A}$ is in fact\n$\\mathcal{T}$-graded. Reciprocally, any $\\mathcal{T}$-graded $C^*$-algebra becomes\n$\\mathcal{S}$-graded if we set $\\mathscr{A}(\\sigma)=\\{0\\}$ for $\\sigma\\notin\\mathcal{T}$.\n\n\n\n\n\\subsection{Note}\nThe preprint \\cite{DG4} is a preliminary version of this paper. We\ndecided to change the title because the differences between the two\nversions are rather important: the preliminaries concerning the\ntheory of Hilbert $C^*$-modules and the role of the imprimitivity\nalgebra of a Hilbert $C^*$-module in the spectral analysis of\nmany-body systems are now reduced to a minimum; on the other hand,\nthe Euclidean case and the spectral theory of the corresponding\nHamiltonians are treated in more detail.\n\n\\vspace{3mm}\n\n\\begin{acknowledgement}{\\rm\nThe authors thank Georges Skandalis for very helpful suggestions and\nremarks.}\n\\end{acknowledgement}\n\n\n\\section{Euclidean framework: main results}\n\\label{s:euclid}\n\\protect\\setcounter{equation}{0}\n\n\\subsection{The Hamiltonian algebra of a standard $N$-body system}\n\\label{ss:inb}\n\nConsider a system of $N$ particles moving in the physical space\n$\\mathbb{R}^d$. In the nonrelativistic case the Hamiltonian is of the form\n\\begin{equation}\\label{eq:inonrel}\nH={\\textstyle\\sum_{j=1}^N P_j^2\/2m_j + \\sum_{j=1}^N V_j(x_j)+\n\\sum_{j<k} V_{jk}(x_j-x_k)}\n\\end{equation}\n\nFor $s>0$ we have $\\mathcal{G}^s_{\\ge X}=\\mathcal{G}^s\\cap\\mathcal{H}_{\\ge X}$.\n\n{\\bf(c)} The simplest type of interactions that we may consider are\ngiven by symmetric elements $I$ of the multiplier algebra of\n$\\mathscr{C}$. Then $H=K+I$ is strictly affiliated to $\\mathscr{C}$ and $\\mathscr{P}_{\\geq\n X}(H)=K_{\\geq X}+\\mathscr{P}_{\\geq X} (I)$ where $\\mathscr{P}_{\\geq X}$ is\nextended to the multiplier algebras as explained in\n\\cite[p. 18]{La}.\n\n{\\bf(d)} In order to cover singular interactions (form bounded but\nnot necessarily operator bounded by $K$) we assume that the\nfunctions $h_X$ are equivalent to regular weights. This is a quite\nweak assumption, cf. page \\pageref{p:regw}. For example, it\nsuffices that $c'|x|^{\\alpha}\\leq h_X(x)\\leq c''|x|^{\\alpha}$ for\nlarge $x$ where $c',c'',\\alpha>0$ are numbers depending on $X$.\nThen $U_a,V_a$ induce continuous operators in each of the spaces\n$\\mathcal{G}^s_X$, $\\mathcal{G}^s$, $\\mathcal{G}^s_{\\ge X}$. \n\n\n{\\bf(e)} The interaction will be of the form $I=\\sum_{Z\\in\\mathcal{S}} I(Z)$\nwhere the $I(Z)$ are continuous symmetric sesquilinear forms on\n$\\mathcal{G}^1$ such that $ I(Z) \\geq -\\mu_Z K -\\nu$ for some positive\nnumbers $\\mu_Z$ and $\\nu$ with $\\sum_Z\\mu_Z<1$. Then the form sum
Then the form sum\n$K+I$ defines a self-adjoint operator $H$ on $\\mathcal{H}$.\n\n{\\bf(f)} We identify $I(Z)$ with a symmetric operator\n$\\mathcal{G}^1\\to\\mathcal{G}^{-1}$ and we assume that $I(Z)$ is supported by the\nsubspace $\\mathcal{H}_{\\ge Z}$. In other terms, $I(Z)$ is the sesquilinear\nform on $\\mathcal{G}^1$ associated to an operator $I(Z):\\mathcal{G}^1_{\\ge\n Z}\\to\\mathcal{G}^{-1}_{\\ge Z}$. Moreover, we assume that this last\noperator satisfies\n\\begin{equation}\\label{eq:iaff}\nU_a I(Z)=I(Z) U_a \\text{ if } a\\in Z, \\ \nI(Z)(V_a-1)\\to 0 \\text{ if } a\\to 0 \\text{ in } Z^\\perp, \\ \nV^*_a I(Z) V_a\\to I(Z) \\text{ if } a\\to 0\n\\end{equation}\nwhere the limits hold in norm in $L(\\mathcal{G}^2_{\\ge Z},\\mathcal{G}^{-1}_{\\ge Z})$.\n\nNote that the first part of condition {\\bf(f)}, saying that $I(Z)$\nis supported by $\\mathcal{H}_{\\ge Z}$, is equivalent to an estimate of the\nform $\\pm I(Z)\\le\\mu K_{\\geq Z}+ \\nu\\Pi_{\\geq Z}$ for some positive\nnumbers $\\mu,\\nu$. See also Remark \\ref{re:iaff}.\n\n\\begin{theorem}\\label{th:iaff}\n The Hamiltonian $H$ is a self-adjoint operator strictly affiliated\n to $\\mathscr{C}$, we have \n $H_{\\geq X}=K_{\\geq X}+\\sum_{Z\\geq X} I(Z)$, \n and $\\mathrm{Sp_{ess}}(H)=\\textstyle{\\bigcup}_{X\\in\\mathcal{P}(\\mathcal{S})}\\mathrm{Sp}(H_{\\geq X})$.\n\\end{theorem}\n\n\\begin{remark}\\label{re:pauli}\n We required the $h_X$ to be bounded from below only for the\n simplicity of the statements. Moreover, a simple extension of the\n formalism allows one to treat particles with arbitrary\n spin. Indeed, if $E$ is a complex Hilbert then Theorem \\ref{th:CG}\n remains true if $\\mathscr{C}$ is replaced by $\\mathscr{C}^E=\\mathscr{C}\\otimes K(E)$ and\n the $\\mathscr{C}(Z)$ by $\\mathscr{C}(Z)\\otimes K(E)$. If $E$ is the spin space\n then it is finite dimensional and one obtains $\\mathscr{C}^E$ exactly as\n above by replacing the $\\mathcal{H}(X)$ by $\\mathcal{H}(X)\\otimes\n E=L^2(X;E)$. Then one may consider instead of scalar kinetic\n energy functions $h$ self-adjoint operator valued functions\n $h:X^*\\to L(E)$. For example, we may take as one particle kinetic\n energy operators the Pauli or Dirac Hamiltonians.\n\\end{remark}\n\n\n\\begin{remark}\\label{re:iaff}\nWe give here a second, more explicit version of condition\n{\\bf(f)}. Since \n$I(Z)$ is a continuous symmetric operator $\\mathcal{G}^1\\to\\mathcal{G}^{-1}$ we may\nrepresent it as a matrix $I(Z)=(I_{XY}(Z))_{X,Y\\in\\mathcal{S}}$ of\ncontinuous operators $I_{XY}(Z):\\mathcal{G}^1_Y\\to\\mathcal{G}^{-1}_X$ with\n$I_{XY}(Z)^*=I_{YX}(Z)$. We take $I_{XY}(Z)=0$ if $Z\\not\\subset\nX\\cap Y$ and if $Z\\subset X\\cap Y$ we assume\n$V^*_a I_{XY}(Z) V_a\\to I_{XY}(Z) \\text{ if } a\\to 0 \\text{ in }\nX+Y$ and \n\\begin{equation}\\label{eq:iiaff}\nU_a I_{XY}(Z)=I_{XY}(Z) U_a \\text{ if }a\\in Z, \\quad\nI_{XY}(Z)(V_a-1)\\to 0 \\text{ if } a\\to 0 \\text{ in } Y\/Z.\n\\end{equation}\nThe limits should hold in norm in $L(\\mathcal{G}^2_Y,\\mathcal{G}^{-1}_X)$.\n\\end{remark}\n\nThe operators $I_{XY}(Z)$satisfying \\eqref{eq:iiaff} are described\nin more detail in Proposition \\ref{pr:zxy}. 
In the next example we\nconsider the simplest situation which is useful in the\nnonrelativistic case.\n\nIf $E$ is an Euclidean space and $s$ is a real number let $\\mathcal{H}^s_E$\nbe the Sobolev space defined by the norm\n\\[\n\\|u\\|_{\\mathcal{H}^s} = \\|(1+\\Delta_E)^{s\/2}u\\|\n\\] \nwhere $\\Delta_E$ is the (positive) Laplacian associated to the\nEuclidean space $E$. The space $\\mathcal{H}^s_E$ is equipped with two\ncontinuous representations of $E$, a unitary one induced by\n$\\{U_x\\}_{x\\in E}$ and a non-unitary one induced by $\\{V_x\\}_{x\\in\n E}$. If $E=O:=\\{0\\}$ we define $\\mathcal{H}_E^s=\\mathbb{C}$.\n\n\\begin{definition}\\label{df:small}\n If $E,F$ are Euclidean spaces and $T:\\mathcal{H}^s_E\\to\\mathcal{H}^t_F$ is a\n linear map, we say that \\emph{$T$ is small at infinity} if there\n is $\\varepsilon>0$ such that when viewed as a map\n $\\mathcal{H}^{s+\\varepsilon}_E\\to\\mathcal{H}^{t}_F$ the operator $T$ is compact.\n\\end{definition}\nBy the closed graph theorem $T$ is continuous and the compactness\nproperty holds for all $\\varepsilon>0$. If $E=O$ or $F=O$ then we\nconsider that all the operators $T:\\mathcal{H}^s_E\\to\\mathcal{H}^t_F$ are small at\ninfinity.\n\n\\begin{example}\\label{ex:zxy}\n Due to assumption {\\bf(d)} the form domains of $K_X$ and $K_Y$ are\n Sobolev spaces, for example $\\mathcal{G}^1_X=\\mathcal{H}^s_X$ and\n $\\mathcal{G}^1_Y=\\mathcal{H}^t_Y$. Let $I_{XY}^Z:\\mathcal{H}^t_{Y\/Z}\\to\\mathcal{H}^{-s}_{X\/Z}$ be\n a linear small at infinity map. Then we may take\n $I_{XY}(Z)=1_Z\\otimes I_{XY}^Z$ relatively to the tensor\n factorizations \\eqref{eq:xyzint}.\n\\end{example}\n\n\nWe make now some comments to clarify the conditions {\\bf(a)} -\n{\\bf(f)}. Assume, more generally, that $\\mathscr{C}$ is a $C^*$-algebra of\noperators on a Hilbert space $\\mathcal{H}$ and that $K$ is a self-adjoint\noperator on $\\mathcal{H}$ affiliated to $\\mathscr{C}$. Let $I$ be a continuous\nsymmetric sesquilinear form on the domain of $|K|^{1\/2}$. Then for\nsmall real $\\nu$ the form sum $K+\\nu I$ is a self-adjoint operator\n$H_\\nu$. If $H_\\nu$ is affiliated to $\\mathscr{C}$ for small $\\nu$, and\nsince the derivative with respect to $\\nu$ at zero of\n$(H_\\nu+i)^{-1}$ exists in norm, we get\n$(K+i)^{-1}I(K+i)^{-1}\\in\\mathscr{C}$. This clearly implies\n$\\jap{K}^{-2}I\\jap{K}^{-2}\\in\\mathscr{C}$. Since\n$\\jap{K}^{-1\/2}I\\jap{K}^{-1\/2}$ is a bounded operator, the map\n$z\\mapsto\\jap{K}^{-z}I\\jap{K}^{-z}$ is holomorphic on $\\Re{z}>1\/2$\nhence we get\n\\begin{equation}\\label{eq:ent}\n\\jap{K}^{-\\alpha}I\\jap{K}^{-\\alpha}\\in\\mathscr{C} \\text{ if } \\alpha>1\/2. \n\\end{equation}\nReciprocally, if $K$ is strictly affiliated to $\\mathscr{C}$ (and $K$ as\ndefined at (b) has this property) then Theorem 2.8 from \\cite{DG3}\nsays that $\\jap{K}^{-1\/2}I\\jap{K}^{-\\alpha}\\in\\mathscr{C}$ suffices to\nensure that $H=K+I$ is strictly affiliated to $\\mathscr{C}$ under a quite\ngeneral condition needed to make this operator well defined (this is\nthe role of assumption (e) above). Condition (f) is formulated such\nas to imply $\\jap{K}^{-1\/2}I\\jap{K}^{-1}\\in\\mathscr{C}$. To simplify the\nstatement we added condition (d) which implies that the spaces\n$\\mathcal{G}^s$ are stable under the group $V_a$. Formally\n\\[\n(\\jap{K}^{-1\/2}I\\jap{K}^{-1})_{XY}=\n\\jap{K_X}^{-1\/2}I_{XY}\\jap{K_Y}^{-1}.\n\\]\nSo this should belong to $\\mathscr{C}_{XY}=\\sum_{Z\\subset X\\cap\n Y}\\mathscr{C}_{XY}(Z)$. 
Thus $I_{XY}$ must be a sum of terms $I_{XY}(Z)$\nwith\n\\[\n\\jap{K_X}^{-1\/2}I_{XY}(Z)\\jap{K_Y}^{-1}\\in\\mathscr{C}_{XY}(Z). \n\\]\nConditions (d) and (f) are formulated such as this to hold,\ncf. Remark \\ref{re:iaff} and Theorem \\ref{th:CZU}.\n\n\n\n\\subsection{Pauli-Fierz Hamiltonians}\n\\label{ss:affil} \n\nThe next result is an a priori argument which supports our\ninterpretation of $\\mathscr{C}$ as Hamiltonian algebra of a many-body\nsystem: we show that $\\mathscr{C}$ is the $C^*$-algebra generated by a\nsimple class of Hamiltonians which have a natural quantum field\ntheoretic interpretation. For simplicity we state this only for\nfinite $\\mathcal{S}$.\n\nFor each couple $X,Y\\in\\mathcal{S}$ such that $X\\supset Y$ we have\n$\\mathcal{H}_X=\\mathcal{H}_Y\\otimes\\mathcal{H}_{X\/Y}$. Then we define\n$\\Phi_{XY}\\subset\\mathscr{L}_{XY}$ as the closed linear subspace consisting\nof ``creation operators'' associated to states from $\\mathcal{H}_{X\/Y}$,\ni.e. operators $a^*(\\theta):\\mathcal{H}_Y\\to\\mathcal{H}_X$ with $\\theta\\in\\mathcal{H}_{X\/Y}$\nwhich act as $u\\mapsto u\\otimes\\theta$. We set\n$\\Phi_{YX}=\\Phi_{XY}^*\\subset\\mathscr{L}_{YX}$, this is the space of\n``annihilation operators'' $a(\\theta)=a^*(\\theta)^*$ defined by\n$\\mathcal{H}_{X\/Y}$. This defines $\\Phi_{XY}$ when $X,Y$ are comparable,\ni.e. $X\\supset Y$ or $X\\subset Y$, which we abbreviate by $X\\sim\nY$. If $X\\not\\sim Y$ then we take $\\Phi_{XY}=0$. Note that\n$\\Phi_{XX}=\\mathbb{C} 1_X$, where $1_X$ is the identity operator on\n$\\mathcal{H}_X$. We have\n\\begin{equation}\\label{eq:Phi}\n\\mathscr{T}_X\\cdot\\Phi_{XY}=\\Phi_{XY}\\cdot \\mathscr{T}_Y=\\mathscr{T}_{XY} \\quad\n\\text{if } X\\sim Y.\n\\end{equation}\n\nNow let $\\Phi=(\\Phi_{XY})_{X,Y\\in\\mathcal{S}}\\subset\\mathscr{L}$. This is\na closed self-adjoint linear space of bounded operators on $\\mathcal{H}$. A\nsymmetric element $\\phi\\in\\Phi$ will be called \\emph{field\n operator}. Giving such a $\\phi$ is equivalent to giving a family\n$\\theta=(\\theta_{XY})_{X\\supset Y}$ of elements\n$\\theta_{XY}\\in\\mathcal{H}_{X\/Y}$, the components of the operator\n$\\phi\\equiv\\phi(\\theta)$ being given by:\n$\\phi_{XY}=a^*(\\theta_{XY})$ if $X\\supset Y$,\n$\\phi_{XY}=a(\\theta_{YX})$ if $X\\subset Y$, and $\\phi_{XY}=0$ if\n$X\\not\\sim Y$. \n\nThe operators of the form $K+\\phi$, where $K$ is a standard kinetic\nenergy operator and $\\phi\\in\\Phi$ is a field operator, will be\ncalled \\emph{Pauli-Fierz Hamiltonians}.\n\n\\begin{theorem}\\label{th:motiv}\n If $\\mathcal{S}$ is finite then $\\mathscr{C}$ is the $C^*$-algebra\n generated by the Pauli-Fierz Hamiltonians.\n\\end{theorem}\n\nThus $\\mathscr{C}$ is generated by a class of Hamiltonians involving only\nelementary field type interactions. On the other hand, we have seen\nbefore that the class of Hamiltonians affiliated to $\\mathscr{C}$ is very\nlarge and covers \\mbox{$N$-body} systems interacting between\nthemselves with field type interactions. We emphasize that the\n$k$-body type interactions \\emph{inside} each of the $N$-body\nsubsystems are generated by pure field interactions.\n\n\n\\subsection{Nonrelativistic Hamiltonians and Mourre\n estimate}\n\\label{ss:mouint} \n\nWe prove the Mourre estimate only for nonrelativistic many-body systems. 
\nThere are serious difficulties when\nthe kinetic energy is not a quadratic form even in the much simpler\ncase of $N$-body Hamiltonians, but see \\cite{De1,Ger1,DG2} for some\npartial results which could be extended to our setting. Note that\nthe quantum field case is much easier from this point of view\nbecause of the special nature of the interactions\n\\cite{DeG2,Ger2,Geo}. \n\nLet $\\mathcal{S}$ be a finite semilattice of subspaces of $\\mathcal{X}$. Recall\nthat for $X\\in\\mathcal{S}$ we denote $\\mathcal{S}\/X$ the set of subspaces $Y\/X=Y\\cap\nX^\\perp$ with $Y\\in\\mathcal{S}_{\\ge X}$. This is a finite semilattice of\nsubspaces of $\\mathcal{X}$ which contains $O$. Hence the Hilbert space\n$\\mathcal{H}_{\\mathcal{S}\/X}$ and the $C^*$-algebra $\\mathscr{C}_{\\mathcal{S}\/X}$ are well defined\nby our general rules and (cf. \\S\\ref{ss:fact}):\n\\begin{equation}\\label{eq:f}\n\\mathcal{H}_{\\geq X}=\\mathcal{H}_X\\otimes\\mathcal{H}_{\\mathcal{S}\/X}\\quad \\text{and} \\quad\n\\mathscr{C}_{\\geq X}=\\mathscr{T}_X\\otimes\\mathscr{C}_{\\mathcal{S}\/X} .\n\\end{equation}\nDenote $\\Delta_X$ the (positive) Laplacian associated to the Euclidean\nspace $X$ with the convention $\\Delta_O=0$. We have\n$\\Delta_X=h_X(P)$ with $h_X(x)=\\|x\\|^2$. We set\n$\\Delta\\equiv\\Delta_\\mathcal{S}=\\oplus_X \\Delta_X$ and define $\\Delta_{\\geq X}$\nsimilarly. If $Y\\supset X$ then $\\Delta_Y=\\Delta_X\\otimes 1\n+1\\otimes\\Delta_{Y\/X}$ hence $\\Delta_{\\geq X}=\\Delta_X\\otimes 1\n+1\\otimes\\Delta_{\\mathcal{S}\/X}$. The domain and form domain of the\noperator $\\Delta_\\mathcal{S}$ are given by $\\mathcal{H}_\\mathcal{S}^2$ and $\\mathcal{H}_\\mathcal{S}^1$ where\n$\\mathcal{H}_\\mathcal{S}^s\\equiv\\mathcal{H}^s=\\oplus_X \\mathcal{H}^s(X)$ for any real\n$s$.\n\n \nWe define nonrelativistic many-body Hamiltonian by extending to the\npresent setting \\cite[Def. 9.1]{ABG}. We consider only strictly\naffiliated operators to avoid working with not densely defined\noperators. Note that the general case of affiliated operators covers\ninteresting physical situations (hard-core interactions).\n\n\n\\begin{definition}\\label{df:NR}\n\\emph{A nonrelativistic many-body Hamiltonian of type $\\mathcal{S}$} is a\nbounded from below self-adjoint operator $H=H_\\mathcal{S}$ on $\\mathcal{H}=\\mathcal{H}_\\mathcal{S}$\nwhich is strictly affiliated to $\\mathscr{C}=\\mathscr{C}_\\mathcal{S}$ and has the following\nproperty: for each $X\\in\\mathcal{S}$ there is a bounded from below\nself-adjoint operator $H_{\\mathcal{S}\/X}$ on $\\mathcal{H}_{\\geq X}$\nsuch that \n\\begin{equation}\\label{eq:NR}\n\\mathscr{P}_{\\geq X}(H)\\equiv H_{\\geq X}=\\Delta_X\\otimes1+ 1\\otimes H_{\\mathcal{S}\/X}\n\\end{equation}\nrelatively to the tensor factorization $\\mathcal{H}_{\\geq\n X}=\\mathcal{H}_X\\otimes\\mathcal{H}_{\\mathcal{S}\/X}$. \n\\end{definition}\n\nThen \\emph{each $H_{\\mathcal{S}\/X}$ is a nonrelativistic many-body\n Hamiltonian of type $\\mathcal{S}\/X$}. Indeed, the argument from \\cite[p.\\\n415]{ABG} extends in a straightforward way to the present situation.\n\n\\begin{remark}\\label{re:maxs}\n If $X$ is a maximal element in $\\mathcal{S}$ then $\\mathcal{S}\/X=\\{O\\}$ hence\n $\\mathcal{H}_{\\mathcal{S}\/X}=\\mathcal{H}_O=\\mathbb{C}$ and $H_O$ will necessarily be a real\n number. 
Then we get $\\mathcal{H}_{\\geq X}=\\mathcal{H}_X$, $\\mathscr{C}_{\\geq X}=\\mathscr{T}_X$,\n and $H_{\\ge X}=\\Delta_X +H_O$ on $\\mathcal{H}_X$.\n\\end{remark}\n\n\\begin{remark}\\label{re:mins}\n Since $\\mathcal{S}$ is a finite semilattice, it has a least element\n $\\min\\mathcal{S}$. If $\\mathcal{S}_o=\\mathcal{S}\/{\\min\\mathcal{S}}$, we get\n\\begin{equation}\\label{eq:min}\n\\mathcal{H}_{\\mathcal{S}}=\\mathcal{H}_{\\min\\mathcal{S}}\\otimes\\mathcal{H}_{\\mathcal{S}_o}, \\quad\n\\mathscr{C}_{\\mathcal{S}}=\\mathscr{T}_X\\otimes \\mathscr{C}_{\\mathcal{S}_o}, \\quad \nH_{\\mathcal{S}}=\\Delta_{\\min\\mathcal{S}}\\otimes 1 + 1\\otimes H_{\\mathcal{S}_o}.\n\\end{equation}\n\\end{remark}\n\nNow we give an HVZ type description of the essential spectrum of a\nnonrelativistic many-body Hamiltonian. For a more detailed\nstatement, see the proof.\n\n\\begin{theorem}\\label{th:nrhvz}\n Denote $\\tau_X=\\inf H_{\\mathcal{S}\/X}$ the bottom of the spectrum of\n $H_{\\mathcal{S}\/X}$. Then\n\\begin{equation}\\label{eq:hvzunif}\n \\mathrm{Sp_{ess}}(H)=[\\tau,\\infty[ \\hspace{2mm}\\text{with}\\hspace{2mm}\n \\tau=\\min \\{ \\tau_X \\mid X \\text{ is minimal in } \\mathcal{S}\\setminus\n \\{O\\} \\}. \n\\end{equation}\n\\end{theorem}\n\\proof\n From \\eqref{eq:NR} we get\n\\begin{equation}\\label{eq:speint}\n\\mathrm{Sp}(H_{\\geq X})=[0,\\infty[\\ +\\, \\mathrm{Sp}(H_{\\mathcal{S}\/X})=\n[\\tau_X,\\infty[ \\quad\n\\text{if } X\\neq O.\n\\end{equation}\nIn particular, if $O\\notin\\mathcal{S}$ then by taking $X=\\min\\mathcal{S}$\nin \\eqref{eq:min} we get \n\\begin{equation}\\label{eq:mino}\n\\mathrm{Sp}(H)=\\mathrm{Sp_{ess}}(H)=[\\inf H_{\\mathcal{S}_o},\\infty[ .\n\\end{equation}\nIf $O\\in\\mathcal{S}$ then Theorem \\ref{th:imp4} implies \n\\begin{equation}\\label{eq:hvznrint}\n\\mathrm{Sp_{ess}}(H)=[\\tau,\\infty[ \\hspace{2mm}\\text{with}\\hspace{2mm}\n\\tau=\\min_{X\\in\\mathcal{P}(\\mathcal{S})} \\tau_X.\n\\end{equation}\nThe relation \\eqref{eq:hvzunif} expresses \\eqref{eq:mino} and\n\\eqref{eq:hvznrint} in a unified way. \n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\nFor $X\\in\\mathcal{S}$ we consider the dilation group $W_\\tau=\\mathrm{e}^{i\\tau D}$\ndefined on $\\mathcal{H}_X$ by (set $n=\\dim X$):\n\\begin{equation}\\label{eq:fed}\n(W_\\tau u)(x)=\\mathrm{e}^{n\\tau\/4}u(\\mathrm{e}^{\\tau\/2}x), \n\\quad \n2iD=x\\cdot\\nabla_x+n\/2= \\nabla_x\\cdot x-n\/2. \n\\end{equation}\nLet $D_O=0$. We keep the same notation for the unitary operator\n$\\oplus_X W_\\tau$ on the direct sum $\\mathcal{H}=\\oplus_X\\mathcal{H}_X$ and we do\nnot indicate explicitly the dependence on $X$ or $\\mathcal{S}$ of $W_\\tau$\nand $D$ unless this is really needed. Note that $D$ has\nfactorization properties similar to that of the Laplacian,\ne.g. $D_{\\geq X}=D_X\\otimes 1 +1\\otimes D_{\\mathcal{S}\/X}$.\n\n\nWe refer to Subsection \\ref{ss:mest} for terminology related to the\nMourre estimate. We take $D$ as conjugate operator and we denote by\n$\\widehat\\rho_H(\\lambda)$ the best constant (which could be infinite)\nin the Mourre estimate at point $\\lambda$. The \\emph{threshold set}\n$\\tau(H)$ of $H$ with respect to $D$ is the set where\n$\\widehat\\rho_H(\\lambda)\\leq0$. If $A$ is a real set then we define\n$N_A:\\mathbb{R}\\to[-\\infty,\\infty[$ by $N_A(\\lambda)=\\sup\\{ x\\in A \\mid\nx\\leq\\lambda\\}$ with the convention $\\sup\\emptyset=-\\infty$. 
Denote\n$\\mathrm{ev}(T)$ the set of eigenvalues of an operator $T$.\n\n\\begin{theorem}\\label{th:thrintr}\n Let $H=H_\\mathcal{S}$ be a nonrelativistic many-body Hamiltonian of type\n $\\mathcal{S}$ and of class $C^1_\\mathrm{u}(D)$. Then $\\tau(H)$ is a closed\n \\emph{countable} real set given by\n\\begin{equation}\\label{eq:thrintr}\n\\tau(H)=\\textstyle{\\bigcup}_{X\\neq O}\\mathrm{ev}(H_{\\mathcal{S}\/X}).\n\\end{equation}\nThe eigenvalues of $H$ which do not belong to $\\tau(H)$ are of\nfinite multiplicity and may accumulate only to points from\n$\\tau(H)$. We have\n$\\widehat\\rho_H(\\lambda)=\\lambda-N_{\\tau(H)}(\\lambda)$ for all real\n$\\lambda$.\n\\end{theorem}\n\nWe emphasize that if $O\\notin\\mathcal{S}$ the threshold set\n\\begin{equation}\\label{eq:thrinto}\n\\tau(H)=\\textstyle{\\bigcup}_{X\\in\\mathcal{S}} \\mathrm{ev}(H_{\\mathcal{S}\/X})\n\\end{equation}\nis very rich although the spectrum of\n$H=\\Delta_{\\min\\mathcal{S}}\\otimes1+1\\otimes H_{\\mathcal{S}_o}$ is purely absolutely\ncontinuous.\n\n\\begin{remark}\\label{re:NM}\nWe thus see that there is no difference between nonrelativistic\n$N$-body and many-body Hamiltonians from the point of view of their\nchannel structure. The formulas which give the essential spectrum\nand the threshold set relevant in the Mourre estimate are identical,\ncf. \\eqref{eq:hvznrint} and \\eqref{eq:thrintr}. This is due to\nthe fact that both Hamiltonian algebras are graded by the same\nsemilattice $\\mathcal{S}$.\n\\end{remark}\n\n\n\\subsection{Examples of nonrelativistic many-body Hamiltonians}\n\\label{ss:examples}\n\nLet $H=K+I$ with kinetic energy $K=\\Delta$. Hence\n$\\mathcal{G}^1=\\mathcal{H}^1=\\oplus_X \\mathcal{H}^1_X$ and $\\mathcal{G}^{-1}=\\mathcal{H}^{-1}=\\oplus_X\n\\mathcal{H}^{-1}_X$ with the notations of \\S\\ref{ss:ex}. The interaction\nterm is an operator $I:\\mathcal{H}^1\\to\\mathcal{H}^{-1}$ given by a sum\n$I=\\sum_{Z\\in\\mathcal{S}} I(Z)$ where each $I(Z)$ is defined with the help\nof the tensor factorization $\\mathcal{H}_{\\ge Z}=\\mathcal{H}_Z\\otimes\\mathcal{H}_{\\mathcal{S}\/Z}$.\n\n\\begin{proposition}\\label{pr:exnr}\n Let $I^Z:\\mathcal{H}^1_{\\mathcal{S}\/Z}\\to\\mathcal{H}^{-1}_{\\mathcal{S}\/Z}$ be symmetric and small\n at infinity and let $I(Z):=1_Z\\otimes I^Z$ which is naturally\n defined as a symmetric operator $\\mathcal{H}^1\\to\\mathcal{H}^{-1}$. Assume that\n $I(Z)\\geq -\\mu_Z\\Delta-\\nu$ for some numbers $\\mu_Z,\\nu\\ge0$\n with $\\sum\\mu_Z<1$. Then $H=\\Delta+I$ defined in the quadratic\n form sense is a nonrelativistic many-body Hamiltonian of type\n $\\mathcal{S}$ and we have $H_{\\geq X}=\\Delta_{\\geq X}+\\sum_{Z\\supset X}\n I(Z)$.\n\\end{proposition}\n\n\nThe first condition on $I^Z$ can be stated in terms of its\ncoefficients as follows: if $Z\\subset X\\cap Y$ then the operator\n$I_{XY}^Z:\\mathcal{H}^1_{Y\/Z}\\to\\mathcal{H}^{-1}_{X\/Z}$ is small at infinity and\nsuch that $(I_{XY}^Z)^*=I_{YX}^Z$. On the other hand, note that if\nthe operators $I^Z:\\mathcal{H}^1_{\\mathcal{S}\/Z}\\to\\mathcal{H}^{-1}_{\\mathcal{S}\/Z}$ are compact\nthen they are small at infinity and for any $\\mu>0$ there is a\nnumber $\\nu$ such that $\\pm I(Z)\\leq \\mu\\Delta_\\mathcal{S}+\\nu$ for all\n$Z$. The more general smallness at infinity condition covers second\norder perturbations of $\\Delta_\\mathcal{S}$.\n\n\n\nIn the next proposition we give examples of nonrelativistic\noperators of class $C^1_\\mathrm{u}(D)$. 
The operator $H$ is constructed as\nin Proposition \\ref{pr:exnr} but we consider only interactions which\nare relatively bounded in \\emph{operator} sense with respect to the\nkinetic energy such as to force the domain of $H$ to be equal to the\ndomain of $\\Delta$, hence to $\\mathcal{H}^2=\\oplus_X\\mathcal{H}^2_X$. Since\nthis space is stable under the action of the operators $W_\\tau$,\nwe shall get a simple condition for $H$ to be of class\n$C^1_\\mathrm{u}(D)$. \n\n\n\\begin{proposition}\\label{pr:nrm}\n For each $Z\\in\\mathcal{S}$ assume that $I^Z:\\mathcal{H}^2_{\\mathcal{S}\/Z}\\to \\mathcal{H}_{\\mathcal{S}\/Z}$\n is compact and symmetric as operator on $\\mathcal{H}_{\\mathcal{S}\/Z}$ and that\n $[D,I^Z]: \\mathcal{H}^2_{\\mathcal{S}\/Z}\\to\\mathcal{H}^{-2}_{\\mathcal{S}\/Z}$ is compact. Then the\n conditions of Proposition \\ref{pr:exnr} are fulfilled and each\n operator $I(Z):\\mathcal{H}^2\\to\\mathcal{H}$ is $\\Delta$-bounded with relative\n bound zero. The operator $H$ is self-adjoint on $\\mathcal{H}^2$ and of\n class $C^1_\\mathrm{u}(D)$.\n\\end{proposition}\n\nSo for the coefficients $I^Z_{XY}$ we ask $I^Z_{XY}=0$ if\n$Z\\not\\subset X\\cap Y$ and if $Z\\subset X\\cap Y$ then\n$(I^{Z}_{XY})^*\\supset I^Z_{YX}$ and\n\\begin{equation}\\label{eq:3d}\nI^Z_{XY}:\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}_{X\/Z} \\text{ and } \n[D,I^Z_{XY}]: \\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z} \\text{ are compact\n operators.} \n\\end{equation}\nThe expression $[D,I^Z_{XY}]=D_{X\/Z}I^Z_{XY}-I^Z_{XY}D_{Y\/Z}$ is not\nreally a commutator. Indeed, if we denote $E=(X\\cap Y)\/Z$, so\n$Y\/Z=E\\oplus(Y\/X)$ and $X\/Z=E\\oplus(X\/Y)$, then $\\mathcal{H}_{X\/Z}\n=\\mathcal{H}_E\\otimes\\mathcal{H}_{X\/Y}$ and $\\mathcal{H}_{Y\/Z} =\\mathcal{H}_E\\otimes\\mathcal{H}_{Y\/X}$.\nHence the relation $D_{X\/Z}=D_E\\otimes 1 + 1\\otimes D_{X\/Y}$ and a\nsimilar one for $Y\/Z$ give\n\\begin{equation*}\n[D,I^Z_{XY}] =[D_E,I^Z_{XY}] +D_{X\/Y}I^Z_{XY} -I^Z_{XY}D_{Y\/X}.\n\\end{equation*}\nThe first term above is a commutator and so is of a different nature\nthan the next two. Since $I^Z_{XY}D_{Y\/X}$ is a restriction of\n$(D_{Y\/X}I^Z_{YX})^*$ it is clear that the second part of condition\n\\eqref{eq:3d} follows from:\n\\begin{equation}\\label{eq:2d}\n[D_E,I^Z_{XY}] \\text{ and } D_{X\/Y}I^Z_{XY} \\text{ are compact\n operators } \\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z} \\text{ for all } X,Y,Z.\n\\end{equation}\nWe consider some simple examples of operators $I_{XY}^Z$ to clarify\nthe difference with respect to the $N$-body situation (see\n\\S\\ref{ss:scs} for details and generalizations). If $E,F$ are\nEuclidean spaces we denote\n\\begin{equation}\\label{eq:ikef}\n\\mathscr{K}^2_{FE} =K(\\mathcal{H}^2_E,\\mathcal{H}_F) \\quad \\text{and} \\quad\n\\mathscr{K}^{2}_E=\\mathscr{K}^2_{E,E}=K(\\mathcal{H}^2_E,\\mathcal{H}_E). \n\\end{equation}\nDenote $X \\boxplus Y = X\/Y\\oplus Y\/X$ and embed $L^2(X \\boxplus\nY)\\subset \\mathscr{K}_{X\/Y,Y\/X}$ by identifying a Hilbert-Schmidt operator\nwith its kernel. Then\n\\begin{equation*}\nL^2(X \\boxplus Y;\\mathscr{K}^2_E) \\subset \\mathscr{K}^2_E\\otimes \\mathscr{K}_{X\/Y,Y\/X}\n\\subset \\mathscr{K}^2_{X\/Z,Y\/Z}.\n\\end{equation*}\nThus $I_{XY}^Z \\in L^2(X \\boxplus Y;\\mathscr{K}^2_E)$ is a simple example of\noperator satisfying the first part of condition \\eqref{eq:3d}. 
Such\nan $I_{XY}^Z$ acts as follows: if $u\\in\\mathcal{H}^2_{Y\/Z}\\subset\nL^2(Y\/X;\\mathcal{H}^2_E)$ then\n\\[\nI_{XY}^Z u\\in \\mathcal{H}_{X\/Z} = L^2(X\/Y;\\mathcal{H}_E) \\quad\\text{is given by}\\quad\n(I_{XY}^Z u)(x')={\\textstyle\\int_{Y\/X}} I_{XY}^Z(x',y')u(y') \\text{d} y'.\n\\]\nNow we consider \\eqref{eq:2d}.\nSince $(x',y')\\mapsto[D_E,I^Z_{XY}(x',y')]$ is the kernel of the\noperator $[D_E,I^Z_{XY}]$, if\n\\[\n[D_E,I^Z_{XY}]\\in L^2(X \\boxplus Y;K(\\mathcal{H}^2_E,\\mathcal{H}^{-2}_E))\n\\]\nthen $[D_E,I^Z_{XY}]$ is a compact operator\n$\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z}$. For the term $D_{X\/Y}I^Z_{XY}$ it\nsuffices to require the compactness of the operator\n\\[\nD_{X\/Y}I^Z_{XY} = 1_E\\otimes D_{X\/Y} \\cdot I^Z_{XY}: \\mathcal{H}^2_{Y\/Z}\\to\n\\mathcal{H}_E\\otimes\\mathcal{H}^{-2}_{X\/Y}.\n\\]\nFrom \\eqref{eq:fed} we see that this is a condition on the kernel\n$x'\\cdot\\nabla_{x'}I_{XY}^Z(x',y')$. For example, it suffices that\nthe operator $\\jap{Q_{X\/Y}}I^Z_{XY}:\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}_{X\/Z}$ be\ncompact, which is a short range assumption. In summary:\n\n\\begin{example}\\label{ex:hschmidt}\n For each $Z\\subset X\\cap Y$ let $I^Z_{XY} \\in L^2(X \\boxplus\n Y;\\mathscr{K}^2_E)$ be such that the adjoint of $I^Z_{XY}(x',y')$ is an\n extension of $I^Z_{YX}(y',x')$. Assume that the kernel\n $[D_E,I^Z_{XY}(x',y')]$ belongs to $L^2(X \\boxplus\n Y;K(\\mathcal{H}^2_E,\\mathcal{H}^{-2}_E))$ and that the kernel\n $x'\\cdot\\nabla_{x'}I_{XY}^Z(x',y')$ defines a compact operator\n $\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z}$. Then \\eqref{eq:3d} is fulfilled.\n\\end{example}\n\n\\begin{example}\\label{ex:anc}\n Here we consider the particular case $Y\\subset X$ to see the\n structure of a generalized creation operator which appears in this\n context. For each $Z\\subset Y$ let $I^Z_{XY}\\in\n \\mathscr{K}^2_{Y\/Z}\\otimes\\mathcal{H}_{X\/Y}$, where the tensor product is a kind\n of weak version of $L^2(X\/Y; \\mathscr{K}^2_{Y\/Z})$ discussed in\n \\S\\ref{ss:ha}. Furthermore, assume that $[D_{Y\/Z},I^Z_{XY}]\\in\n K(\\mathcal{H}^2(Y\/Z),\\mathcal{H}^{-2}_{Y\/Z})\\otimes\\mathcal{H}_{X\/Y}$ and\n $D_{X\/Y}I^Z_{XY}\\in \\mathscr{K}^2_{Y\/Z}\\otimes\\mathcal{H}^{-2}_{X\/Y}$. Then\n \\eqref{eq:3d} holds.\n\\end{example}\n\n\n\\subsection{Boundary values of the resolvent}\n\\label{ss:bvr}\n\nTheorem \\ref{th:thrintr} has important consequences in the spectral\ntheory of the operator $H$: we shall use it together with\n\\cite[Theorem 7.4.1]{ABG} to show that $H$ has no singular\ncontinuous spectrum and to prove the existence of the boundary\nvalues of its resolvent in the class of weighted $L^2$ spaces that\nwe define now. Let $\\mathcal{H}_{s,p}=\\oplus_X L^2_{s,p}(X)$ where the\n$L^2_{s,p}(X)$ are the Besov spaces associated to the position\nobservable on $X$ (these are obtained from the usual Besov spaces\nassociated to $L^2(X)$ by a Fourier transformation). Note that\n$\\mathcal{H}_{s}=\\mathcal{H}_{s,2}$ is the Fourier transform of the Sobolev space\n$\\mathcal{H}^s$. Let $\\mathbb{C}_+$ be the open upper half plane and\n$\\mathbb{C}^H_+=\\mathbb{C}_+\\cup(\\mathbb{R}\\setminus\\tau(H))$. If we replace the upper\nhalf plane by the lower one we similarly get the sets $\\mathbb{C}_-$ and\n$\\mathbb{C}^H_-$. 
We define two holomorphic maps $R_\\pm:\\mathbb{C}_\\pm\\to\nL(\\mathcal{H})$ by $R_\\pm(z)=(H-z)^{-1}$ and note that we have continuous\nembeddings\n\\[\nL(\\mathcal{H})\\subset L(\\mathcal{H}_{1\/2,1},\\mathcal{H}_{-1\/2,\\infty}) \\subset\nL(\\mathcal{H}_{s},\\mathcal{H}_{-s}) \\quad\\text{if } s>1\/2\n\\]\nso we may consider $R_\\pm$ as maps with values in\n$L(\\mathcal{H}_{1\/2,1},\\mathcal{H}_{-1\/2,\\infty})$.\n\n\\begin{theorem}\\label{th:c11}\nIf $H$ is of class $C^{1,1}(D)$ then its singular continuous\nspectrum is empty and the holomorphic maps $R_\\pm:\\mathbb{C}_\\pm\\to\nL(\\mathcal{H}_{1\/2,1},\\mathcal{H}_{-1\/2,\\infty})$ extend to weak$^*$ continuous\nfunctions $\\bar{R}_\\pm$ on $\\mathbb{C}^H_\\pm$. The maps\n$\\bar{R}_\\pm:\\mathbb{C}^H_\\pm\\to L(\\mathcal{H}_{s},\\mathcal{H}_{-s})$ are norm continuous\nif $s>1\/2$.\n\\end{theorem}\n\nThis result is optimal both with regard to the regularity of the\nHamiltonian relatively to the conjugate operator $D$ and to the\nBesov spaces in which we establish the existence of the boundary\nvalues of the resolvent. The class $C^{1,1}(D)$ will be discussed\nand its optimality will be made precise in \\S\\ref{ss:scsc} but we\ngive some examples below.\n\nWe state first the simplest sufficient condition: \\emph{assume that\n $H$ is as in Proposition \\ref{pr:exnr} and that its domain is\n equal to $\\mathcal{H}^2$;if $\\,[D,[D,I^Z]]\\in\n L(\\mathcal{H}^2_{\\mathcal{S}\/Z},\\mathcal{H}^{-2}_{\\mathcal{S}\/Z})$ for all $Z$ then $H$ is of\n class $C^{1,1}(D)$}. This follows from Theorem 6.3.4 in\n\\cite{ABG}. The condition on $\\,[D,[D,I^Z]]$ can easily be written\nin terms of the coefficients $I^Z_{XY}$ by arguments similar to\nthose of \\S\\ref{ss:examples}. Refinements allow the addition of\nlong range and short range interactions as in \\cite[\\S 9.4.2]{ABG}.\n\nLet $\\xi:\\mathbb{R}\\to\\mathbb{R}$ be of class $C^\\infty$ and such that\n$\\xi(\\lambda)=0$ if $\\lambda\\le 1$ and $\\xi(\\lambda)=1$ if\n$\\lambda\\ge 2$. For each Euclidean space $X$ and real $r\\ge 1$ we\ndenote $\\xi^r_X$ the operator of multiplication by the function\n$x\\mapsto\\xi(|x|\/r)$ on any Sobolev space over $X$. Then we define\n$\\xi^r_\\mathcal{S}=\\oplus_{X\\in\\mathcal{S}}\\xi^r_X$ considered as operator on\n$\\mathcal{H}^s_\\mathcal{S}$ for any real $s$.\n\n\\begin{definition}\\label{df:slr}\n Let $T:\\mathcal{H}^2_\\mathcal{S}\\to\\mathcal{H}_\\mathcal{S}$ be a symmetric operator. We say that\n $T$ is a \\emph{long range interaction} if $[D,T]\\in\n L(\\mathcal{H}^2_\\mathcal{S},\\mathcal{H}^{-1}_\\mathcal{S})$ and $\\int_1^\\infty \\|\\xi^r_\\mathcal{S}\n [D,T]\\|_{\\mathcal{H}^2_\\mathcal{S}\\to\\mathcal{H}^{-1}_\\mathcal{S}} \\text{d} r\/r <\\infty$. We say that\n $T$ is a \\emph{short range interaction} if $\\int_1^\\infty\n \\|\\xi^r_\\mathcal{S} [D,T]\\|_{\\mathcal{H}^2_\\mathcal{S}\\to\\mathcal{H}_\\mathcal{S}} \\text{d} r <\\infty$.\n\\end{definition}\n\n\n\\begin{theorem}\\label{th:BVR}\n Assume that $H=\\Delta_\\mathcal{S} + \\sum_{Z\\in\\mathcal{S}} 1_Z\\otimes I^Z$ where\n each $I^z:\\mathcal{H}^2_{\\mathcal{S}\/Z}\\to\\mathcal{H}_{\\mathcal{S}\/Z}$ is symmetric, compact, and\n is the sum of a long range and a short range interaction. Then $H$\n is a nonrelativistic many-body Hamiltonian of class $C^{1,1}(D)$,\n hence the conclusions of Theorem \\ref{th:c11} are true.\n\\end{theorem}\n\nScattering channels may be defined in a natural way in the context\nof the theorem. 
If the long range interactions are absent we expect\nthat asymptotic completeness holds.\n\n\n\n\n\n\\section{Graded Hilbert $C^*$-modules}\n\\label{s:grad}\n\\protect\\setcounter{equation}{0}\n\n\\subsection{Graded $\\boldsymbol{C^*}$-algebras}\n\\label{ss:grca}\n\nThe natural framework for the systems considered in this paper is\nthat of $C^*$-algebras graded by semilattices. We refer to\n\\cite{Ma2,Ma3} for a detailed study of this class of algebras.\n\nLet $\\mathcal{S}$ be a semilattice and $\\mathscr{A}$ a graded $C^*$-algebra.\nFollowing \\cite{Ma2} we say that $\\mathscr{B}\\subset\\mathscr{A}$ is a \\emph{graded\n $C^*$-subalgebra} if $\\mathscr{B}$ is a $C^*$-subalgebra of $\\mathscr{A}$ equal to\n$\\sum^\\mathrm{c}_\\sigma\\mathscr{B}\\cap\\mathscr{A}(\\sigma)$. Then $\\mathscr{B}$ has a natural\ngraded $C^*$-algebra structure: $\\mathscr{B}(\\sigma)=\\mathscr{B}\\cap\\mathscr{A}(\\sigma)$.\nIf $\\mathscr{B}$ is also an ideal of $\\mathscr{A}$ then $\\mathscr{B}$ is a \\emph{graded\n ideal}.\n\nA subset $\\mathcal{T}$ of a semilattice $\\mathcal{S}$ is a \\emph{sub-semilattice of\n $\\mathcal{S}$} if $\\sigma,\\tau\\in\\mathcal{T} \\Rightarrow\n\\sigma\\wedge\\tau\\in\\mathcal{T}$. We say that $\\mathcal{T}$ is an \\emph{ideal of\n $\\mathcal{S}$} if $\\sigma\\leq\\tau\\in\\mathcal{T} \\Rightarrow \\sigma\\in\\mathcal{T}$. If\n$\\mathscr{A}$ is an $\\mathcal{S}$-graded $C^*$-algebra and $\\mathcal{T}\\subset\\mathcal{S}$ let\n$\\mathscr{A}(\\mathcal{T})=\\sum^\\mathrm{c}_{\\sigma\\in\\mathcal{T}}\\mathscr{A}(\\sigma)$ (if $\\mathcal{T}$ is finite\nthe sum is already closed). If $\\mathcal{T}$ is a sub-semilattice or an\nideal then clearly $\\mathscr{A}(\\mathcal{T})$ is a $C^*$-subalgebra or an ideal of\n$\\mathscr{A}$ respectively.\n\nWe say that $\\mathscr{A}$ is \\emph{supported by a sub-semilattice $\\mathcal{T}$} if\n$\\mathscr{A}=\\mathscr{A}(\\mathcal{T})$, i.e. $\\mathscr{A}(\\sigma)=\\{0\\}$ for $\\sigma\\notin\\mathcal{T}$. Then\n$\\mathscr{A}$ is also $\\mathcal{T}$-graded. The smallest sub-semilattice with this\nproperty will be called \\emph{support of $\\mathscr{A}$}. If $\\mathcal{T}$ is a\nsub-semilattice of $\\mathcal{S}$ and $\\mathscr{A}$ is a $\\mathcal{T}$-graded algebra then\n$\\mathscr{A}$ is $\\mathcal{S}$-graded: set $\\mathscr{A}(\\sigma)=\\{0\\}$ for\n$\\sigma\\in\\mathcal{S}\\setminus\\mathcal{T}$.\n\nThe next result is obvious if $\\mathcal{S}$ is finite. For the general case,\nsee the proof of Proposition 3.3 in \\cite{DG3}.\n\n\n\\begin{proposition}\\label{pr:gsalg}\n Let $\\mathcal{T}$ be a sub-semilattice of $\\mathcal{S}$ such that\n $\\mathcal{T}'=\\mathcal{S}\\setminus\\mathcal{T}$ is an ideal. Then $\\mathscr{A}(\\mathcal{T})$ is a\n $C^*$-subalgebra of $\\mathscr{A}$, $\\mathscr{A}(\\mathcal{T}')$ is an ideal of $\\mathscr{A}$, and\n $\\mathscr{A}=\\mathscr{A}(\\mathcal{T})+\\mathscr{A}(\\mathcal{T}')$ with $\\mathscr{A}(\\mathcal{T})\\cap\\mathscr{A}(\\mathcal{T}')=\\{0\\}$.\n In particular, the natural linear projection\n $\\mathscr{P}(\\mathcal{T}):\\mathscr{A}\\to\\mathscr{A}(\\mathcal{T})$ is a morphism.\n\\end{proposition}\n\nIf $\\mathcal{T}$ is a sub-semilattice then $\\mathcal{T}'$ is an ideal if and only if\n$\\mathcal{T}$ is a filter\n(i.e. $\\sigma\\ge\\tau\\in\\mathcal{T}\\Rightarrow\\sigma\\in\\mathcal{T}$). Thus if $\\mathcal{S}$\nis finite then the only sub-semilattices which have this property\nare the $\\mathcal{S}_{\\ge\\sigma}$ introduced below.\n\nThe simplest sub-semilattices are the chains (totally ordered\nsubsets). 
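\n\nFor instance (a purely illustrative example, not needed later): if $\\mathcal{S}=\\{o,\\sigma,\\tau\\}$ with $\\sigma\\wedge\\tau=o$ and $\\sigma,\\tau$ not comparable, then $\\mathcal{T}=\\{\\sigma\\}$ is a sub-semilattice which is a filter but not an ideal, $\\mathcal{T}'=\\{o,\\tau\\}$ is an ideal, and Proposition \\ref{pr:gsalg} gives $\\mathscr{A}=\\mathscr{A}(\\sigma)+\\big(\\mathscr{A}(o)+\\mathscr{A}(\\tau)\\big)$ with $\\mathscr{A}(\\sigma)\\cap\\big(\\mathscr{A}(o)+\\mathscr{A}(\\tau)\\big)=\\{0\\}$.\n\n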
If $\\sigma\\in \\mathcal{S}$ and\n\\begin{equation}\\label{eq:Aa}\n\\mathcal{S}_{\\geq\\sigma}=\\{\\tau\\in \\mathcal{S}\\mid \\tau\\geq\\sigma\\},\n\\quad%\n\\mathcal{S}_{\\not\\geq\\sigma}=\n\\mathcal{S}'_{\\geq\\sigma}=\\{\\tau\\in\\mathcal{S}\\mid\\tau\\not\\geq\\sigma\\},\n\\quad\n\\mathcal{S}_{\\leq\\sigma}=\\{\\tau\\in \\mathcal{S}\\mid \\tau\\leq\\sigma\\}\n\\end{equation}\nthen $\\mathcal{S}_{\\geq\\sigma}$ is a sub-semilattice and\n$\\mathcal{S}_{\\not\\geq\\sigma}$ and $\\mathcal{S}_{\\leq\\sigma}$ are ideals. So\n$\\mathscr{A}_{\\ge\\sigma}\\equiv\\mathscr{A}(\\mathcal{S}_{\\geq\\sigma})$ is a graded\n$C^*$-subalgebra of $\\mathscr{A}$ supported by $\\mathcal{S}_{\\geq\\sigma}$ and\n$\\mathscr{A}(\\mathcal{S}_{\\not\\geq\\sigma})$ is a graded ideal supported by\n$\\mathcal{S}_{\\not\\geq\\sigma}$ such that\n\\begin{equation}\\label{eq:dsum}\n\\mathscr{A}=\\mathscr{A}_{\\ge\\sigma}+\\mathscr{A}(\\mathcal{S}_{\\not\\geq\\sigma})\n\\quad\\text{with} \\quad\n\\mathscr{A}_{\\ge\\sigma}\\cap\\mathscr{A}(\\mathcal{S}_{\\not\\geq\\sigma})=\\{0\\}. \n\\end{equation}\nThe projection morphism $\\mathscr{P}_{\\ge\\sigma}:\\mathscr{A}\\to\\mathscr{A}_{\\geq\\sigma}$\ndefined by \\eqref{eq:dsum} is the unique linear continuous map\n$\\mathscr{P}_{\\geq\\sigma}:\\mathscr{A}\\rightarrow\\mathscr{A}$ such that $\\mathscr{P}_{\\geq\\sigma}A=A$ if\n$A\\in\\mathscr{A}(\\tau)$ for some $\\tau\\geq\\sigma$ and $\\mathscr{P}_{\\geq\\sigma}A=0$\notherwise.\n\n$\\mathcal{S}$ is called \\emph{atomic} if it has a smallest element\n$o\\equiv\\min \\mathcal{S}$ and if each $\\sigma\\neq o$ is minorated by an\natom. We denote by $\\mathcal{P}(\\mathcal{S})$ the set of atoms of $\\mathcal{S}$. If $\\mathcal{T}$\nis an ideal of $\\mathcal{S}$ and $\\mathcal{S}$ is atomic then $\\mathcal{T}$ is atomic, we\nhave $\\min\\mathcal{T}=\\min \\mathcal{S}$, and $\\mathcal{P}(\\mathcal{T})=\\mathcal{P}(\\mathcal{S})\\cap\\mathcal{T}$. This next\nresult is also easy to prove \\cite{DG3}.\n\n\\begin{theorem}\\label{th:ga} \n If $\\mathcal{S}$ is atomic then $\\mathscr{P}\n A=(\\mathscr{P}_{\\geq\\alpha}A)_{\\alpha\\in\\mathcal{P}(\\mathcal{S})}$ defines a morphism\n $\\mathscr{P}:\\mathscr{A}\\to\\prod_{\\alpha\\in\\mathcal{P}(\\mathcal{S})}\\mathscr{A}_{\\geq\\alpha}$ with\n $\\mathscr{A}(o)$ as kernel. This gives us a canonical embedding\n\\begin{equation}\\label{eq:quot}\n\\mathscr{A}\/\\mathscr{A}(o)\\subset\\textstyle{\\prod}_{\\alpha\\in\\mathcal{P}(\\mathcal{S})}\\mathscr{A}_{\\geq\\alpha}.\n\\end{equation}\n\\end{theorem}\n\nWe call this ``theorem'' because it has important consequences in\nthe spectral theory of many-body Hamiltonians: it allows us to\ncompute their essential spectrum and to prove the Mourre estimate.\n\nWe assume that $\\mathcal{S}$ is atomic so that $\\mathscr{A}$ comes equipped with a\nremarkable ideal $\\mathscr{A}(o)$. Then for $A\\in\\mathscr{A}$ we define its\n\\emph{essential spectrum} (relatively to $\\mathscr{A}(o)$) by the formula\n\\begin{equation}\\label{eq:eso}\n\\mathrm{Sp_{ess}}(A)\\equiv\\mathrm{Sp}(\\mathscr{P} A).\n\\end{equation}\nIn our concrete examples $\\mathscr{A}$ is represented on a Hilbert space\n$\\mathcal{H}$ and $\\mathscr{A}(o)= K(\\mathcal{H})$, so we get the usual Hilbertian notion of\nessential spectrum. \n\nIn order to extend this to unbounded operators it is convenient to\ndefine an \\emph{observable affiliated to $\\mathscr{A}$} as a morphism\n$H:\\cc_{\\mathrm{o}}(\\mathbb{R})\\to\\mathscr{A}$. 
We set $\\varphi(H)\\equiv H(\\varphi)$. If\n$\\mathscr{A}$ is realized on $\\mathcal{H}$ then a self-adjoint\noperator on $\\mathcal{H}$ such that $(H+i)^{-1}\\in\\mathscr{A}$ is said to be\naffiliated to $\\mathscr{A}$; then $H(\\varphi)=\\varphi(H)$ defines an\nobservable affiliated to $\\mathscr{A}$ (see Appendix A in \\cite{DG3} for a\nprecise description of the relation between observables and\nself-adjoint operators affiliated to $\\mathscr{A}$). The spectrum of an\nobservable is by definition the support of the morphism $H$:\n\\begin{equation}\\label{eq:sp}\n\\mathrm{Sp}(H)=\\{\\lambda\\in\\mathbb{R} \\mid\n\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R}),\\varphi(\\lambda)\\neq 0 \\Rightarrow\n\\varphi(H)\\neq0\\}. \n\\end{equation}\nNow note that $\\mathscr{P} H\\equiv\\mathscr{P}\\circ H$ is an observable affiliated to\nthe quotient algebra $\\mathscr{A}\/\\mathscr{A}(o)$ so we may define the essential\nspectrum of $H$ as the spectrum of $\\mathscr{P} H$. Explicitly, we get:\n\\begin{equation}\\label{eq:es1}\n\\mathrm{Sp_{ess}}(H)=\\{\\lambda\\in\\mathbb{R} \\mid \n\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R}),\\varphi(\\lambda)\\neq 0 \\Rightarrow\n\\varphi(H)\\notin \\mathscr{A}(o)\\}. \n\\end{equation}\nNow the first assertion of the next theorem follows immediately from\nTheorem \\ref{th:ga}. For the second assertion, see the proof of\nTheorem 2.10 in \\cite{DG2}. By $\\overline{\\cup}$ we denote the\nclosure of the union.\n\n\\begin{theorem}\\label{th:gas}\nLet $\\mathcal{S}$ be atomic. If $H$ is an observable affiliated to $\\mathscr{A}$\nthen $H_{\\geq\\alpha}=\\mathscr{P}_{\\geq\\alpha}H$ is an observable affiliated\nto $\\mathscr{A}_{\\geq\\alpha}$ and we have:\n\\begin{equation}\\label{eq:es2}\n\\mathrm{Sp_{ess}}(H)=\\overline{\\textstyle{\\bigcup}}_{\\alpha\\in\\mathcal{P}(\\mathcal{S})}\\mathrm{Sp}(H_{\\geq\\alpha}).\n\\end{equation}\nIf for each $A\\in\\mathscr{A}$ the set of $\\mathscr{P}_{\\geq\\alpha}A$ with\n$\\alpha\\in\\mathcal{P}(\\mathcal{S})$ is compact in $\\mathscr{A}$ then the union in\n\\eqref{eq:es2} is closed.\n\\end{theorem}\n\n\\subsection{Hilbert $C^*$-modules}\n\\label{ss:preh}\n\nSome basic knowledge of the theory of Hilbert\n$C^*$-modules is useful but not indispensable for understanding our\nconstructions. We translate here the necessary facts in a purely\nHilbert space language. Our main reference for the general theory\nof Hilbert $C^*$-modules is \\cite{La} but see also \\cite{Bl,RW}.\nThe examples of interest in this paper are the ``concrete'' Hilbert\n$C^*$-modules described below as Hilbert $C^*$-submodules of\n$L(\\mathcal{E},\\mathcal{F})$. We recall, however, the general definition.\n\n\nIf $\\mathscr{A}$ is a $C^*$-algebra then a \\emph{Banach $\\mathscr{A}$-module} is a\nBanach space $\\mathscr{M}$ equipped with a continuous bilinear map\n$\\mathscr{A}\\times\\mathscr{M}\\ni(A,M)\\mapsto MA\\in\\mathscr{M}$ such that $(MA)B=M(AB)$. We\ndenote $\\mathscr{M}\\cdot\\mathscr{A}$ the clspan of the elements $MA$ with $A\\in\\mathscr{A}$\nand $M\\in\\mathscr{M}$. By the Cohen-Hewitt theorem \\cite{FD} for each\n$N\\in\\mathscr{M}\\cdot\\mathscr{A}$ there are $A\\in\\mathscr{A}$ and $M\\in\\mathscr{M}$ such that\n$N=MA$, in particular $\\mathscr{M}\\cdot\\mathscr{A}=\\mathscr{M}\\mathscr{A}$. Note that by module we\nmean ``right module'' but the Cohen-Hewitt theorem is also valid for\nleft Banach modules.\n\nLet $\\mathscr{A}$ be a $C^*$-algebra. 
A (right) \\emph{Hilbert $\\mathscr{A}$-module}\nis a Banach $\\mathscr{A}$-module $\\mathscr{M}$ equipped with an $\\mathscr{A}$-valued\nsesquilinear map\n$\\braket{\\cdot}{\\cdot}\\equiv\\braket{\\cdot}{\\cdot}_\\mathscr{A}$ which is\npositive (i.e. $\\braket{M}{M}\\geq0$) $\\mathscr{A}$-sesquilinear\n(i.e. $\\braket{M}{NA}=\\braket{M}{N}A$) and such that\n$\\|M\\|\\equiv\\|\\braket{M}{M}\\|^{1\/2}$. Then $\\mathscr{M}=\\mathscr{M}\\mathscr{A}$. The clspan\nof the elements $\\braket{M}{M}$ is an ideal of $\\mathscr{A}$ denoted\n$\\braket{\\mathscr{M}}{\\mathscr{M}}$. One says that $\\mathscr{M}$ is \\emph{full} if\n$\\braket{\\mathscr{M}}{\\mathscr{M}}=\\mathscr{A}$. If $\\mathscr{A}$ is an ideal of a $C^*$-algebra\n$\\mathscr{C}$ then $\\mathscr{M}$ is equipped with an obvious structure of Hilbert\n$\\mathscr{C}$-module. Left Hilbert $\\mathscr{A}$-modules are defined similarly. \n\nIf $\\mathscr{M},\\mathscr{N}$ are Hilbert $\\mathscr{A}$-modules and $(M,N)\\in\\mathscr{M}\\times\\mathscr{N}$\nthen $M'\\mapsto N\\braket{M}{M'}$ is a linear continuous map\n$\\mathscr{M}\\to\\mathscr{N}$ denoted $\\ket{N}\\bra{M}$ or $NM^*$. The closed linear\nsubspace of $L(\\mathscr{M},\\mathscr{N})$ generated by these elements is denoted\n$\\mathcal{K}(\\mathscr{M},\\mathscr{N})$. There is a unique antilinear isometric map $T\\mapsto\nT^*$ of $\\mathcal{K}(\\mathscr{M},\\mathscr{N})$ onto $\\mathcal{K}(\\mathscr{N},\\mathscr{M})$ which sends\n$\\ket{N}\\bra{M}$ into $\\ket{M}\\bra{N}$. The space\n$\\mathcal{K}(\\mathscr{M})\\equiv\\mathcal{K}(\\mathscr{M},\\mathscr{M})$ is a $C^*$-algebra called\n\\emph{imprimitivity algebra} of the Hilbert $\\mathscr{A}$-module $\\mathscr{M}$.\n\nAssume that $\\mathscr{N}$ is a closed subspace of a Hilbert $\\mathscr{A}$-module $\\mathscr{M}$\nand let $\\braket{\\mathscr{N}}{\\mathscr{N}}$ be the clspan of the elements\n$\\braket{N}{N}$ in $\\mathscr{A}$. If $\\mathscr{N}$ is an $\\mathscr{A}$-submodule of $\\mathscr{M}$ then\nit inherits an obvious Hilbert $\\mathscr{A}$-module structure from $\\mathscr{M}$. If\n$\\mathscr{N}$ is not an $\\mathscr{A}$-submodule of $\\mathscr{M}$ it may happen that there is a\n$C^*$-subalgebra $\\mathscr{B}\\subset\\mathscr{A}$ such that $\\mathscr{N}\\mathscr{B}\\subset\\mathscr{N}$ and\n$\\braket{\\mathscr{N}}{\\mathscr{N}}\\subset\\mathscr{B}$. Then clearly we get a Hilbert\n$\\mathscr{B}$-module structure on $\\mathscr{N}$. On the other hand, it is clear that\nsuch a $\\mathscr{B}$ exists if and only if $\\mathscr{N}\\braket{\\mathscr{N}}{\\mathscr{N}}\\subset\\mathscr{N}$\nand then $\\braket{\\mathscr{N}}{\\mathscr{N}}$ is a $C^*$-subalgebra of $\\mathscr{A}$. Under\nthese conditions we say that \\emph{$\\mathscr{N}$ is a Hilbert $C^*$-submodule}\nof the Hilbert $\\mathscr{A}$-module $\\mathscr{M}$. Then $\\mathscr{N}$ inherits a Hilbert\n$\\braket{\\mathscr{N}}{\\mathscr{N}}$-module structure and this defines the\n$C^*$-algebra $\\mathcal{K}(\\mathscr{N})$. Moreover, if $\\mathscr{B}$ is as above then\n$\\mathcal{K}(\\mathscr{N})=\\mathcal{K}_\\mathscr{B}(\\mathscr{N})$.\n\nIf $\\mathscr{N}$ is a closed subspace of a Hilbert $\\mathscr{A}$-module $\\mathscr{M}$ then\nlet $\\mathcal{K}(\\mathscr{N}|\\mathscr{M})$ be the closed subspace of $\\mathcal{K}(\\mathscr{M})$ generated by\nthe elements $NN^*$ with $N\\in\\mathscr{N}$. 
It is easy to prove that\n\\emph{if $\\mathscr{N}$ is a Hilbert $C^*$-submodule of $\\mathscr{M}$ then\n$\\mathcal{K}(\\mathscr{N}|\\mathscr{M})$ is a $C^*$-subalgebra of $\\mathcal{K}(\\mathscr{M})$ and the map\n$T\\mapsto T|_\\mathscr{N}$ sends $\\mathcal{K}(\\mathscr{N}|\\mathscr{M})$ onto $\\mathcal{K}(\\mathscr{N})$ and is an\nisomorphism of $C^*$-algebras}. Then we identify $\\mathcal{K}(\\mathscr{N}|\\mathscr{M})$\nwith $\\mathcal{K}(\\mathscr{N})$.\n\nIf $\\mathcal{E},\\mathcal{F}$ are Hilbert spaces then we equip $L(\\mathcal{E},\\mathcal{F})$ with the\nHilbert $L(\\mathcal{E})$-module structure defined as follows: the\n$C^*$-algebra $L(\\mathcal{E})$ acts to the right by composition and we take\n$\\braket{M}{N}=M^*N$ as inner product, where $M^*$ is the usual\nadjoint of the operator $M$. Note that $L(\\mathcal{E},\\mathcal{F})$ is also equipped\nwith a natural left Hilbert $L(\\mathcal{F})$-module structure: this time the\ninner product is $MN^*$.\n\nIf $\\mathscr{M}\\subset L(\\mathcal{E},\\mathcal{F})$ is a linear subspace then $\\mathscr{M}^*\\subset\nL(\\mathcal{F},\\mathcal{E})$ is the set of adjoint operators $M^*$ with $M\\in\\mathscr{M}$.\nClearly $\\mathscr{M}_1\\subset\\mathscr{M}_2\\Rightarrow\\mathscr{M}_1^*\\subset\\mathscr{M}_2^*$. If\n$\\mathcal{G}$ is a third Hilbert spaces and $\\mathscr{N}\\subset L(\\mathcal{F},\\mathcal{G})$ is a\nlinear subspace then $(\\mathscr{N}\\cdot\\mathscr{M})^*=\\mathscr{M}^*\\cdot\\mathscr{N}^*$. In\nparticular, if $\\mathcal{E}=\\mathcal{F}=\\mathcal{G}$, $\\mathscr{M}=\\mathscr{M}^*$, and $\\mathscr{N}=\\mathscr{N}^*$ then\n$\\mathscr{M}\\cdot\\mathscr{N}\\subset\\mathscr{N}\\cdot\\mathscr{M}$ is equivalent to\n$\\mathscr{M}\\cdot\\mathscr{N}=\\mathscr{N}\\cdot\\mathscr{M}$.\n\n\nNow let $\\mathscr{M}\\subset L(\\mathcal{E},\\mathcal{F})$ be a closed linear subspace. Then\n\\emph{$\\mathscr{M}$ is a Hilbert $C^*$-submodule of $L(\\mathcal{E},\\mathcal{F})$ if and only\n if $\\mathscr{M}\\mr^*\\mathscr{M}\\subset\\mathscr{M}$}.\n\nThese are the ``concrete'' Hilbert $C^*$-modules we are interested\nin. It is clear that $\\mathscr{M}^*$ will be a Hilbert $C^*$-submodule of\n$L(\\mathcal{F},\\mathcal{E})$. We mention that $\\mathscr{M}^*$ is canonically identified with\nthe left Hilbert $\\mathscr{A}$-module $\\mathcal{K}(\\mathscr{M},\\mathscr{A})$ dual to $\\mathscr{M}$. \n\n\\begin{proposition}\\label{pr:ss}\nLet $\\mathcal{E},\\mathcal{F}$ be Hilbert spaces and let $\\mathscr{M}$ be a Hilbert\n$C^*$-submodule of $L(\\mathcal{E},\\mathcal{F})$. Then $\\mathscr{A}\\equiv\\mathscr{M}^*\\cdot\\mathscr{M}$ and\n$\\mathscr{B}\\equiv\\mathscr{M}\\cdot\\mathscr{M}^*$ are $C^*$-algebras of operators on $\\mathcal{E}$\nand $\\mathcal{F}$ respectively and $\\mathscr{M}$ is equipped with a canonical\nstructure of $(\\mathscr{B},\\mathscr{A})$ imprimitivity bimodule.\n\\end{proposition}\n\nFor the needs of this paper the last assertion of the proposition\ncould be interpreted as a definition.\n\n\\begin{proposition}\\label{pr:clsubmod}\nLet $\\mathscr{N}$ be a $C^*$-submodule of $L(\\mathcal{E},\\mathcal{F})$ such that\n$\\mathscr{N}\\subset\\mathscr{M}$ and $\\mathscr{N}^*\\cdot\\mathscr{N}=\\mathscr{M}^*\\cdot\\mathscr{M}$,\n$\\mathscr{N}\\cdot\\mathscr{N}^*=\\mathscr{M}\\cdot\\mathscr{M}^*$. 
Then $\\mathscr{N}=\\mathscr{M}$.\n\\end{proposition}\n\\proof If $M\\in\\mathscr{M}$ and $N\\in\\mathscr{N}$ then $MN^*\\in\\mathscr{B}=\\mathscr{N}\\cdot\\mathscr{N}^*$ and\n$\\mathscr{N}\\rn^*\\mathscr{N}\\subset\\mathscr{N}$ hence $MN^*N\\in\\mathscr{N}$. Since $\\mathscr{N}^*\\cdot\\mathscr{N}=\\mathscr{A}$\nwe get $MA\\in\\mathscr{N}$ for all $A\\in\\mathscr{A}$. Let $A_i$ be an approximate\nidentity for the $C^*$-algebra $\\mathscr{A}$. Since one can factorize $M=M'A'$\nwith $M'\\in\\mathscr{M}$ and $A'\\in\\mathscr{A}$ the sequence $MA_i=M'A'A_i$ converges\nto $ M'A'=M$ in norm. Thus $M\\in\\mathscr{N}$.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\begin{proposition}\\label{pr:2ss}\nLet $\\mathcal{E},\\mathcal{F},\\mathcal{H}$ be Hilbert spaces and let $\\mathscr{M}\\subset L(\\mathcal{H},\\mathcal{E})$\nand $\\mathscr{N}\\subset L(\\mathcal{H},\\mathcal{F})$ be Hilbert $C^*$-submodules. Let $\\mathscr{A}$ be\na $C^*$-algebra of operators on $\\mathcal{H}$ such that $\\mathscr{M}^*\\cdot\\mathscr{M}$ and\n$\\mathscr{N}^*\\cdot\\mathscr{N}$ are ideals of $\\mathscr{A}$ and let us view $\\mathscr{M}$ and $\\mathscr{N}$ as\nHilbert $\\mathscr{A}$-modules. Then $\\mathcal{K}(\\mathscr{M},\\mathscr{N})\\cong\\mathscr{N}\\cdot\\mathscr{M}^*$ the\nisometric isomorphism being determined by the condition\n$\\ket{N}\\bra{M}=NM^*$. \n\\end{proposition}\n\n\\subsection{Graded Hilbert $C^*$-modules}\n\\label{ss:gf}\n\nThis is due to Georges Skandalis \\cite{Sk} (see also Remark\n\\ref{re:squant}).\n\n\\begin{definition}\\label{df:grm}\nLet $\\mathcal{S}$ be a semilattice and $\\mathscr{A}$ an $\\mathcal{S}$-graded\n$C^*$-algebra. A Hilbert $\\mathscr{A}$-module $\\mathscr{M}$ is an \\emph{$\\mathcal{S}$-graded\n Hilbert $\\mathscr{A}$-module} if a linearly independent family\n$\\{\\mathscr{M}(\\sigma)\\}_{\\sigma\\in \\mathcal{S}}$ of closed subspaces of $\\mathscr{M}$ is\ngiven such that $\\sum_\\sigma\\mathscr{M}(\\sigma)$ is dense in $\\mathscr{M}$ and:\n\\begin{equation}\\label{eq:grm}\n\\mathscr{M}(\\sigma)\\mathscr{A}(\\tau)\\subset\\mathscr{M}(\\sigma\\wedge\\tau) \n\\hspace{2mm}\\text{and}\\hspace{2mm}\n\\braket{\\mathscr{M}(\\sigma)}{\\mathscr{M}(\\tau)}\\subset\\mathscr{A}(\\sigma\\wedge\\tau)\n\\hspace{2mm} \\text{for all } \\sigma,\\tau\\in \\mathcal{S}.\n\\end{equation}\n\\end{definition}\nNote that $\\mathscr{A}$ equipped with its canonical Hilbert $\\mathscr{A}$-module\nstructure is an $\\mathcal{S}$-graded Hilbert \\mbox{$\\mathscr{A}$-module}. \n\\eqref{eq:grm} implies that each $\\mathscr{M}(\\sigma)$ is a\nHilbert $\\mathscr{A}(\\sigma)$-module and if $\\sigma\\leq\\tau$ then\n$\\mathscr{M}(\\sigma)$ is an $\\mathscr{A}(\\tau)$-module.\n\n\nFrom \\eqref{eq:grm} we also see that \\emph{the imprimitivity algebra\n $\\mathcal{K}(\\mathscr{M}(\\sigma))$ of the Hilbert $\\mathscr{A}(\\sigma)$-module\n $\\mathscr{M}(\\sigma)$ is naturally identified with the clspan in\n $\\mathcal{K}(\\mathscr{M})$ of the elements $MM^*$ with $M\\in\\mathscr{M}(\\sigma)$}. Thus\n$\\mathcal{K}(\\mathscr{M}(\\sigma))$ is identified with a $C^*$-subalgebra of\n$\\mathcal{K}(\\mathscr{M})$. We use this identification below.\n\n\\begin{theorem}\\label{th:kghm}\nIf $\\mathscr{M}$ is a graded Hilbert $\\mathscr{A}$-module then $\\mathcal{K}(\\mathscr{M})$ becomes a\ngraded $C^*$-algebra if we define\n$\\mathcal{K}(\\mathscr{M})(\\sigma)=\\mathcal{K}(\\mathscr{M}(\\sigma))$. 
If $M\\in\\mathscr{M}(\\sigma)$ and\n$N\\in\\mathscr{M}(\\tau)$ then there are elements $M'$ and $N'$ in\n$\\mathscr{M}(\\sigma\\wedge\\tau)$ such that $MN^*=M'N'^*$;\nin particular $MN^*\\in\\mathcal{K}(\\mathscr{M})(\\sigma\\wedge\\tau)$.\n\\end{theorem}\n\\proof As explained before, $\\mathcal{K}(\\mathscr{M})(\\sigma)$ are $C^*$-subalgebras\nof $\\mathcal{K}(\\mathscr{M})$. To show that they are linearly independent, let\n$T(\\sigma)\\in\\mathcal{K}(\\mathscr{M})(\\sigma)$ such that $T(\\sigma)=0$ but for a\nfinite number of $\\sigma$ and assume $\\sum_\\sigma T(\\sigma)=0$. Then\nfor each $M\\in\\mathscr{M}$ we have $\\sum_\\sigma T(\\sigma)M=0$. Note that the\nrange of $T(\\sigma)$ is included in $\\mathscr{M}(\\sigma)$. Since the linear\nspaces $\\mathscr{M}(\\sigma)$ are linearly independent we get $T(\\sigma)M=0$\nfor all $\\sigma$ and $M$ hence $T(\\sigma)=0$ for all $\\sigma$.\n\nWe now prove the second assertion of the proposition. Since\n$\\mathscr{M}(\\sigma)$ is a Hilbert $\\mathscr{A}(\\sigma)$-module there are\n$M_1\\in\\mathscr{M}(\\sigma)$ and $S\\in\\mathscr{A}(\\sigma)$ such that $M=M_1S$, cf. the\nCohen-Hewitt theorem or Lemma 4.4 in \\cite{La}. Similarly, $N=N_1T$\nwith $N_1\\in\\mathscr{M}(\\tau)$ and $T\\in\\mathscr{A}(\\tau)$. Then $MN^*=M_1(S\nT^*)N_1^*$ and $S T^*\\in \\mathscr{A}(\\sigma\\wedge\\tau)$ so we may factorize it\nas $S T^*=UV^*$ with $U,V\\in \\mathscr{A}(\\sigma\\wedge\\tau)$, hence\n$MN^*=(M_1U)(N_1V)^*$. By using \\eqref{eq:grm} we see that $M'=M_1U$\nand $N'=N_1V$ belong to $\\mathscr{M}(\\sigma\\wedge\\tau)$. In particular, we\nhave $MN^*\\in\\mathcal{K}(\\mathscr{M})(\\sigma\\wedge\\tau)$ if $M\\in\\mathscr{M}(\\sigma)$ and\n$N\\in\\mathscr{M}(\\tau)$.\n\nObserve that the assertion we just proved implies that\n$\\sum_\\sigma\\mathcal{K}(\\mathscr{M})(\\sigma)$ is dense in $\\mathcal{K}(\\mathscr{M})$. \nIt remains to see that\n$\\mathcal{K}(\\mathscr{M})(\\sigma)\\mathcal{K}(\\mathscr{M})(\\tau)\\subset\\mathcal{K}(\\mathscr{M})(\\sigma\\wedge\\tau)$.\nFor this it suffices that $M\\braket{M}{N}N^*$ be in\n$\\mathcal{K}(\\mathscr{M})(\\sigma\\wedge\\tau)$ if $M\\in\\mathscr{M}(\\sigma)$ and\n$N\\in\\mathscr{M}(\\tau)$. Since $\\braket{M}{N}\\in\\mathscr{A}(\\sigma\\wedge\\tau)$ we\nmay write $\\braket{M}{N}=S T^*$ with $S,T\\in\\mathscr{A}(\\sigma\\wedge\\tau)$\nso $M\\braket{M}{N}N^*=(MS)(NT)^*\\in\\mathcal{K}(\\mathscr{M})(\\sigma\\wedge\\tau)$ by\n\\eqref{eq:grm}. \n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip \n\n\nWe recall that the direct sum of a family $\\{\\mathscr{M}_i\\}$ of Hilbert\n$\\mathscr{A}$-modules is defined as follows: $\\oplus_i\\mathscr{M}_i$ is the space of\nelements $(M_i)_i\\in\\prod_i\\mathscr{M}_i$ such that the series\n$\\sum_i\\braket{M_i}{M_i}$ converges in $\\mathscr{A}$ equipped with the natural\n$\\mathscr{A}$-module structure and with the $\\mathscr{A}$-valued inner product defined\nby\n\\begin{equation}\\label{eq:sum}\n\\braket{(M_i)_i}{(N_i)_i} \n=\\textstyle\\sum_i\\braket{M_i}{N_i}.\n\\end{equation} \nThe algebraic direct sum of the $\\mathscr{A}$-modules $\\mathscr{M}_i$ is dense in\n$\\oplus_i\\mathscr{M}_i$.\n\n\nIt is easy to check that if each $\\mathscr{M}_i$ is graded and if we set\n$\\mathscr{M}(\\sigma)=\\oplus_i\\mathscr{M}_i(\\sigma)$ then $\\mathscr{M}$ becomes a graded\nHilbert $\\mathscr{A}$-module. 
For example, if $\\mathscr{N}$ is a graded Hilbert\n$\\mathscr{A}$-module then $\\mathscr{N}\\oplus\\mathscr{A}$ is a graded Hilbert $\\mathscr{A}$-module and\nso the \\emph{linking algebra $\\mathcal{K}(\\mathscr{N}\\oplus\\mathscr{A})$ is equipped with a\ngraded algebra structure}. We recall \\cite[p. 50-52]{RW} that we\nhave a natural identification\n\\begin{equation}\\label{eq:link}\n\\mathcal{K}(\\mathscr{N}\\oplus\\mathscr{A})=\n\\begin{pmatrix}\n\\mathcal{K}(\\mathscr{N})& \\mathscr{N}\\\\\n\\mathscr{N}^*& \\mathscr{A}\n\\end{pmatrix}\n\\end{equation}\nand by Theorem \\ref{th:kghm} this is a graded algebra whose\n$\\sigma$-component is equal to\n\\begin{equation}\\label{eq:links}\n\\mathcal{K}(\\mathscr{N}(\\sigma)\\oplus\\mathscr{A}(\\sigma))=\n\\begin{pmatrix}\n\\mathcal{K}(\\mathscr{N}(\\sigma))& \\mathscr{N}(\\sigma)\\\\\n\\mathscr{N}(\\sigma)^*& \\mathscr{A}(\\sigma)\n\\end{pmatrix}.\n\\end{equation}\nIf $\\mathscr{N}$ is a $C^*$-submodule of $L(\\mathcal{E},\\mathcal{F})$ and if we set\n$\\mathscr{N}^*\\cdot\\mathscr{N}=\\mathscr{A},\\mathscr{N}\\cdot\\mathscr{N}^*=\\mathscr{B}$ then the linking algebra\n$\\begin{pmatrix}\\mathscr{B}& \\mathscr{N}\\\\\\mathscr{N}^*& \\mathscr{A}\\end{pmatrix}$ of $\\mathscr{N}$ is a\n$C^*$-algebra of operators on $\\mathcal{F}\\oplus\\mathcal{E}$.\n\nSome of the graded Hilbert $C^*$-modules which we shall use later on\nwill be constructed as follows.\n\n\\begin{proposition}\\label{pr:rhm}\nLet $\\mathcal{E},\\mathcal{F}$ be Hilbert spaces and let $\\mathscr{M}\\subset L(\\mathcal{E},\\mathcal{F})$ be a\nHilbert $C^*$-submodule, so that $\\mathscr{A}\\equiv\\mathscr{M}^*\\cdot\\mathscr{M}\\subset\nL(\\mathcal{E})$ is a $C^*$-algebra and $\\mathscr{M}$ is a full Hilbert\n$\\mathscr{A}$-module. Let $\\mathcal{C}$ be a $C^*$-algebra of operators on $\\mathcal{E}$\ngraded by the family of $C^*$-subalgebras\n$\\{\\mathcal{C}(\\sigma)\\}_{\\sigma\\in\\mathcal{S}}$. Assume that we have\n\\begin{equation}\\label{eq:tas}\n\\mathscr{A}\\cdot\\mathcal{C}(\\sigma)=\\mathcal{C}(\\sigma)\\cdot\\mathscr{A}\n\\equiv\\mathscr{C}(\\sigma)\n\\hspace{2mm} \\text{for all } \\sigma\\in\\mathcal{S}\n\\end{equation}\nand that the family $\\{\\mathscr{C}(\\sigma)\\}$ of subspaces of $L(\\mathcal{E})$ is\nlinearly independent. Then the $\\mathscr{C}(\\sigma)$ are $C^*$-algebras of\noperators on $\\mathcal{E}$ and $\\mathscr{C}=\\sum^\\mathrm{c}_\\sigma\\mathscr{C}(\\sigma)$ is a\n$C^*$-algebra graded by the family $\\{\\mathscr{C}(\\sigma)\\}$. If\n$\\mathscr{N}(\\sigma)\\equiv\\mathscr{M}\\cdot\\mathcal{C}(\\sigma)$ then\n$\\mathscr{N}=\\sum^\\mathrm{c}_\\sigma\\mathscr{N}(\\sigma)$ is a full Hilbert $\\mathscr{C}$-module\ngraded by $\\{\\mathscr{N}(\\sigma)\\}$.\n\\end{proposition}\n\\proof\nWe have\n$$\n\\mathscr{C}(\\sigma)\\cdot\\mathscr{C}(\\tau)=\\mathscr{A}\\cdot\\mathcal{C}(\\sigma)\\cdot\\mathscr{A}\\cdot\\mathcal{C}(\\tau)\n=\\mathscr{A}\\cdot\\mathscr{A}\\cdot\\mathcal{C}(\\sigma)\\cdot\\mathcal{C}(\\tau)\\subset\n\\mathscr{A}\\cdot\\mathcal{C}(\\sigma\\wedge\\tau)=\\mathscr{C}(\\sigma\\wedge\\tau).\n$$ \nThis proves that the $\\mathscr{C}(\\sigma)$ are $C^*$-algebras and that\n$\\mathscr{C}$ is $\\mathcal{S}$-graded. 
Then:\n$$\n\\mathscr{N}(\\sigma)\\cdot\\mathscr{C}(\\tau)=\\mathscr{M}\\cdot\\mathcal{C}(\\sigma)\\cdot\\mathcal{C}(\\tau)\\cdot\\mathscr{A}\n\\subset\\mathscr{M}\\cdot\\mathcal{C}(\\sigma\\wedge\\tau)\\cdot\\mathscr{A}=\n\\mathscr{M}\\cdot\\mathscr{A}\\cdot\\mathcal{C}(\\sigma\\wedge\\tau)=\n\\mathscr{M}\\cdot\\mathcal{C}(\\sigma\\wedge\\tau)=\\mathscr{N}(\\sigma\\wedge\\tau)\n$$\nand\n$$\n\\mathscr{N}(\\sigma)^*\\cdot\\mathscr{N}(\\tau)=\n\\mathcal{C}(\\sigma)\\cdot\\mathscr{M}^*\\cdot\\mathscr{M}\\cdot\\mathcal{C}(\\tau)=\n\\mathcal{C}(\\sigma)\\cdot\\mathscr{A}\\cdot\\mathcal{C}(\\tau)=\n\\mathscr{A}\\cdot\\mathcal{C}(\\sigma)\\cdot\\mathcal{C}(\\tau)\\subset\n\\mathscr{A}\\cdot\\mathcal{C}(\\sigma\\wedge\\tau)=\\mathscr{C}(\\sigma\\wedge\\tau).\n$$\nObserve that this computation also gives\n$\\mathscr{N}(\\sigma)^*\\cdot\\mathscr{N}(\\sigma)=\\mathscr{C}(\\sigma)$. Then\n$$\n\\big({\\textstyle\\sum_\\sigma}\\mathscr{N}(\\sigma)^*\\big)\n\\big({\\textstyle\\sum_\\sigma}\\mathscr{N}(\\sigma)\\big)=\n{\\textstyle\\sum_{\\sigma,\\tau}}\\mathscr{N}(\\sigma)^*\\mathscr{N}(\\tau)\\subset\n{\\textstyle\\sum_{\\sigma,\\tau}}\\mathscr{C}(\\sigma\\wedge\\tau)\\subset\n{\\textstyle\\sum_{\\sigma}}\\mathscr{C}(\\sigma)\n$$ and by the preceding remark we get $\\mathscr{N}^*\\cdot\\mathscr{N}=\\mathscr{C}$ so $\\mathscr{N}$\nis a full Hilbert $\\mathscr{C}$-module. To show the grading property it\nsuffices to prove that the family of subspaces $\\mathscr{N}(\\sigma)$ is\nlinearly independent. Assume that $\\sum N(\\sigma)=0$ with\n$N(\\sigma)\\in\\mathscr{N}(\\sigma)$ and $N(\\sigma)=0$ for all but a finite\nnumber of $\\sigma$. Assuming that there are non-zero elements in\nthis sum, let $\\tau$ be a maximal element of the set of $\\sigma$\nsuch that $N(\\sigma)\\neq0$. From\n$\\sum_{\\sigma_1,\\sigma_2}N(\\sigma_1)^*N(\\sigma_2)=0$ and since\n$N(\\sigma_1)^*N(\\sigma_2)\\in\\mathscr{C}(\\sigma_1\\wedge\\sigma_2)$ we get\n$\\sum_{\\sigma_1\\wedge\\sigma_2=\\sigma}N(\\sigma_1)^*N(\\sigma_2)=0$ for\neach $\\sigma$. Take here $\\sigma=\\tau$ and observe that if\n$\\sigma_1\\wedge\\sigma_2=\\tau$ and $\\sigma_1>\\tau$ or $\\sigma_2>\\tau$\nthen $N(\\sigma_1)^*N(\\sigma_2)=0$. Thus $N(\\tau)^*N(\\tau)=0$ so\n$N(\\tau)=0$. But this contradicts the choice of $\\tau$, so\n$N(\\sigma)=0$ for all $\\sigma$. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\subsection{Tensor products}\n\\label{ss:ha}\n\nIn this subsection we collect some facts concerning tensor products\nwhich are useful in what follows. We recall the definition of the\ntensor product of a Hilbert space $\\mathcal{E}$ and a \\mbox{$C^*$-algebra}\n$\\mathscr{A}$ in the category of Hilbert $C^*$-modules, cf. \\cite{La}. We\nequip the algebraic tensor product $\\mathcal{E}\\odot\\mathscr{A}$ with the obvious\nright $\\mathscr{A}$-module structure and with the $\\mathscr{A}$-valued sesquilinear\nmap given by\n\\begin{equation}\\label{eq:hoa}\n\\braket{{\\textstyle\\sum_{u\\in\\mathcal{E}}}u\\otimes\n A_u}{{\\textstyle\\sum_{v\\in\\mathcal{E}}}v\\otimes B_v} \n={\\textstyle\\sum_{u,v}}\\braket{u}{v}A^*_uB_v\n\\end{equation}\nwhere $A_u=B_u=0$ outside a finite set. Then the completion of\n$\\mathcal{E}\\odot\\mathscr{A}$ for the norm $\\|M\\|:=\\|\\braket{M}{M}\\|^{1\/2}$ is a\nfull Hilbert $\\mathscr{A}$-module denoted $\\mathcal{E}\\otimes\\mathscr{A}$. 
Clearly its\nimprimitivity algebra is\n\\begin{equation}\\label{eq:cha}\n\\mathcal{K}(\\mathcal{E}\\otimes\\mathscr{A})=K(\\mathcal{E})\\otimes\\mathscr{A}.\n\\end{equation} \nIf $\\mathscr{A}$ is $\\mathcal{S}$-graded then $\\mathcal{E}\\otimes\\mathscr{A}$ is equipped with an\nobvious structure of $\\mathcal{S}$-graded Hilbert $\\mathscr{A}$-module.\n\nIf $\\mathscr{A}$ is realized on a Hilbert space $\\mathcal{F}$ then one has a natural\nisometric embedding $\\mathcal{E}\\otimes\\mathscr{A} \\subset L(\\mathcal{F},\\mathcal{E}\\otimes\\mathcal{F})$.\nIndeed, there is a unique linear map $\\mathcal{E}\\otimes\\mathscr{A}\\to\nL(\\mathcal{F},\\mathcal{E}\\otimes\\mathcal{F})$ which associates to $u\\otimes A$ the function\n$f\\mapsto u\\otimes(Af)$ and due to \\eqref{eq:hoa} this map is an\nisometry. Thus the Hilbert $\\mathscr{A}$-module $\\mathcal{E}\\otimes\\mathscr{A}$ is realized\nas a Hilbert $C^*$-submodule of $L(\\mathcal{F},\\mathcal{E}\\otimes\\mathcal{F})$, the dual\nmodule is realized as the set of adjoint operators\n$(\\mathcal{E}\\otimes\\mathscr{A})^*\\subset L(\\mathcal{E}\\otimes\\mathcal{F},\\mathcal{E})$, and one clearly has\n\\begin{equation}\\label{eq:etens}\n(\\mathcal{E}\\otimes\\mathscr{A})^*\\cdot (\\mathcal{E}\\otimes\\mathscr{A})=\\mathscr{A}, \\hspace{2mm}\n(\\mathcal{E}\\otimes\\mathscr{A})\\cdot(\\mathcal{E}\\otimes\\mathscr{A})^*=K(\\mathcal{E})\\otimes\\mathscr{A}.\n\\end{equation} \n\n\nIf $X$ is a locally compact space equipped with a Radon measure then\n$L^2(X)\\otimes\\mathscr{A}$ is the completion of $\\cc_{\\mathrm{c}}(X;\\mathscr{A})$ for the norm\n$\\|\\int_X F(x)^*F(x) \\text{d} x\\|^{1\/2}$. Note that $L^2(X;\\mathscr{A})\\subset\nL^2(X)\\otimes\\mathscr{A}$ strictly in general, cf.\\ the example below. If\n$\\mathscr{A}\\subset L(\\mathcal{F})$ then for $F\\in\\cc_{\\mathrm{c}}(X;\\mathscr{A})$ this norm is given by\n\\begin{equation}\\label{eq:L2a}\n\\|F\\|^2=\\|{\\textstyle\\int_X} F(x)^*F(x) \\text{d} x\\|=\n{\\textstyle\\sup_{f\\in\\mathcal{F},\\|f\\|=1}} {\\textstyle\\int_X} \n\\|F(x)f\\|^2 \\text{d} x.\n\\end{equation}\nIf $Y$ is a locally compact space then\n$\\mathcal{E}\\otimes\\cc_{\\mathrm{o}}(Y)\\cong\\cc_{\\mathrm{o}}(Y;\\mathcal{E})$. Hence $L^2(X)\\otimes\\cc_{\\mathrm{o}}(Y)$ is\nthe completion of $\\cc_{\\mathrm{c}}(X\\times Y)$ for the norm $\\sup_{y\\in\n Y}(\\int_X |F(x,y)|^2 \\text{d} x)^{1\/2}$. Assume that $X=Y$ is a\nlocally compact abelian group and let $f\\in L^\\infty(X)$ with\ncompact support and $g\\in L^2(X)$. It is easy to check that\n$F(x,y)=f(x)g(x+y)$ is an element of\n$\\cc_{\\mathrm{o}}(X;L^2(X))=L^2(X)\\otimes\\cc_{\\mathrm{o}}(X)$ but if $F(x,\\cdot)=f(x)U_x g$ is\nnot zero then it does not belong to $\\cc_{\\mathrm{o}}(X)$ and is not even a bounded\nfunction if $g$ is not. Thus the elements of $L^2(X)\\otimes\\mathscr{A}$ cannot\nbe realized as bounded operator-valued (equivalence classes of)\nfunctions on $X$.\n\n\n\nMore generally, if $\\mathcal{F}'$, $\\mathcal{F}''$ are Hilbert spaces and\n$\\mathscr{M}\\subset L(\\mathcal{F}',\\mathcal{F}'')$ is a closed subspace then we define\n$L^2(X)\\otimes\\mathscr{M}$ as the completion of the space $\\cc_{\\mathrm{c}}(X;\\mathscr{M})$ for a\nnorm similar to \\eqref{eq:L2a}. 
We clearly have\n$L^2(X)\\otimes\\mathscr{M}\\subset L(\\mathcal{F}',L^2(X)\\otimes\\mathcal{F}'')$ isometrically\nand $L^2(X;\\mathscr{M})\\subset L^2(X)\\otimes\\mathscr{M}$ continuously.\n\n\n\nIf $\\mathcal{E},\\mathcal{F},\\mathcal{G},\\mathcal{H}$ are Hilbert spaces and $\\mathscr{M}\\subset L(\\mathcal{E},\\mathcal{F})$\nand $\\mathscr{N}\\subset L(\\mathcal{G},\\mathcal{H})$ are closed linear subspaces then we\ndenote $\\mathscr{M}\\otimes\\mathscr{N}$ the closure in\n$L(\\mathcal{E}\\otimes\\mathcal{G},\\mathcal{F}\\otimes\\mathcal{H})$ of the algebraic tensor product of\n$\\mathscr{M}$ and $\\mathscr{N}$. Now suppose that $\\mathscr{M}$ is a $C^*$-submodule of\n$L(\\mathcal{E},\\mathcal{F})$ and that $\\mathscr{N}$ is a $C^*$-submodule of $L(\\mathcal{G},\\mathcal{H})$ and\nlet $\\mathscr{A}=\\mathscr{M}^*\\cdot\\mathscr{M}$ and $\\mathscr{B}=\\mathscr{N}^*\\cdot\\mathscr{N}$. Then $\\mathscr{M}$ is a\nHilbert $\\mathscr{A}$-module and $\\mathscr{N}$ is a Hilbert $\\mathscr{B}$-module hence the\nexterior tensor product, denoted temporarily\n$\\mathscr{M}\\otimes_{\\text{ext}}\\mathscr{N}$, is well defined in the category of\nHilbert $C^*$-modules \\cite{La} and is a Hilbert\n$\\mathscr{A}\\otimes\\mathscr{B}$-module. On the other hand, it is easy to check that\n$(\\mathscr{M}\\otimes\\mathscr{N})^*=\\mathscr{M}^*\\otimes\\mathscr{N}^*$ and then that $\\mathscr{M}\\otimes\\mathscr{N}$\nis a Hilbert $C^*$-submodule of $L(\\mathcal{E}\\otimes\\mathcal{G},\\mathcal{F}\\otimes\\mathcal{H})$\nsuch that\n$(\\mathscr{M}\\otimes\\mathscr{N})^*\\cdot(\\mathscr{M}\\otimes\\mathscr{N})=\\mathscr{A}\\otimes\\mathscr{B}$. Finally, it\nis clear that $L(\\mathcal{E}\\otimes\\mathcal{G},\\mathcal{F}\\otimes\\mathcal{H})$ and\n$\\mathscr{M}\\otimes_{\\text{ext}}\\mathscr{N}$ induce the same $\\mathscr{A}\\otimes\\mathscr{B}$-valued\ninner product on the algebraic tensor product of $\\mathscr{M}$ and $\\mathscr{N}$.\nThus we we get a canonical isometric isomorphism\n$\\mathscr{M}\\otimes_{\\text{ext}}\\mathscr{N}=\\mathscr{M}\\otimes\\mathscr{N}$.\n\n\nAs an application we give now an abstract version of the \"toy\nmodels\" described in Example \\ref{ex:fried}. Let $\\mathcal{E},\\mathcal{F}$ be\nHilbert spaces and let us define $\\mathcal{H}=(\\mathcal{E}\\otimes\\mathcal{F})\\oplus\\mathcal{F}$.\nLet $\\mathscr{A}$ and $\\mathscr{B}$ be $C^*$-algebras of operators on $\\mathcal{F}$ and\n$\\mathcal{E}\\otimes\\mathcal{F}$ respectively. We embed $\\mathcal{E}\\otimes\\mathscr{A} \\subset\nL(\\mathcal{F},\\mathcal{E}\\otimes\\mathcal{F})$ as above. We simplify notation and denote\n$\\mathcal{E}^*\\otimes\\mathscr{A}:=(\\mathcal{E}\\otimes\\mathscr{A})^*\\subset L(\\mathcal{E}\\otimes\\mathcal{F},\\mathcal{F})$ the\ndual module.\n\n\\begin{proposition}\\label{pr:toymodel}\n Let $\\mathcal{S}$ be a semilattice and $\\mathcal{T}$ an ideal of $\\mathcal{S}$. Assume\n that the $C^*$-algebras $\\mathscr{A}$ and $\\mathscr{B}$ are $\\mathcal{S}$-graded and that\n we have $\\mathscr{A}(\\sigma)=\\{0\\}$ if $\\sigma\\notin\\mathcal{T}$ and $\\mathscr{B}(\\tau)=\n K(\\mathcal{E})\\otimes \\mathscr{A}(\\tau)$ for $\\tau\\in\\mathcal{T}$. 
Then\n\\begin{equation}\\label{e:toy}\n\\mathscr{C}=\n\\begin{pmatrix}\n\\mathscr{B} & \\mathcal{E}\\otimes\\mathscr{A}\\\\\n\\mathcal{E}^*\\otimes\\mathscr{A} & \\mathscr{A}\n\\end{pmatrix}.\n\\end{equation}\nis an $\\mathcal{S}$-graded $C^*$-algebra if we define its components as\nfollows:\n\\begin{equation}\\label{e:gtoy}\n\\mathscr{C}(\\sigma)=\n\\begin{pmatrix}\n\\mathscr{B}(\\sigma) & \\mathcal{E}\\otimes\\mathscr{A}(\\sigma)\\\\\n\\mathcal{E}^*\\otimes\\mathscr{A}(\\sigma) & \\mathscr{A}(\\sigma)\n\\end{pmatrix} \\quad \\text{for all} \\quad \\sigma\\in\\mathcal{S}.\n\\end{equation}\n\\end{proposition}\n\\proof\nObserve that if we set $\\mathcal{T}'=\\mathcal{S}\\setminus\\mathcal{T}$ then\n\\begin{equation}\\label{e:imptoy}\n\\mathscr{C}=\n\\begin{pmatrix}\nK(\\mathcal{E})\\otimes\\mathscr{A} & \\mathcal{E}\\otimes\\mathscr{A}\\\\\n\\mathcal{E}^*\\otimes\\mathscr{A} & \\mathscr{A}\n\\end{pmatrix} +\n\\begin{pmatrix}\n\\mathscr{B}(\\mathcal{T}') & 0 \\\\\n0 & 0\n\\end{pmatrix} =\n\\mathcal{K}(\\mathscr{N}\\oplus\\mathscr{A}) +\n\\begin{pmatrix}\n\\mathscr{B}(\\mathcal{T}') & 0 \\\\\n0 & 0\n\\end{pmatrix}\n\\end{equation}\nwhere $\\mathscr{N}=\\mathcal{E}\\otimes\\mathscr{A}$ is an $\\mathcal{S}$-graded Hilbert $\\mathscr{A}$-module,\ncf. \\eqref{eq:link} and \\eqref{eq:cha}. It is easy to see that the\nfamily $\\{\\mathscr{C}(\\sigma)\\}$ is linearly independent and that $\\mathscr{C}$ is\nthe closure of its sum. By taking into account \\eqref{eq:links} we\nsee that it suffices to show that\n$\\mathscr{C}(\\sigma)\\mathscr{C}(\\tau)\\subset\\mathscr{C}(\\sigma\\wedge\\tau)$ if\n$\\sigma\\in\\mathcal{T}'$ and $\\tau\\in\\mathcal{T}$. After computing the coefficients\nof the matrices we see that it suffices to check that\n$\\mathscr{B}(\\sigma)\\cdot \\mathcal{E}\\otimes\\mathscr{A}(\\tau)\n\\subset\\mathcal{E}\\otimes\\mathscr{A}(\\sigma\\wedge\\tau)$. \nBut:\n\\begin{align*}\n\\mathscr{B}(\\sigma)\\cdot\\mathcal{E}\\otimes\\mathscr{A}(\\tau) & =\n\\mathscr{B}(\\sigma)\\cdot K(\\mathcal{E})\\otimes\\mathscr{A}(\\tau)\\cdot \\mathcal{E}\\otimes\\mathscr{A}(\\tau) = \n\\mathscr{B}(\\sigma)\\cdot\\mathscr{B}(\\tau)\\cdot \\mathcal{E}\\otimes\\mathscr{A}(\\tau) \\\\\n& \\subset \\mathscr{B}(\\sigma\\wedge\\tau)\\cdot \\mathcal{E}\\otimes\\mathscr{A}(\\tau)\n=K(\\mathcal{E})\\otimes\\mathscr{A}(\\sigma\\wedge\\tau)\\cdot \\mathcal{E}\\otimes\\mathscr{A}(\\tau)\n\\subset \\mathcal{E}\\otimes\\mathscr{A}(\\sigma\\wedge\\tau)\n\\end{align*}\nwhich finishes the proof.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\nThe extension to an increasing family of ideals\n$\\mathcal{T}_1\\subset\\mathcal{T}_2\\dots\\subset\\mathcal{S}$ is straightforward. \n\n\n\\section{The many-body $C^*$-algebra }\n\\label{s:grass}\n\\protect\\setcounter{equation}{0}\n\nIn this section we introduce the many-body $C^*$-algebra and\ndescribe its main properties (in particular, we prove the theorems\n\\ref{th:C} and \\ref{th:CG}). Subsection \\ref{ss:hilbert} contains\nsome preparatory material on concrete realizations of Hilbert\n$C^*$-modules which implement the Morita equivalence between some\ncrossed products.\n\n\n\\subsection{Notations}\n\\label{ss:group}\n\n\nLet $X$ be a locally compact abelian group with operation denoted\nadditively equipped with a Haar measures $\\text{d} x$. We abbreviate\nthis by saying that \\emph{$X$ is an lca group}. 
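\n\nThe basic examples to keep in mind, purely for orientation, are $X=\\mathbb{R}^n$ with Lebesgue measure, which covers the Euclidean setting of the preceding sections, and discrete groups such as $X=\\mathbb{Z}^n$ equipped with the counting measure.\n\n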
We set $\\mathscr{L}_X\\equiv\nL(L^2(X))$ and $\\mathscr{K}_X\\equiv K(L^2(X))$ and note that these are\n$C^*$-algebras independent of the choice of the measure on $X$. If\n$Y$ is a second lca group we shall use the abbreviations\n\\begin{equation}\\label{eq:lkxy}\n\\mathscr{L}_{XY}=L(L^2(Y),L^2(X)) \\quad\\text{and}\\quad\n\\mathscr{K}_{XY}=K(L^2(Y),L^2(X)). \n\\end{equation}\nWe denote by $\\varphi(Q)$ the operator in $L^2(X)$ of multiplication\nby a function $\\varphi$ and if $X$ has to be explicitly specified we\nset $Q=Q_X$. The bounded uniformly continuous functions on $X$ form\na $C^*$-algebra $\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ which contains the algebras $\\cc_{\\mathrm{c}}(X)$ and\n$\\cc_{\\mathrm{o}}(X)$. The map $\\varphi\\mapsto\\varphi(Q)$ is an embedding\n$\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)\\subset\\mathscr{L}_X$.\n\nThe group $\\mathcal{C}^*$-algebra $\\mathscr{T}_X$ of $X$ is the closed linear\nsubspace of $\\mathscr{L}_X$ generated by the convolution operators of the\nform $(\\varphi*f)(x)=\\int_X \\varphi(x-y)f(y)\\text{d} y$ with\n$\\varphi\\in\\cc_{\\mathrm{c}}(X)$. Observe that $f\\mapsto\\varphi*f$ is equal to\n$\\int_X\\varphi(-a)U_a\\,\\text{d} a$ where $U_a$ is the unitary\ntranslation operator on $L^2(X)$ defined by $(U_af)(x)=f(x+a)$.\n\nLet $X^*$ be the group dual to $X$ with operation denoted\nadditively\\symbolfootnote[2]{\\ Then $(k+p)(x)=k(x)p(x)$, $0(x)=1$,\n and the element $-k$ of $X^*$ represents the function\n $\\bar{k}$. In order to avoid such strange looking expressions one\n might use the notation $k(x)=[x,k]$. }. If $k\\in X^*$ we define\na unitary operator $V_k$ on $L^2(X)$ by $(V_ku)(x)=k(x) u(x)$. The\nFourier transform of an integrable measure $\\mu$ on $X$ is defined\nby $(F\\mu)(k)=\\int \\bar{k}(x)\\mu(\\text{d} x)$. Then $F$ induces a\nbijective map $L^2(X)\\rightarrow L^2(X^*)$ hence a canonical isomorphism\n$S\\mapsto F^{-1}S F$ of $\\mathscr{L}_{X^*}$ onto $\\mathscr{L}_X$. If $\\psi$ is a\nfunction on $X^*$ we set $\\psi(P)\\equiv\\psi(P_X)=F^{-1}M_\\psi F$,\nwhere $M_\\psi=\\psi(Q_{X^*})$ is the operator of multiplication by\n$\\psi$ on $L^2(X^*)$. The map $\\psi\\mapsto\\psi(P)$ gives an\nisomorphism $\\cc_{\\mathrm{o}}(X^*)\\cong \\mathscr{T}_X$.\n\n\nIf $Y\\subset X$ is a closed subgroup then $\\pi_Y:X\\to X\/Y$ is the\ncanonical surjection. We embed $\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X\/Y)\\subset\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ with the\nhelp of the injective morphism $\\varphi\\mapsto\\varphi\\circ\\pi_Y$. So\n$\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X\/Y)$ is identified with the set of functions\n$\\varphi\\in\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ such that $\\varphi(x+y)=\\varphi(x)$ for all\n$x\\in X$ and $y\\in Y$.\n\nIn particular, $\\cc_{\\mathrm{o}}(X\/Y)$ is identified with the set of continuous\nfunctions $\\varphi$ on $X$ such that $\\varphi(x+y)=\\varphi(x)$ for\nall $x\\in X$ and $y\\in Y$ and such that for each $\\varepsilon>0$\nthere is a compact $K\\subset X$ such that $|\\varphi(x)|<\\varepsilon$\nif $x\\notin K+Y$. By $x\/Y\\to\\infty$ we mean $\\pi_Y(x)\\to\\infty$, so\nthe last condition is equivalent to $\\varphi(x)\\to0$ if\n$x\/Y\\to\\infty$. For coherence with later notations we set\n\\begin{equation}\\label{eq:coxy}\n\\mathcal{C}_X(Y)=\\cc_{\\mathrm{o}}(X\/Y)\n\\end{equation}\nObserve that to an element $y\\in Y$ we may associate a translation\noperator $U_y$ in $L^2(X)$ and another translation operator in\n$L^2(Y)$. 
However, in order not to overcharge the writing we shall\ndenote the second operator also by $U_y$. The restriction map\n$k\\mapsto k|_Y$ is a continuous surjective group morphism $X^*\\to\nY^*$ with kernel equal to $Y^\\perp=\\{k\\in X^*\\mid\nk(y)=1\\hspace{1mm}\\forall y\\in Y\\}$ which defines the canonical\nidentification $Y^*\\cong X^*\/Y^\\perp$. We denote by the same symbol\n$V_k$ the operator of multiplication by the character $k\\in X^*$ in\n$L^2(X)$ and by the character $k|_Y\\in Y^*$ in $L^2(Y)$.\n\nWe shall write $X=Y\\oplus Z$ if $X$ is the direct sum of the two\nclosed subgroups $Y,Z$ equipped with compatible Haar measures, in\nthe sense that $\\text{d} x=\\text{d} y\\otimes \\text{d} z$. Then\n$L^2(X)=L^2(Y)\\otimes L^2(Z)$ as Hilbert spaces and\n$\\mathscr{K}_X=\\mathscr{K}_Y\\otimes \\mathscr{K}_Z$ and $\\mathcal{C}_X(Y)=1\\otimes\\cc_{\\mathrm{o}}(Z)$ as\n$C^*$-algebras.\n\nLet $O=\\{0\\}$ be the trivial group equipped with the Haar measure of\ntotal mass $1$. Then $L^2(O)=\\mathbb{C}$.\n\n\\subsection{Crossed products}\n\\label{ss:nbcrp}\n\nLet $X$ be a locally compact abelian group. A $C^*$-subalgebra\n$\\mathcal{A}\\subset\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ stable under translations will be called\n\\emph{$X$-algebra}. The \\emph{crossed product of\n $\\mathcal{A}$ by the action of $X$} is an abstractly defined $C^*$-algebra\n$\\mathcal{A}\\rtimes X$ canonically identified with the $C^*$-algebra of\noperators on $L^2(X)$ given by\n\\begin{equation}\\label{eq:crp}\n\\mathcal{A}\\rtimes X\\equiv\\mathcal{A}\\cdot \\mathscr{T}_X=\\mathscr{T}_X\\cdot\\mathcal{A}.\n\\end{equation}\nCrossed products of the form $\\mathcal{C}_X(Y)\\rtimes X$ where $Y$ is a\nclosed subgroup of $X$ play an important role in the many-body\nproblem. To simplify notations we set\n\\begin{equation}\\label{eq:Cxy}\n\\mathscr{C}_X(Y)=\\mathcal{C}_X(Y)\\rtimes X=\\mathcal{C}_X(Y)\\cdot\\mathscr{T}_X=\n\\mathscr{T}_X\\cdot\\mathcal{C}_X(Y).\n\\end{equation}\nIf $X=Y\\oplus Z$ and if we identify $L^2(X)=L^2(Y)\\otimes L^2(Z)$ then\n$\\mathscr{T}_X=\\mathscr{T}_Y\\otimes\\mathscr{T}_Z$ hence\n\\begin{equation}\\label{eq:crxyz}\n\\mathscr{C}_X(Y)=\\mathscr{T}_Y\\otimes \\mathscr{K}_Z.\n\\end{equation}\nA useful ``symmetric'' description of $\\mathscr{C}_X(Y)$ is contained in the\nnext lemma. Let $Y^{(2)}$ be the closed subgroup of $X^2\\equiv X\\oplus\nX$ consisting of elements of the form $(y,y)$ with $y\\in Y$.\n\n\\begin{lemma}\\label{lm:sym}\n$\\mathscr{C}_X(Y)$ is the closure of the\nset of integral operators with kernels $\\theta\\in\\cc_{\\mathrm{c}}(X^2\/Y^{(2)})$.\n\\end{lemma}\n\\proof Let $\\mathscr{C}$ be the norm closure of the set of integral\noperators with kernels $\\theta\\in\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X^2)$ having the properties:\n(1) $\\theta(x+y,x'+y)=\\theta(x,x')$ for all $x,x'\\in X$ and $y\\in\nY$; (2) $\\mbox{\\rm supp\\! }\\theta\\subset K_\\theta+Y$ for some compact\n$K_\\theta\\subset X^2$. We show $\\mathscr{C}=\\mathscr{C}_X(Y)$. Observe that the map\nin $X^2$ defined by $(x,x')\\mapsto(x-x',x')$ is a topological group\nisomorphism with inverse $(x_1,x_2)\\mapsto(x_1+x_2,x_2)$ and sends\nthe subgroup $Y^{(2)}$ onto the subgroup $\\{0\\}\\oplus Y$. This map\ninduces an isomorphism $X^2\/Y^{(2)}\\simeq X\\oplus(X\/Y)$. Thus any\n$\\theta\\in\\cc_{\\mathrm{c}}(X^2\/Y^{(2)})$ is of the form\n$\\theta(x,x')=\\widetilde\\theta(x-x',x')$ for some\n$\\widetilde\\theta\\in\\cc_{\\mathrm{c}}(X\\oplus(X\/Y))$. 
Thus $\\mathscr{C}$ is the closure in\n$\\mathscr{L}_X$ of the set of operators of the form\n$(Tu)(x)=\\int_X\\widetilde\\theta(x-x',x')u(x') \\text{d} x'$. Since we may\napproximate $\\widetilde\\theta$ with linear combinations of functions of\nthe form $a\\otimes b$ with $a\\in\\cc_{\\mathrm{c}}(X), b\\in\\cc_{\\mathrm{c}}(X\\/Y)$ we see that\n$\\mathscr{C}$ is the clspan of the set of operators of the form\n$(Tu)(x)=\\int_X a(x-x')b(x')u(x') \\text{d} x'$. But this clspan is\n$\\mathscr{T}_X\\cdot\\mathcal{C}_X(Y)=\\mathscr{C}_X(Y)$. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\subsection{Compatible subgroups}\n\\label{ss:compat}\n\nIf $X,Y$ is an arbitrary pair of lca groups then $X\\oplus Y$ is the\nset $X\\times Y$ equipped with the product topology and group\nstructure. If $X,Y$ are closed subgroups of an lca group $G$ and if\nthe map $X\\oplus Y\\to X+Y$ defined by $(x,y)\\mapsto x+y$ is open, we\nsay that they are \\emph{compatible subgroups of $G$}. In this case\n$X+Y$ is a closed subgroup of $G$.\n\n\n\\begin{remark}\\label{re:DP}{\\rm\nIf $G$ is $\\sigma$-compact then $X,Y$ are compatible if and only if\n$X+Y$ is closed. Indeed, a continuous surjective morphism between\ntwo locally compact $\\sigma$-compact groups is open and a subgroup\n$H$ of a locally compact group $G$ is closed if and only if $H$ is\nlocally compact for the induced topology, see Theorems 5.11 and 5.29\nin \\cite{HR}. We thank Lo\\\"ic Dubois and Benoit Pausader for\nenlightening discussions on this matter.\n}\\end{remark}\n\n\nThe importance of the compatibility condition in the context of\ngraded $C^*$-algebras has been pointed out in \\cite[Lemma 6.1.1]{Ma}\nand one may find there several descriptions of this condition (see\nalso Lemma 3.1 from \\cite{Ma3}). We quote two of them. Let $X\\/Y$ be\nthe image of $X$ in $G\\/Y$ considered as a subgroup of $G\\/Y$ equipped\nwith the induced topology. The group $X\\/(X\\cap Y)$ is equipped with\nthe locally compact quotient topology and we have a natural map\n$X\\/(X\\cap Y)\\to X\\/Y$ which is a bijective continuous group morphism.\nThen $X,Y$ are compatible if and only if the following equivalent\nconditions are satisfied:\n\\begin{align}\n& \\text{the natural map} \\hspace{2mm} X\\/(X\\cap Y)\\to X\\/Y \\hspace{2mm}\n\\text{is a homeomorphism}, \\label{eq:ma1} \\\\\n&\n\\text{the natural map }\nG\\/(X\\cap Y)\\to G\\/X\\times G\\/Y \\hspace{1mm} \\text{is closed}.\n\\label{eq:ma2}\n\\end{align}\n\nIf $\\mathcal{A}$ is a $G$-algebra let $\\mathcal{A}|_X$ be the set of restrictions\nto $X$ of the functions from $\\mathcal{A}$. This is an $X$-algebra.\n\n\\begin{lemma}\\label{lm:reg}\nIf $X,Y$ are compatible subgroups of $G$ then\n\\begin{align}\n& \\mathcal{C}_G(X)\\cdot\\mathcal{C}_G(Y) = \\mathcal{C}_G(X\\cap Y) \\label{eq:reg1}\\\\\n& \\mathcal{C}_G(Y)|_X = \\mathcal{C}_X(X\\cap Y).\n\\label{eq:reg2}\n\\end{align}\nThe second relation remains valid for the subalgebras $\\cc_{\\mathrm{c}}$.\n\\end{lemma}\n\\proof The fact that the inclusion $\\subset$ in \\eqref{eq:reg1} is\nequivalent to the compatibility of $X$ and $Y$ is shown in Lemma\n6.1.1 from \\cite{Ma}, so we only have to prove that the equality\nholds. Let $E = (G\\/X) \\times (G\\/Y)$. If $\\varphi \\in \\cc_{\\mathrm{o}}(G\\/X)$ and\n$\\psi \\in \\cc_{\\mathrm{o}}(G\\/Y)$ then $\\varphi \\otimes \\psi$ denotes the function\n$(s, t) \\longmapsto \\varphi(s) \\psi(t)$, which belongs to\n$\\cc_{\\mathrm{o}}(E)$. 
The subspace generated by the functions of the form\n$\\varphi \\otimes \\psi$ is dense in $\\cc_{\\mathrm{o}}(E)$ by the Stone-Weierstrass\ntheorem. If $F$ is a closed subset of $E$ then, by the Tietze\nextension theorem, each function in $\\cc_{\\mathrm{c}}(F)$ extends to a function\nin $\\cc_{\\mathrm{c}}(E)$, so the restrictions $(\\varphi \\otimes \\psi)|_F$\ngenerate a dense linear subspace of $\\cc_{\\mathrm{o}}(F)$. Let us denote by\n$\\pi$ the map $x \\mapsto (\\pi_X(x), \\pi_Y(x))$, so $\\pi$ is a group\nmorphism from $G$ to $E$ with kernel $V=X\\cap Y$. Then by\n\\eqref{eq:ma2} the range $F$ of $\\pi$ is closed and the quotient map\n$\\widetilde\\pi : G\/V \\to F$ is a continuous and closed bijection, hence\nis a homeomorphism. So $\\theta \\mapsto \\theta \\circ \\tilde \\pi$ is\nan isometric isomorphism of $\\cc_{\\mathrm{o}}(F)$ onto $\\cc_{\\mathrm{o}}(G\/V)$. Hence for\n$\\varphi \\in \\cc_{\\mathrm{o}}(G\/X)$ and $\\psi \\in \\cc_{\\mathrm{o}}(G\/Y)$ the function $\\theta\n= (\\varphi \\otimes \\psi) \\circ \\tilde \\pi$ belongs to $\\cc_{\\mathrm{o}}(G\/V)$, it\nhas the property $\\theta \\circ \\pi_V = \\varphi \\circ \\pi_X \\cdot\n\\psi \\circ \\pi_Y$, and the functions of this form generate a dense\nlinear subspace of $\\cc_{\\mathrm{o}}(G\/V)$.\n\nNow we prove \\eqref{eq:reg2}. Recall that we identify $\\mathcal{C}_G(Y)$\nwith a subset of $\\cc_{\\mathrm{b}}^{\\mathrm{u}}(G)$ by using $\\varphi\\mapsto\\varphi\\circ\\pi_Y$\nso in terms of $\\varphi$ the restriction map which defines\n$\\mathcal{C}_G(Y)|_X$ is just $\\varphi\\mapsto\\varphi|_{X\/Y}$. Thus we have a\ncanonical embedding $\\mathcal{C}_G(Y)|_X\\subset\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X\/Y)$ for an arbitrary\npair $X,Y$ . Then the continuous bijective group morphism\n$\\theta:X\/(X\\cap Y)\\to X\/Y$ allows us to embed\n$\\mathcal{C}_G(Y)|_X\\subset\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X\/(X\\cap Y))$. That the range of this map is\nnot $\\mathcal{C}_X(X\\cap Y)$ in general is clear from the example $G=\\mathbb{R},\nX=\\pi\\mathbb{Z},Y=\\mathbb{Z}$. But if $X,Y$ are compatible then $X\/Y$ is closed\nin $G\/Y$, so $\\mathcal{C}_G(Y)|_X=\\cc_{\\mathrm{o}}(X\/Y)$ by the Tietze extension theorem,\nand $\\theta$ is a homeomorphism, hence we get \\eqref{eq:reg2}. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\\begin{lemma}\\label{lm:double}\nIf $X,Y$ are compatible subgroups of $G$ then $X^2=X\\oplus X$ and\n$Y^{(2)}=\\{(y,y)\\mid y\\in Y\\}$ is a compatible pair of closed\nsubgroups of $G^2=G\\oplus G$.\n\\end{lemma}\n\\proof Let $D=X^2\\cap Y^{(2)}=\\{(x,x)\\mid x\\in X\\cap Y\\}$. Due to to\n\\eqref{eq:ma1} it suffices to show that the natural map\n$Y^{(2)}\/D\\to Y^{(2)}\/X^2$ is a homeomorphism. Here $Y^{(2)}\/X^2$\nis the image of $Y^{(2)}$ in $G^2\/X^2\\cong (G\/X)\\oplus(G\/X)$, more\nprecisely it is the subset of pairs $(a,a)$ with $a=\\pi_X(z)$ and\n$z\\in Y$, equipped with the topology induced by\n$(G\/X)\\oplus(G\/X)$. Thus the natural map $Y\/X\\to Y^{(2)}\/X^2$ is a\nhomeomorphism. On the other hand, the natural map $Y\/(X\\cap Y)\\to\nY^{(2)}\/D$ is clearly a homeomorphism. To finish the proof note\nthat $Y\/(X\\cap Y)\\to Y\/X$ is a homeomorphism because $X,Y$ is a\nregular pair. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\begin{lemma}\\label{lm:regp}\n Let $X,Y$ be compatible subgroups of an lca group $G$ and let\n $X^\\perp,Y^\\perp$ be their orthogonals in $G^*$. 
Then $(X\\cap\n Y)^\\perp=X^\\perp+Y^\\perp$ and the closed subgroups\n $X^\\perp,Y^\\perp$ of $G^*$ are compatible.\n\\end{lemma}\n\\proof $X+Y$ is closed and, since\n$(x,y)\\mapsto(x,-y)$ is a homeomorphism, the map $S:X\\oplus Y\\to\nX+Y$ defined by $S(x,y)=x+y$ is an open surjective morphism. Then\nfrom the Theorem 9.5, Chapter 2 of \\cite{Gu} it follows that the\nadjoint map $S^*$ is a homeomorphism between $(X+Y)^*$ and its\nrange. In particular its range is a locally compact subgroup for the\ntopology induced by $X^*\\oplus Y^*$ hence is a closed subgroup of\n$X^*\\oplus Y^*$, see Remark \\ref{re:DP}. We\nhave $(X+Y)^\\perp=X^\\perp\\cap Y^\\perp$, cf. 23.29 in \\cite{HR}. Thus\nfrom $X^*\\cong G^*\/X^\\perp$ and similar representations for $Y^*$\nand $(X+Y)^*$ we see that\n$$\nS^*:G^*\/(X^\\perp \\cap Y^\\perp)\\to G^*\/X^\\perp\\oplus G^*\/Y^\\perp\n$$ is a closed map. But $S^*$ is clearly the natural map involved in\n\\eqref{eq:ma2}, hence the pair $X^\\perp,Y^\\perp$ is\nregular. Finally, note that $(X\\cap Y)^\\perp$ is always equal to the\nclosure of the subgroup $X^\\perp+Y^\\perp$, cf. 23.29 and 24.10 in\n\\cite{HR}, and in our case $X^\\perp+Y^\\perp$ is closed.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\subsection{Green Hilbert $C^*$-modules} \n\\label{ss:hilbert} \n\nLet $X,Y$ be a compatible pair of closed subgroups of a locally\ncompact abelian group $G$. Then the subgroup $X+Y$ of $G$ generated\nby $X\\cup Y$ is also closed. If we identify $X\\cap Y$ with the\nclosed subgroup $D$ of $X\\oplus Y$ consisting of the elements of the\nform $(z,z)$ with $z\\in X\\cap Y$ then the quotient group $X\\uplus Y\n\\equiv (X\\oplus Y)\/(X\\cap Y)$ is locally compact and the map\n\\begin{equation}\\label{eq:nat}\n\\phi:X\\oplus Y \\to X+Y \\hspace{2mm}\\text{defined by}\\hspace{2mm}\n \\phi(x,y)=x-y\n\\end{equation}\nis an open continuous surjective group morphism $X\\oplus Y\\to X+Y$\nwith $X\\cap Y$ as kernel. Hence the group morphism\n$\\phi^\\circ:X\\uplus Y\\to X+Y$ induced by $\\phi$ is a homeomorphism.\n\n\nSince $\\cc_{\\mathrm{c}}(X\\uplus Y)\\subset \\cc_{\\mathrm{b}}^{\\mathrm{u}}(X\\oplus Y)$ the elements\n$\\theta\\in\\cc_{\\mathrm{c}}(X\\uplus Y)$ are functions $\\theta:X\\times Y\\to\\mathbb{C}$\nand we may think of them as kernels of integral operators.\n\n\\begin{lemma}\\label{lm:bound}\n If $\\theta\\in\\cc_{\\mathrm{c}}(X\\uplus Y)$ then $(T_\\theta\n u)(x)=\\int_Y\\theta(x,y)u(y) \\text{d} y$ defines an operator in\n $\\mathscr{L}_{XY}$ with norm $\\|T_\\theta\\|\\leq C\\sup|\\theta|$ where $C$\n depends only on a compact which contains the support of $\\theta$.\n\\end{lemma}\n\\proof \nBy the Schur test\n$$\n\\|T_\\theta\\|^2\\leq \n{\\textstyle\\sup_{x\\in X}}\\int_Y|\\theta(x,y)\\text{d} y \\cdot\n{\\textstyle\\sup_{y\\in Y}}\\int_X|\\theta(x,y)\\text{d} x.\n$$ \nLet $K\\subset X$ and $L\\subset Y$ be compact sets such that\n$(K\\times L) + D$ contains the support of $\\theta$.\nThus if $\\theta(x,y)\\neq0$ then $x\\in z+K$ and $y\\in z+L$ for some \n$z\\in X\\cap Y$ hence $ \\int_Y|\\theta(x,y)\\text{d} y \\leq\n\\sup|\\theta| \\lambda_Y(L). $ Similarly $\\int_X|\\theta(x,y)\\text{d} x\n\\leq \\sup|\\theta| \\lambda_X(K)$. 
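Hence the Schur test gives $\\|T_\\theta\\|\\leq\\big(\\lambda_X(K)\\lambda_Y(L)\\big)^{1\/2}\\sup|\\theta|$, so one may take $C=\\big(\\lambda_X(K)\\lambda_Y(L)\\big)^{1\/2}$, which depends only on a compact set containing the support of $\\theta$.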
\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\begin{definition}\\label{df:ryz}\n$\\mathscr{T}_{XY}$ is the norm closure in $\\mathscr{L}_{XY}$ of the set of operators\n$T_\\theta$ as in Lemma \\ref{lm:bound}.\n\\end{definition}\n\n\n\n\\begin{remark}\\label{re:rief}\n If $X\\supset Y$ then $\\mathscr{T}_{XY}$ is a ``concrete'' realization of\n the Hilbert $C^*$-module introduced by Rieffel in \\cite{Ri} which\n implements the Morita equivalence between the group $C^*$-algebra\n $\\mathcal{C}^*(Y)$ and the crossed product $\\cc_{\\mathrm{o}}(X\/Y)\\rtimes X$. More\n precisely, $\\mathscr{T}_{XY}$ is a Hilbert $\\mathcal{C}^*(Y)$-module and its\n imprimitivity algebra is canonically isomorphic with\n $\\cc_{\\mathrm{o}}(X\/Y)\\rtimes X$. If $X,Y$ is an arbitrary couple of\n compatible subgroups of $G$ then we defined $\\mathscr{T}_{XY}$ such that\n $\\mathscr{T}_{XY}=\\mathscr{T}_{XG}\\cdot\\mathscr{T}_{GY}$. On the other hand, from\n \\eqref{eq:factor} we get $\\mathscr{T}_{XY}=\\mathscr{T}_{XE}\\cdot\\mathscr{T}_{EY}$ with\n $E=X\\cap Y$, hence $\\mathscr{T}_{XY}$ is naturally a Hilbert\n $(\\cc_{\\mathrm{o}}(X\/E)\\rtimes X,\\cc_{\\mathrm{o}}(Y\/E)\\rtimes Y)$ imprimitivity bimodule.\n It has been noticed by Georges Skandalis that $\\mathscr{T}_{XY}$ is in\n fact a ``concrete'' realization of a Hilbert $C^*$-module\n introduced by Green to show the Morita equivalence of the\n $C^*$-algebras $\\cc_{\\mathrm{o}}(Z\/Y)\\rtimes X$ and $\\cc_{\\mathrm{o}}(Z\/X)\\rtimes Y$ where\n we take $Z=X+Y$, cf. \\cite[Example 4.13]{Wi}.\n\\end{remark}\n\n\nWe give now an alternative definition of $\\mathscr{T}_{XY}$. If\n$\\varphi\\in\\cc_{\\mathrm{c}}(G)$ we define $T_{XY}(\\varphi):\\cc_{\\mathrm{c}}(Y)\\to\\cc_{\\mathrm{c}}(X)$ by\n\\begin{equation}\\label{eq:ryz}\n(T_{XY}(\\varphi)u)(x)=\\int_Y\\varphi(x-y)u(y)\\text{d} y.\n\\end{equation}\nThis operator depends only the restriction $\\varphi|_{X+Y}$ hence,\nby the Tietze extension theorem, we could take $\\varphi\\in\\cc_{\\mathrm{c}}(Z)$\ninstead of $\\varphi\\in\\cc_{\\mathrm{c}}(G)$, where $Z$ is any closed subgroup of\n$G$ containing $X\\cup Y$.\n\n\\begin{proposition}\\label{pr:def2}\n$T_{XY}(\\varphi)$ extends to a bounded operator $L^2(Y)\\to L^2(X)$,\nalso denoted $T_{XY}(\\varphi)$, and for each compact $K\\subset G$\nthere is a constant $C$ such that if $\\mbox{\\rm supp\\! }\\varphi\\subset K$\n\\begin{equation}\\label{eq:nyz}\n\\|T_{XY}(\\varphi)\\|\\leq C \\sup\\nolimits_{x\\in G}|\\varphi(x)|.\n\\end{equation}\nThe adjoint operator is given by $T_{XY}(\\varphi)^*=\nT_{YX}(\\varphi^*)$ where $\\varphi^*(x)=\\bar\\varphi(-x)$. The space\n$\\mathscr{T}_{XY}$ coincides with the closure in $\\mathscr{L}_{XY}$ of the set of\noperators of the from $T_{XY}(\\varphi)$.\n\\end{proposition}\n\\proof The set $X+Y$ is closed in $G$ hence the restriction map\n$\\cc_{\\mathrm{c}}(G)\\to\\cc_{\\mathrm{c}}(X+Y)$ is surjective. On the other hand, the map\n$\\phi^\\circ:X\\uplus Y\\to X+Y$, defined after \\eqref{eq:nat}, is a\nhomeomorphism so it induces an isomorphism\n$\\varphi\\to\\varphi\\circ\\phi^\\circ$ of $\\cc_{\\mathrm{c}}(X+Y)$ onto $\\cc_{\\mathrm{c}}(X\\uplus\nY)$. Clearly $T_{XY}(\\varphi)=T_\\theta$ if $\\theta=\\varphi\\circ\\phi$,\nso the proposition follows from Lemma \\ref{lm:bound}. \n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\nWe discuss now some properties of the spaces $\\mathscr{T}_{XY}$.\nWe set $\\mathscr{T}_{XY}^*\\equiv(\\mathscr{T}_{XY})^*\\subset\\mathscr{L}_{YX}$. 
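Before doing so, we describe $\\mathscr{T}_{XY}$ in the simplest split situation; this is only meant as an illustration and is not used below. If $X=Y\\oplus Z$ and we identify $L^2(X)=L^2(Y)\\otimes L^2(Z)$, then for $\\varphi=a\\otimes b$ with $a\\in\\cc_{\\mathrm{c}}(Y)$ and $b\\in\\cc_{\\mathrm{c}}(Z)$ the definition \\eqref{eq:ryz} gives $T_{XY}(\\varphi)u=(a*u)\\otimes b$. Since such $\\varphi$ span a dense subspace of $\\cc_{\\mathrm{c}}(X)$ in the inductive limit topology, it follows with the help of \\eqref{eq:nyz} that $\\mathscr{T}_{XY}$ is the closed linear span of the maps $u\\mapsto(Su)\\otimes b$ with $S\\in\\mathscr{T}_Y$ and $b\\in L^2(Z)$, which is consistent with \\eqref{eq:crxyz} and with Proposition \\ref{pr:ryz} below.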
\n\n\\begin{proposition}\\label{pr:nyza}\nWe have $\\mathscr{T}_{XX}=\\mathscr{T}_X$ and:\n\\begin{align}\n& \\mathscr{T}_{XY}^* =\\mathscr{T}_{YX} \\label{eq:rad} \\\\\n& \\mathscr{T}_{XY} =\\mathscr{T}_{XY}\\cdot \\mathscr{T}_Y=\\mathscr{T}_X\\cdot\\mathscr{T}_{XY}\n\\label{eq:cyzc} \\\\\n& \\mathcal{A}|_X\\cdot\\mathscr{T}_{XY} =\\mathscr{T}_{XY}\\cdot\\mathcal{A}|_Y \\label{eq:ayza}\n\\end{align}\nwhere $\\mathcal{A}$ is an arbitrary $G$-algebra.\n\\end{proposition}\n\\proof The relations $\\mathscr{T}_{XX}=\\mathscr{T}_X$ and \\eqref{eq:rad} are\nobvious. Now we prove the first equality in \\eqref{eq:cyzc} (then\nthe second one follows by taking adjoints). If $C(\\eta)$ is the\noperator of convolution in $L^2(Y)$ with $\\eta\\in \\cc_{\\mathrm{c}}(Y)$ then a\nshort computation gives\n\\begin{equation}\\label{eq:yzc}\nT_{XY}(\\varphi)C(\\eta)=T_{XY}(T_{G Y}(\\varphi)\\eta)\n\\end{equation}\nfor $\\varphi\\in\\cc_{\\mathrm{c}}(G)$. Since $T_{G Y}(\\varphi)\\eta\\in\\cc_{\\mathrm{c}}(G)$ we get\n$T_{XY}(\\varphi)C(\\eta)\\in\\mathscr{T}_{XY}$, so $\\mathscr{T}_{XY}\\cdot\n\\mathscr{T}_Y\\subset\\mathscr{T}_{XY}$. The converse follows by a standard\napproximation argument.\n\nLet $\\varphi\\in\\cc_{\\mathrm{c}}(G)$ and $\\theta\\in\\mathcal{A}$. We shall denote by\n$\\theta(Q_X)$ the operator of multiplication by $\\theta|_X$ in\n$L^2(X)$ and by $\\theta(Q_Y)$ that of multiplication by $\\theta|_Y$\nin $L^2(Y)$. Choose some $\\varepsilon>0$ and let $V$ be a compact\nneighborhood of the origin in $G$ such that\n$|\\theta(z)-\\theta(z')|<\\varepsilon$ if $z-z'\\in V$. There are\nfunctions $\\alpha_k\\in\\cc_{\\mathrm{c}}(G)$ with $0\\leq\\alpha_k\\leq1$ such that\n$\\sum_k\\alpha_k=1$ on the support of $\\varphi$ and\n$\\mbox{\\rm supp\\! }\\alpha_k\\subset z_k+V$ for some points $z_k$. Below we shall\nprove:\n\\begin{equation}\\label{eq:byza}\n\\|T_{XY}(\\varphi)\\theta(Q_Y)- \n{\\textstyle\\sum_k}\\theta(Q_X-z_k)T_{XY}(\\varphi\\alpha_k)\\| \n\\leq\n\\varepsilon\\|T_{XY}(|\\varphi|)\\|.\n\\end{equation}\nThis implies $\\mathscr{T}_{XY}\\cdot\\mathcal{A}|_Y\\subset\\mathcal{A}|_X\\cdot\\mathscr{T}_{XY}$. If we\ntake adjoints, use \\eqref{eq:rad} and interchange $X$ and $Y$ in the\nfinal relation, we obtain $\\mathcal{A}|_X\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{A}|_Y$\nhence the proposition is proved. For $u\\in\\cc_{\\mathrm{c}}(Y)$ we have:\n\\begin{align*}\n(T_{XY}(\\varphi)\\theta(Q_Y)u)(x) &=\n\\int_Y\\varphi(x-y)\\theta(y)u(y)\\text{d} y \n=\\sum_k\\int_Y\\varphi(x-y)\\alpha_k(x-y)\\theta(y)u(y)\\text{d} y \\\\\n&=\n\\sum_k\\int_Y\\varphi(x-y)\\alpha_k(x-y)\\theta(x-z_k)u(y)\\text{d} y\n+(Ru)(x)\\\\ \n&= \n\\sum_k\\left(\\theta(Q_X-z_k)T_{XY}(\\varphi\\alpha_k)u\\right)(x) \n+(Ru)(x).\n\\end{align*}\nWe can estimate the remainder as follows\n$$\n|(Ru)(x)|=\\left|\\sum_k\\int_Y\\varphi(x-y)\\alpha_k(x-y)\n[\\theta(y)-\\theta(x-z_k)]u(y)\\text{d} y \\right|\\leq\n\\varepsilon\\int_Y|\\varphi(x-y)u(y)|\\text{d} y\n$$ \nbecause $x-z_k-y\\in V$. This proves \\eqref{eq:byza}. 
\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\n\n\\begin{proposition}\\label{pr:ryz}\n $\\mathscr{T}_{XY}$ is a Hilbert $C^*$-submodule of $\\mathscr{L}_{XY}$ and\n\\begin{equation}\\label{eq:hyz}\n\\mathscr{T}_{XY}^*\\cdot\\mathscr{T}_{XY}=\\mathscr{C}_Y(X\\cap Y), \\hspace{2mm}\n\\mathscr{T}_{XY}\\cdot\\mathscr{T}_{XY}^*=\\mathscr{C}_X(X\\cap Y).\n\\end{equation}\nThus $\\mathscr{T}_{XY}$ is a $(\\mathscr{C}_X(X\\cap Y),\\mathscr{C}_Y(X\\cap Y))$ imprimitivity\nbimodule.\n\\end{proposition}\n\\proof Due to \\eqref{eq:rad}, to prove the first relation in\n\\eqref{eq:hyz} we have to compute the clspan $\\mathscr{C}$ of the operators\n$T_{XY}(\\varphi)T_{YX}(\\psi)$ with $\\varphi,\\psi$ in $\\cc_{\\mathrm{c}}(G)$. We\nrecall the notation $G^2=G\\oplus G$, this is a locally compact\nabelian group and $X^2=X\\oplus X$ is a closed subgroup. Let us\nchoose functions $\\varphi_k,\\psi_k\\in\\cc_{\\mathrm{c}}(G)$ and let\n$\\Phi=\\sum_k\\varphi_k\\otimes\\psi_k\\in\\cc_{\\mathrm{c}}(G^2)$. If\n$\\psi_k^\\dag(x)=\\psi_k(-x)$, then $\\sum_k\nT_{XY}(\\varphi_k)T_{YX}(\\psi_k^\\dag)$ is an integral operator on\n$L^2(X)$ with kernel $\\theta_X=\\theta|_{X^2}$ where $\\theta:G^2\\to\n\\mathbb{C}$ is given by\n$$ \n\\theta(x,x')= \\int_Y\\Phi(x+y,x'+y)\\text{d} y.\n$$ Since the set of decomposable functions is dense in $\\cc_{\\mathrm{c}}(G^2)$ in\nthe inductive limit topology, an easy approximation argument shows\nthat $\\mathscr{C}$ contains all integral operators with kernels of the same\nform as $\\theta_X$ but with arbitrary $\\Phi\\in\\cc_{\\mathrm{c}}(G^2)$. Let\n$Y^{(2)}$ be the closed subgroup of $G^2$ consisting\nof the elements $(y,y)$ with $y\\in Y$. Then $K=\\mbox{\\rm supp\\! }\\Phi\\subset G^2$\nis a compact, $\\theta$ is zero outside $K+Y^{(2)}$, and\n$\\theta(a+b)=\\theta(a)$ for all $a\\in G^2,b\\in Y^{(2)}$. Thus\n$\\theta\\in\\cc_{\\mathrm{c}}(G^2\/Y^{(2)})$, with the usual identification\n$\\cc_{\\mathrm{c}}(G^2\/Y^{(2)})\\subset\\cc_{\\mathrm{b}}^{\\mathrm{u}}(G^2)$. From Proposition 2.48 in\n\\cite{Fo} it follows that reciprocally, any function $\\theta$ in\n$\\cc_{\\mathrm{c}}(G^2\/Y^{(2)})$ can be represented in terms of some $\\Phi$ in\n$\\cc_{\\mathrm{c}}(G^2)$ as above. Thus $\\mathscr{C}$ is the closure of the set of\nintegral operators on $L^2(X)$ with kernels of the form $\\theta_X$\nwith $\\theta\\in\\cc_{\\mathrm{c}}(G^2\/Y^{(2)})$. According to Lemma\n\\ref{lm:double}, the pair of subgroups $X^2,Y^{(2)}$ is regular, so\nwe may apply Lemma \\ref{lm:reg} to get\n$\\cc_{\\mathrm{c}}(G^2\/Y^{(2)})|_{X^2}=\\cc_{\\mathrm{c}}(X^2\/D)$ where $D=X^2\\cap\nY^{(2)}=\\{(x,x)\\mid x\\in X\\cap Y\\}$. But by Lemma \\ref{lm:sym} the\nnorm closure in $\\mathscr{L}_X$ of the set of integral operators with\nkernel in $\\cc_{\\mathrm{c}}(X^2\/D)$ is $\\mathscr{C}_X\/(X\\cap Y)$. This proves\n\\eqref{eq:hyz}.\n\nIt remains to prove that $\\mathscr{T}_{XY}$ is a Hilbert $C^*$-submodule of\n$\\mathscr{L}_{XY}$, i.e. 
that we have\n\\begin{equation}\\label{eq:hyz1}\n\\mathscr{T}_{XY}\\cdot\\mathscr{T}_{XY}^*\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}.\n\\end{equation}\nThe first identity in \\eqref{eq:hyz} and \\eqref{eq:cyzc} imply\n\\begin{equation*}\n\\mathscr{T}_{XY}\\cdot\\mathscr{T}_{XY}^*\\cdot\\mathscr{T}_{XY} = \n\\mathscr{T}_{XY}\\cdot \\mathscr{T}_Y\\cdot\\mathcal{C}_Y(X\\cap Y)=\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y).\n\\end{equation*}\nFrom Lemma \\ref{lm:reg} we get \n$$\n\\mathcal{C}_Y(X\\cap Y)=\\mathcal{C}_G(X\\cap Y)|_Y =\n\\mathcal{C}_G(X)|_Y \\cdot \\mathcal{C}_G(Y)|_Y = \\mathcal{C}_G(X)|_Y \n$$ because $\\mathcal{C}_G(Y)|_Y=\\mathbb{C}$. Then by using Proposition\n\\ref{pr:nyza} we obtain\n$$\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y) = \\mathscr{T}_{XY}\\cdot\\mathcal{C}_G(X)|_Y =\n\\mathcal{C}_G(X)|_X \\cdot\\mathscr{T}_{XY} =\\mathscr{T}_{XY}\n$$\nbecause $\\mathcal{C}_G(X)|_X=\\mathbb{C}$. \n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\begin{corollary}\\label{co:txy}\nWe have\n\\begin{align}\n\\mathscr{T}_{XY} &= \\mathscr{T}_{XY}\\mathscr{T}_Y=\\mathscr{T}_{XY}\\mathcal{C}_Y(X\\cap Y) \n\\label{eq:txy1}\n\\\\\n&= \\mathscr{T}_X\\mathscr{T}_{XY}=\\mathcal{C}_X(X\\cap Y)\\mathscr{T}_{XY}.\n\\label{eq:txy2}\n\\end{align}\n\\end{corollary}\n\\proof If $\\mathscr{M}$ is a Hilbert $\\mathscr{A}$-module then $\\mathscr{M}=\\mathscr{M}\\mathscr{A}$ hence\n$\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\mathscr{C}_Y(X\\cap Y)$ by Proposition \\ref{pr:ryz}. The\nspace $\\mathscr{C}_Y(X\\cap Y)$ is a $\\mathscr{T}_Y$-bimodule and $\\mathscr{C}_Y(X\\cap\nY)=\\mathscr{C}_Y(X\\cap Y)\\cdot \\mathscr{T}_Y$ by \\eqref{eq:Cxy} hence we get\n$\\mathscr{C}_Y(X\\cap Y)=\\mathscr{C}_Y(X\\cap Y)\\mathscr{T}_Y$ by the Cohen-Hewitt theorem.\nThis proves the first equality in \\eqref{eq:txy1} and the other ones\nare proved similarly. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\nIf $\\mathcal{G}$ is a set of closed subgroups of $G$ then the\n\\emph{semilattice generated by $\\mathcal{G}$} is the set of finite\nintersections of elements of $\\mathcal{G}$.\n\n\\begin{proposition}\\label{pr:product}\nLet $X,Y,Z$ be closed subgroups of $G$ such that any two subgroups\nfrom the semilattice generated by the family $\\{X,Y,Z\\}$ are\ncompatible. Then:\n\\begin{align}\\label{eq:product}\n\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY} &=\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Y\\cap Z)= \\mathcal{C}_X(X\\cap Z)\\cdot\\mathscr{T}_{XY} \\\\\n&= \\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y\\cap Z)= \n\\mathcal{C}_X(X\\cap Y\\cap Z)\\cdot\\mathscr{T}_{XY}.\n\\end{align} \nIn particular, if $Z\\supset X\\cap Y$ then\n\\begin{equation}\\label{eq:factor}\n\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}=\\mathscr{T}_{XY}.\n\\end{equation}\n\\end{proposition}\n\\proof We first prove \\eqref{eq:factor} in the particular case\n$Z=G$. As in the proof of Proposition \\ref{pr:ryz} we see that\n$\\mathscr{T}_{XG}\\cdot\\mathscr{T}_{G Y}$ is the the closure in $\\mathscr{L}_{XY}$ of the\nset of integral operators with kernels \n$\\theta_{XY}=\\theta|_{X\\times Y}$ where $\\theta:G^2\\to \\mathbb{C}$ is\ngiven by \n$$ \n\\theta(x,y)= \\int_G\\sum_k\\varphi_k(x-z)\\psi_k(z-y)\\text{d} z=\n\\int_G\\sum_k\\varphi_k(x-y-z)\\psi_k(z)\\text{d} z\\equiv\\xi(x-y)\n$$ where $\\varphi_k,\\psi_k\\in\\cc_{\\mathrm{c}}(G)$ and $\\xi=\\sum_k\\varphi_k*\\psi_k$\nconvolution product on $G$. 
Since $\\cc_{\\mathrm{c}}(G)*\\cc_{\\mathrm{c}}(G)$ is dense in\n$\\cc_{\\mathrm{c}}(G)$ in the inductive limit topology, the space\n$\\mathscr{T}_{XG}\\cdot\\mathscr{T}_{G Y}$ is the the closure of the set of integral\noperators with kernels $\\theta(x,y)=\\xi(x-y)$ with $\\xi\\in\\cc_{\\mathrm{c}}(G)$.\nBy Proposition \\ref{pr:def2} this is $\\mathscr{T}_{XY}$. \n\nNow we prove \\eqref{eq:product}. From \\eqref{eq:factor} with $Z=G$\nand \\eqref{eq:hyz} we get:\n\\begin{equation*}\n\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY} =\n\\mathscr{T}_{XG}\\cdot\\mathscr{T}_{G Z}\\cdot\\mathscr{T}_{ZG}\\cdot\\mathscr{T}_{G Y}\n= \\mathscr{T}_{XG}\\cdot\\mathcal{C}_G(Z)\\cdot \\mathscr{T}_G\\cdot\\mathscr{T}_{GY}.\n\\end{equation*}\nThen from Proposition \\eqref{pr:nyza} and Lemma \\ref{lm:reg} we get:\n$$\n\\mathcal{C}_G(Z)\\cdot \\mathscr{T}_G\\cdot\\mathscr{T}_{GY}=\\mathcal{C}_G(Z)\\cdot\\mathscr{T}_{G Y}=\n\\mathscr{T}_{G Y}\\cdot\\mathcal{C}_G(Z)|_Y= \\mathscr{T}_{G Y}\\cdot\\mathcal{C}_Y(Y\\cap Z).\n$$ \nWe obtain \\eqref{eq:product} by using once again\n\\eqref{eq:factor} with $Z=G$ and taking adjoints. On the other hand,\nthe relation $\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y)$ holds because\nof \\eqref{eq:txy1}, so we have\n$$\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Y\\cap Z)=\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y)\\cdot\\mathcal{C}_Y(Y\\cap Z)=\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y\\cap Z)\n$$ where we also used \\eqref{eq:reg1} and the fact that $X\\cap Y$,\n$Z\\cap Y$ are compatible. Finally, to get \\eqref{eq:factor} for\n$Z\\supset X\\cap Y$ we use once again \\eqref{eq:hyz}. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\nThe object of main interest for us is introduced in the next\ndefinition. \n\n\\smallskip\n\n\\begin{definition}\\label{df:nxyz}\nIf $X,Y$ are compatible subgroups and $Z$ is a closed subgroup of\n$X\\cap Y$ then we set\n\\begin{equation}\\label{eq:nxyz}\n\\mathscr{C}_{XY}(Z):=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Z)=\\mathcal{C}_X(Z)\\cdot\\mathscr{T}_{XY}.\n\\end{equation} \n\\end{definition}\nThe equality above follows from \\eqref{eq:ayza} with $\\mathcal{A}=\\mathcal{C}_G(Z)$.\nWe clearly have $\\mathscr{C}_{XY}(X\\cap Y)=\\mathscr{T}_{XY}$ and\n$\\mathscr{C}_{XX}(Y)=\\mathscr{C}_X(Y)$ if $X\\supset Y$. Moreover\n\\begin{equation}\\label{eq:nadj}\n\\mathscr{C}_{XY}^*(Z):=\\mathscr{C}_{XY}(Z)^*=\\mathscr{C}_{YX}(Z)\n\\end{equation}\nbecause of \\eqref{eq:rad}.\n\n\n\\begin{theorem}\\label{th:nxyz}\n$\\mathscr{C}_{XY}(Z)$ is a Hilbert $C^*$-submodule of $\\mathscr{L}_{XY}$ such that\n\\begin{equation}\\label{eq:nnz}\n\\mathscr{C}_{XY}^*(Z)\\cdot\\mathscr{C}_{XY}(Z)=\\mathscr{C}_Y(Z)\n\\hspace{2mm}\\text{and}\\hspace{2mm}\n\\mathscr{C}_{XY}(Z)\\cdot\\mathscr{C}_{XY}^*(Z)=\\mathscr{C}_X(Z).\n\\end{equation} \nIn particular, $\\mathscr{C}_{XY}(Z)$ is a\n$(\\mathscr{C}_X(Z),\\mathscr{C}_Y(Z))$ imprimitivity bimodule.\n\\end{theorem}\n\\proof \nBy using \\eqref{eq:nadj}, the definition \\eqref{eq:nxyz}, and\n\\eqref{eq:reg1} we get\n\\begin{align*}\n\\mathscr{C}_{XY}(Z)\\cdot\\mathscr{C}_{YX}(Z) &=\n\\mathcal{C}_X(Z)\\cdot\\mathscr{T}_{XY}\\cdot \\mathscr{T}_{YX}\\cdot\\mathcal{C}_X(Z)\\\\\n&=\n\\mathcal{C}_X(Z)\\cdot\\mathcal{C}_X(X\\cap Y)\\cdot \\mathscr{T}_X\\cdot\\mathcal{C}_X(Z)\\\\\n&=\n\\mathcal{C}_X(Z)\\cdot \\mathscr{T}_X\\cdot\\mathcal{C}_X(Z)= \\mathcal{C}_X(Z)\\cdot \\mathscr{T}_X\n\\end{align*}\nwhich proves the second equality in \\eqref{eq:nnz}. 
The first one\nfollows by interchanging $X$ and $Y$.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\n\n\n\\subsection{Many-body systems}\n\\label{ss:gaz} \n\nHere we give a formal definition of the notion of ``many-body\nsystem'', then define and discuss the Hamiltonian algebra associated\nto it.\n\n\nLet $\\mathscr{S}$ be a set of locally compact abelian groups with the\nfollowing property: for any $X,Y\\in\\mathscr{S}$ there is $Z\\in\\mathscr{S}$ such that\n$X$ and $Y$ are compatible subgroups of $Z$. Note that this implies\nthe following: if $Y\\subset X$ then the topology and the group\nstructure of $Y$ coincide with those induced by $X$.\n\nIf $\\mathscr{S}$ is a set of $\\sigma$-compact locally compact abelian groups\nthen the compatibility assumption is equivalent to the following\nmore explicit condition: for any $X,Y\\in\\mathscr{S}$ there is $Z\\in\\mathscr{S}$ such\nthat $X$ and $Y$ are closed subgroups of $Z$ and $X+Y$ is closed in\n$Z$. \n\n\n\\begin{definition}\\label{df:mb}\nA \\emph{many-body system} is a couple $(\\mathcal{S},\\lambda)$ where:\n\\begin{compactenum}\n\\item[(i)] $\\mathcal{S}\\subset\\mathscr{S}$ is a subset such that \n$X,Y\\in\\mathcal{S}\\Rightarrow X\\cap Y\\in\\mathcal{S}$ and if\n$X\\supsetneq Y$ then $X\\/Y$ is not compact,\n\\item[(ii)] \n$\\lambda$ is a map $X\\mapsto\\lambda_X$ which associates a Haar\nmeasure $\\lambda_X$ on $X$ to each $X\\in\\mathcal{S}$. \n\\end{compactenum}\n\\end{definition}\n\nWe identify $\\mathcal{S}=(\\mathcal{S},\\lambda)$ so the choice of Haar measures is\nimplicit. Note that the Hilbert space $\\mathcal{H}_\\mathcal{S}$ and the\n$C^*$-algebra $\\mathscr{C}_\\mathcal{S}$ that we introduce below depend on $\\lambda$\nbut different choices give isomorphic objects. Each $X\\in\\mathcal{S}$ is\nequipped with a Haar measure so the Hilbert spaces $\\mathcal{H}_X= L^2(X)$\nare well defined. If $Y\\subset X$ are in $\\mathcal{S}$ then $X\\/Y$ is\nequipped with the quotient measure so $\\mathcal{H}_{X\\/Y}= L^2(X\\/Y)$ is well\ndefined.\n\n\n{\\bf Example:} Let $\\mathscr{S}$ be the set of all finite dimensional vector\nsubspaces of a vector space over an infinite locally compact field\nand let $\\mathcal{S}$ be any subset of $\\mathscr{S}$ such that $X,Y\\in\\mathcal{S}\\Rightarrow\nX\\cap Y \\in \\mathcal{S}$. \n\n\nFor each $X\\in\\mathcal{S}$ let $\\mathcal{S}_X$ be the set of $Y\\in\\mathcal{S}$ such that\n$Y\\subset X$. This is an $N$-body system with $X$ as configuration\nspace in the sense of Definition \\ref{df:nb}. Then by Lemma\n\\ref{lm:reg} the space\n\\begin{equation}\\label{eq:sax}\n\\mathcal{C}_X := {\\textstyle\\sum^\\mathrm{c}_{Y\\in\\mathcal{S}_X}}\\mathcal{C}_X(Y)\n\\end{equation}\nis an $X$-algebra so the crossed product $\\mathcal{C}_X\\rtimes X$ is well\ndefined and we clearly have\n\\begin{equation}\\label{eq:crsax}\n\\mathscr{C}_X := \\mathcal{C}_X\\rtimes X \\equiv \\mathcal{C}_X\\cdot\\mathscr{T}_X =\n{\\textstyle\\sum^\\mathrm{c}_{Y\\in\\mathcal{S}_X}}\\mathscr{C}_X(Y).\n\\end{equation}\nThe $C^*$-algebra $\\mathscr{C}_X$ is realized on the Hilbert space $\\mathcal{H}_X$\nand we think of it as the Hamiltonian algebra of the $N$-body system\ndetermined by $\\mathcal{S}_X$.\n\n\\begin{theorem}\\label{th:grsax}\nThe $C^*$-algebras $\\mathcal{C}_X$ and $\\mathscr{C}_X$ are $\\mathcal{S}_X$-graded by \nthe decompositions \\eqref{eq:sax} and \\eqref{eq:crsax}. \n\\end{theorem}\n\nThe theorem is a particular case of results due to A. Mageira,\ncf. 
Propositions 6.1.2, 6.1.3 and 4.2.1 in \\cite{Ma} (or see\n\\cite{Ma3}). We mention that the results in \\cite{Ma,Ma3} are much\ndeeper since the groups are allowed to be noncommutative and the\ntreatment is so that the second part of condition (i) is not\nneeded. The case when $\\mathcal{S}$ consists of linear subspaces of a finite\ndimensional real vector space has been considered in \\cite{BG1,DG1}\nand the corresponding version of Theorem \\ref{th:grsax} is proved\nthere by elementary means.\n\n\n\\begin{definition}\\label{df:main}\nIf $X,Y\\in\\mathcal{S}$ then\n$\\mathscr{C}_{XY} := \\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y=\\mathcal{C}_X\\cdot\\mathscr{T}_{XY}$.\n\\end{definition}\n\nIn particular $\\mathscr{C}_{XX}=\\mathscr{C}_X$ is a $C^*$-algebra of operators on\n$\\mathcal{H}_X$. For $X\\neq Y$ the space $\\mathscr{C}_{XY}$ is a closed linear\nspace of operators $\\mathcal{H}_Y\\to \\mathcal{H}_X$ canonically associated to the\nsemilattice of groups $\\mathcal{S}_{X\\cap Y}$, cf. \\eqref{eq:main}. We call\nthese spaces \\emph{coupling modules} because they are Hilbert\n$C^*$-modules and determine the way the systems corresponding to $X$\nand $Y$ are allowed to interact.\n\n\n\nFor each pair $X,Y\\in\\mathcal{S}$ with $X\\supset Y$ we set \n\\begin{equation}\\label{eq:saX}\n\\mathcal{C}^Y_X :=\n{\\textstyle\\sum^\\mathrm{c}_{Z\\in\\mathcal{S}_Y}}\\mathcal{C}_X(Z).\n\\end{equation}\nThis is also an $X$-algebra so we may define $\\mathscr{C}^Y_X=\\mathcal{C}^Y_X\\rtimes\nX$ and we have\n\\begin{equation}\\label{eq:saX1}\n\\mathscr{C}^Y_X := \\mathcal{C}^Y_X\\rtimes X=\n{\\textstyle\\sum^\\mathrm{c}_{Z\\in\\mathcal{S}_Y}}\\mathscr{C}_X(Z).\n\\end{equation}\nIf $X=Y\\oplus Z$ then $\\mathcal{C}_X^Y\\simeq\\mathcal{C}_Y\\otimes 1$ and \n$\\mathscr{C}_X^Y\\simeq\\mathscr{C}_Y\\otimes \\mathscr{T}_Z$. \n\n\\begin{lemma}\\label{lm:xyprod}\nLet $X\\in\\mathcal{S}$ and $Y\\in\\mathcal{S}_X$. Then\n\\begin{equation}\\label{eq:xy1}\n\\mathcal{C}_X^{Y}=\\mathcal{C}_X(Y)\\cdot\\mathcal{C}_X \\hspace{2mm}\\text{and}\\hspace{2mm}\n\\mathscr{C}_X^{Y}=\\mathcal{C}_X(Y)\\cdot\\mathscr{C}_X=\\mathscr{C}_X\\cdot\\mathcal{C}_X(Y).\n\\end{equation}\nMoreover, for all $Y,Z\\in\\mathcal{S}_X$ we have\n\\begin{equation}\\label{eq:xy2}\n\\mathcal{C}_X^Y\\cdot\\mathcal{C}_X^Z=\\mathcal{C}_X^{Y\\cap Z} \n\\hspace{2mm}\\text{and}\\hspace{2mm} \n\\mathscr{C}_X^Y\\cdot\\mathscr{C}_X^Z=\\mathscr{C}_X^{Y\\cap Z}.\n\\end{equation}\n\\end{lemma}\n\\proof The abelian case follows from \\eqref{eq:reg1} and a\nstraightforward computation. For the crossed product algebras we use\n$\\mathcal{C}_X(Y)\\cdot\\mathscr{C}_X=\\mathcal{C}_X(Y)\\cdot\\mathcal{C}_X\\cdot \\mathscr{T}_X$ and the first\nrelation in \\eqref{eq:xy1} for example. 
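For instance, the first equality in \\eqref{eq:xy2} is obtained by writing $\\mathcal{C}_X^Y\\cdot\\mathcal{C}_X^Z$ as the closure of the sum of the products $\\mathcal{C}_X(U)\\cdot\\mathcal{C}_X(V)$ over $U\\in\\mathcal{S}_Y$ and $V\\in\\mathcal{S}_Z$, noting that $\\mathcal{C}_X(U)\\cdot\\mathcal{C}_X(V)=\\mathcal{C}_X(U\\cap V)$ by \\eqref{eq:reg1} and that $U\\cap V$ runs over $\\mathcal{S}_{Y\\cap Z}$.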
\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\begin{lemma}\\label{lm:nxy}\nFor arbitrary $X,Y\\in\\mathcal{S}$ we have\n\\begin{equation}\\label{eq:main}\n\\mathcal{C}_X\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y\n=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y^{X\\cap Y}=\n\\mathcal{C}_X^{X\\cap Y}\\cdot\\mathscr{T}_{XY}.\n\\end{equation}\n\\end{lemma}\n\\proof\nIf $G\\in\\mathscr{S}$ contains $X\\cup Y$ then clearly\n$$\n\\mathcal{C}_X\\cdot\\mathscr{T}_{XY}=\n{\\textstyle\\sum^\\mathrm{c}_{Z\\in\\mathcal{S}_X} }\\mathcal{C}_X(Z)\\cdot\\mathscr{T}_{XY}=\n{\\textstyle\\sum^\\mathrm{c}_{Z\\in\\mathcal{S}_X} }\\mathcal{C}_G(Z)|_X\\cdot\\mathscr{T}_{XY}.\n$$\nFrom \\eqref{eq:ayza} and \\eqref{eq:reg2} we get\n$$\n\\mathcal{C}_G(Z)|_X\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Y\\cap Z).\n$$ \nSince $Y\\cap Z$ runs over $\\mathcal{S}_{X\\cap Y}$ when $Z$ runs over\n$\\mathcal{S}_X$ we obtain $\\mathcal{C}_X\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y^{X\\cap Y}$.\nSimilarly $\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y=\\mathcal{C}_X^{X\\cap Y}\\cdot\\mathscr{T}_{XY}$. \nOn the other hand $\\mathcal{C}_X^{X\\cap Y}=\\mathcal{C}_G^{X\\cap Y}|_X$ and similarly\nwith $X,Y$ interchanged, hence \n$\\mathcal{C}_X^{X\\cap Y}\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y^{X\\cap Y}$\nbecause of \\eqref{eq:ayza}. \n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\n\\begin{proposition}\\label{pr:mxyz}\nLet $X,Y,Z\\in\\mathcal{S}$. Then $\\mathscr{C}_{XY}^*=\\mathscr{C}_{YX}$ and\n\\begin{equation}\\label{eq:mxyz}\n\\mathscr{C}_{XZ}\\cdot\\mathscr{C}_{ZY}=\\mathscr{C}_{XY}\\cdot\\mathcal{C}_Y^{X\\cap Y\\cap Z}=\n\\mathcal{C}_X^{X\\cap Y\\cap Z}\\cdot\\mathscr{C}_{XY} \\subset \\mathscr{C}_{XY}. \n\\end{equation}\nIn particular $\\mathscr{C}_{XZ}\\cdot\\mathscr{C}_{ZY}=\\mathscr{C}_{XY}$ if $Z\\supset X\\cap Y$.\n\\end{proposition}\n\\proof \nThe first assertion follows from \\eqref{eq:rad}. From the\nDefinition \\ref{df:main} and Proposition \\ref{pr:product} we then get\n\\begin{align*}\n\\mathscr{C}_{XZ}\\cdot\\mathscr{C}_{ZY} &=\n\\mathcal{C}_X\\cdot\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}\\cdot\\mathcal{C}_Y =\n\\mathcal{C}_X\\cdot\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y\\cap Z)\\cdot\\mathcal{C}_Y \\\\ &=\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y\\cdot\\mathcal{C}_Y(X\\cap Y\\cap Z)\\cdot\\mathcal{C}_Y =\n\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y\\cap Z)\\cdot\\mathcal{C}_Y.\n\\end{align*}\nBut $\\mathcal{C}_Y(X\\cap Y\\cap Z)\\cdot\\mathcal{C}_Y=\\mathcal{C}_Y^{X\\cap Y\\cap Z}$ by Lemma\n\\ref{lm:xyprod}. For the last inclusion in \\eqref{eq:mxyz} we use\nthe obvious relation $\\mathcal{C}_Y^{X\\cap Y\\cap Z}\\cdot\\mathcal{C}_Y\\subset\\mathcal{C}_Y$.\nThe last assertion of the proposition follows from \\eqref{eq:main}. \n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\nThe following theorem is a consequence of the results obtained so\nfar.\n\n\\begin{theorem}\\label{th:nmod}\n$\\mathscr{C}_{XY}$ is a Hilbert $C^*$-submodule of $\\mathscr{L}_{XY}$ such that \n\\begin{equation}\\label{eq:nmod}\n\\mathscr{C}_{XY}^*\\cdot\\mathscr{C}_{XY}=\\mathscr{C}_Y^{X\\cap Y} \\text{ and }\n\\mathscr{C}_{XY}\\cdot\\mathscr{C}_{XY}^*=\\mathscr{C}_X^{X\\cap Y}.\n\\end{equation}\nIn particular, $\\mathscr{C}_{XY}$ is a\n$(\\mathscr{C}_X^{X\\cap Y},\\mathscr{C}_Y^{X\\cap Y})$ imprimitivity bimodule. 
\n\\end{theorem}\n\nWe recall the conventions\n\\begin{align}\n& X,Y\\in\\mathcal{S} \\text{ and } Y\\not\\subset X \\Rightarrow \n\\mathcal{C}_X(Y)= \\mathscr{C}_X(Y)=\\{0\\}, \\label{eq:convn} \\\\\n& X,Y,Z\\in\\mathcal{S} \\text{ and } Z\\not\\subset X\\cap Y \\Rightarrow\n\\mathscr{C}_{XY}(Z)=\\{0\\}.\\label{eq:convn1}\n\\end{align}\nFrom now on by ``graded'' we mean $\\mathcal{S}$-graded. Then\n$\\mathscr{C}_X=\\sum^\\mathrm{c}_{Y\\in\\mathcal{S}}\\mathscr{C}_X(Y)$ is a graded\n$C^*$-algebras supported by the ideal $\\mathcal{S}_X$ of $\\mathcal{S}$, in\nparticular it is a graded ideal in $\\mathscr{C}_X$. With the notations of\nSubsection \\ref{ss:grca} the algebra $\\mathscr{C}^Y_X=\\mathscr{C}_X(\\mathcal{S}_Y)$ is a\ngraded ideal of $\\mathscr{C}_X$ supported by $\\mathcal{S}_Y$. Similarly for $\\mathcal{C}_X$\nand $\\mathcal{C}_X^Y$.\n\nSince $\\mathscr{C}_X^{X\\cap Y}$ and $\\mathscr{C}_Y^{X\\cap Y}$ are ideals in $\\mathscr{C}_X$\nand $\\mathscr{C}_Y$ respectively, Theorem \\ref{th:nmod} allows us to equip\n$\\mathscr{C}_{XY}$ with (right) Hilbert $\\mathscr{C}_Y$-module and left Hilbert\n$\\mathscr{C}_X$-module structures (which are not full in general).\n\n\\begin{theorem}\\label{th:nmain}\nThe Hilbert $\\mathscr{C}_Y$-module $\\mathscr{C}_{XY}$ is graded by the family of\n$C^*$-submodules $\\{\\mathscr{C}_{XY}(Z)\\}_{Z\\in\\mathcal{S}}$.\n\\end{theorem}\n\\proof We use Proposition \\ref{pr:rhm} with $\\mathscr{M}=\\mathscr{T}_{XY}$ and\n$\\mathcal{C}_Y(Z)$ as algebras $\\mathcal{C}(\\sigma)$. Then $\\mathscr{A}=\\mathscr{C}_Y(X\\cap Y)$ by\n\\eqref{eq:hyz} hence $\\mathscr{A}\\cdot\\mathcal{C}_Y(Z)=\\mathscr{C}_Y(Z)$ and the conditions\nof the proposition are satisfied. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\n\\begin{remark}\\label{re:precise}\nThe following more precise statement is a consequence of the Theorem\n\\ref{th:nmain}: the Hilbert $\\mathscr{C}_Y^{X\\cap Y}$-module $\\mathscr{C}_{XY}$ is\n$\\mathcal{S}_{X\\cap Y}$-graded by the family of $C^*$-submodules\n$\\{\\mathscr{C}_{XY}(Z)\\}_{Z\\in\\mathcal{S}_{X\\cap Y}}$.\n\\end{remark}\n\nFinally, we may construct the $C^*$-algebra $\\mathscr{C}$ which is of main\ninterest for us, the many-body Hamiltonian algebra. We shall\ndescribe it as an algebra of operators on the Hilbert space\n\\begin{equation}\\label{eq:bigh}\n\\mathcal{H}\\equiv\\mathcal{H}_\\mathcal{S}={\\textstyle\\oplus_{X\\in\\mathcal{S}}} \\mathcal{H}_X\n\\end{equation}\nwhich is a kind of Boltzmann-Fock space (without symmetrization or\nanti-symmetrization) determined by the semilattice $\\mathcal{S}$. Note that\nif the zero group $O=\\{0\\}$ belongs to $\\mathcal{S}$ then $\\mathcal{H}$ contains\n$\\mathcal{H}_O=\\mathbb{C}$ as a subspace, this is the vacuum sector. Let\n$\\Pi_{X}$ be the orthogonal projection of $\\mathcal{H}$ onto $\\mathcal{H}_X$ and let\nus think of its adjoint $\\Pi_{X}^*$ as the natural embedding\n$\\mathcal{H}_X\\subset\\mathcal{H}$. Then for any pair $X,Y\\in\\mathcal{S}$ we identify\n\\begin{equation}\\label{eq:identc}\n\\mathscr{C}_{XY}\\equiv\\Pi^*_{X}\\mathscr{C}_{XY}\\Pi_{Y} \\subset L(\\mathcal{H}).\n\\end{equation}\nThus we realize $\\{\\mathscr{C}_{XY}\\}_{X,Y\\in\\mathcal{S}}$ as a linearly independent\nfamily of closed subspaces of $L(\\mathcal{H})$ such that\n$\\mathscr{C}_{XY}^*=\\mathscr{C}_{YX}$ and $\\mathscr{C}_{XZ}\\mathscr{C}_{Z'Y}\\subset\\mathscr{C}_{XY}$ for all\n$X,Y,Z,Z'\\in\\mathcal{S}$. 
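Here the product $\\mathscr{C}_{XZ}\\mathscr{C}_{Z'Y}$ is zero if $Z\\neq Z'$, because $\\Pi_Z\\Pi_{Z'}^*=0$ under the identification \\eqref{eq:identc}, while for $Z=Z'$ it is contained in $\\mathscr{C}_{XY}$ by Proposition \\ref{pr:mxyz}.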
Then by what we proved before, especially\nProposition \\ref{pr:mxyz}, the space\n$\\sum\\nolimits_{X,Y\\in\\mathcal{S}}\\mathscr{C}_{XY}$ is a $*$-subalgebra of $L(\\mathcal{H})$\nhence its closure\n\\begin{equation}\\label{eq:bigco}\n\\mathscr{C}\\equiv\\mathscr{C}_\\mathcal{S}= {\\textstyle\\sum^\\mathrm{c}_{X,Y\\in\\mathcal{S}}}\\mathscr{C}_{XY}.\n\\end{equation}\nis a $C^*$-algebra of operators on $\\mathcal{H}$. Note that one may view\n$\\mathscr{C}$ as a matrix $(\\mathscr{C}_{XY})_{X,Y\\in\\mathcal{S}}$. \n\nIn a similar way one may associate to the spaces $\\mathscr{T}_{XY}$ a\nclosed self-adjoint subspace $\\mathscr{T}\\subset L(\\mathcal{H})$. It is also useful\nto define a new subspace $\\mathscr{T}^\\circ\\subset L(\\mathcal{H})$ by\n$\\mathscr{T}^\\circ_{XY}=\\mathscr{T}_{XY}$ if $X\\sim Y$ and $\\mathscr{T}^\\circ=\\{0\\}$ if\n$X\\not\\sim Y$. Here $X\\sim Y$ means $X\\subset Y$ or $Y\\subset X$\n. Clearly $\\mathscr{T}^\\circ$ is a closed self-adjoint linear subspace of\n$\\mathscr{T}$. Finally, let $\\mathcal{C}$ be the diagonal $C^*$-algebra\n$\\mathcal{C}\\equiv\\oplus_X\\mathcal{C}_X$ of operators on $\\mathcal{H}$.\n\n\\begin{theorem}\\label{th:tc}\nWe have $\\mathscr{C}=\\mathscr{T}\\cdot\\mathcal{C}=\\mathcal{C}\\cdot\\mathscr{T}=\\mathscr{T}\\cdot\\mathscr{T}=\n\\mathscr{T}^\\circ\\cdot\\mathscr{T}^\\circ$.\n\\end{theorem}\n\\proof The first two equalities are an immediate consequence of the\nDefinition \\ref{df:main}. To prove the third equality we use\nProposition \\ref{pr:product}, more precisely the relation\n\\[\n\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(X\\cap Y\\cap Z)=\n\\mathscr{C}_{XY}(X\\cap Y\\cap Z)\n\\]\nwhich holds for any $X,Y,Z$. Then\n\\[\n{\\textstyle\\sum^\\mathrm{c}_Z}\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}=\n{\\textstyle\\sum^\\mathrm{c}_Z}\\mathscr{C}_{XY}(X\\cap Y\\cap Z)=\n{\\textstyle\\sum^\\mathrm{c}_Z}\\mathscr{C}_{XY}(Z)=\\mathscr{C}_{XY}\n\\]\nwhich is equivalent to $\\mathscr{T}\\cdot\\mathscr{T}=\\mathscr{C}$. Now we prove the last\nequality in the proposition. We have\n\\[\n{\\textstyle\\sum^\\mathrm{c}_Z} \\mathscr{T}^\\circ_{XZ}\\cdot\\mathscr{T}^\\circ_{ZY}= \n\\text{ closure of the sum }\n{\\textstyle\\sum_{\\substack{Z\\sim X\\\\ Z\\sim Y}}}\n\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}. \n\\]\nIn the last sum we have four possibilities: $Z\\supset X\\cup Y$,\n$X\\supset Z\\supset Y$, $Y\\supset Z\\supset X$, and $Z\\subset X\\cap\nY$. In the first three cases we have $Z\\supset X\\cap Y$ hence\n$\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}=\\mathscr{T}_{XY}$ by \\eqref{eq:factor}. In the last\ncase we have $\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Z)$ by\n\\eqref{eq:product}. This proves $\\mathscr{T}^\\circ\\cdot\\mathscr{T}^\\circ=\\mathscr{C}$.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\nFinally, we are able to equip $\\mathscr{C}$ with an $\\mathcal{S}$-graded\n$C^*$-algebra structure.\n\n\\begin{theorem}\\label{th:cgrad}\nFor each $Z\\in\\mathcal{S}$ the space $\\mathscr{C}(Z):=\n\\sum^\\mathrm{c}_{X,Y\\in\\mathcal{S}}\\mathscr{C}_{XY}(Z)$ is a $C^*$-subalgebra of\n$\\mathscr{C}$. 
The family $\\{\\mathscr{C}(Z)\\}_{Z\\in\\mathcal{S}}$ defines a graded\n$C^*$-algebra structure on $\\mathscr{C}$.\n\\end{theorem}\n\\proof \nWe first prove the following relation:\n\\begin{equation}\\label{eq:xyzef}\n\\mathscr{C}_{XZ}(E)\\cdot\\mathscr{C}_{ZY}(F)=\\mathscr{C}_{XY}(E\\cap F)\n\\quad \\text{if } X,Y,Z\\in\\mathcal{S} \\text{ and } E\\subset{X\\cap Z},\nF\\subset{Y\\cap Z}.\n\\end{equation}\nFrom Definition \\ref{df:nxyz}, Proposition \\ref{pr:product},\nrelations \\eqref{eq:reg1} and \\eqref{eq:ayza}, and \n$F\\subset Y\\cap Z$, we get\n\\begin{align*}\n\\mathscr{C}_{XZ}(E)\\cdot\\mathscr{C}_{ZY}(F) &=\n\\mathcal{C}_X(E)\\cdot\\mathscr{T}_{XZ}\\cdot\\mathscr{T}_{ZY}\\cdot\\mathcal{C}_Y(F) \\\\\n&= \\mathcal{C}_X(E)\\cdot\\mathscr{T}_{XY}\\cdot\\mathcal{C}_{Y}(Y\\cap Z)\\cdot\\mathcal{C}_Y(F) \\\\\n&= \\mathcal{C}_X(E)\\cdot\\mathscr{T}_{XY}\\cdot\\mathcal{C}_{Y}(F) \\\\\n&= \\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Y\\cap E)\\cdot\\mathcal{C}_{Y}(F) \\\\\n&= \\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y(Y\\cap E\\cap F).\n\\end{align*}\nAt the next to last step we used $\\mathcal{C}_X(E)=\\mathcal{C}_G(E)|_X$ for some\n$G\\in\\mathscr{S}$ containing both $X$ and $Y$ and then \\eqref{eq:ayza},\n\\eqref{eq:reg2}. Finally, we use $\\mathcal{C}_Y(Y\\cap E\\cap F)=\\mathcal{C}_Y(E\\cap\nF)$ and the Definition \\ref{df:nxyz}. This proves \\eqref{eq:xyzef}.\nDue to the conventions \\eqref{eq:convn}, \\eqref{eq:convn1} we now\nget from \\eqref{eq:xyzef} for $E,F\\in\\mathcal{S}$\n\\[\n{\\textstyle\\sum_{Z\\in\\mathcal{S}}}\\mathscr{C}_{XZ}(E)\\cdot\\mathscr{C}_{ZY}(F)=\n\\mathscr{C}_{XY}(E\\cap F).\n\\]\nThus $\\mathscr{C}(E)\\mathscr{C}(F)\\subset\\mathscr{C}(E\\cap F)$, in particular $\\mathscr{C}(E)$ is a\n$C^*$-algebra. It remains to be shown that the family of\n$C^*$-algebras $\\{\\mathscr{C}(E)\\}_{E\\in\\mathcal{S}}$ is linearly independent. Let\n$A(E)\\in\\mathscr{C}(E)$ such that $A(E)=0$ but for a finite number of $E$\nand assume that $\\sum_E A(E)=0$. Then for all $X,Y\\in\\mathcal{S}$\nwe have $\\sum_E \\Pi_X A(E) \\Pi_Y^* =0$. Clearly \n$\\Pi_X A(E) \\Pi_Y^*\\in\\mathscr{C}_{XY}(E)$ hence from Theorem \\ref{th:nmain}\nwe get $\\Pi_X A(E) \\Pi_Y^*=0$ for all $X,Y$ so $A(E)=0$ for all $E$.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\subsection{Subsystems}\n\\label{ss:T} \n\nWe now point out some interesting subalgebras of\n$\\mathscr{C}$. If $\\mathcal{T}\\subset\\mathcal{S}$ is any subset let\n\\begin{equation}\\label{eq:t}\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}\\equiv{\\textstyle\\sum_{X,Y\\in\\mathcal{T}}^\\mathrm{c}}\\mathscr{C}_{XY} \\quad\n\\text{and} \\quad \\mathcal{H}_\\mathcal{T}\\equiv\\oplus_{X\\in\\mathcal{T}}\\mathcal{H}_X.\n\\end{equation}\nNote that the sum defining $\\mathscr{C}^\\mathcal{T}_\\mathcal{S}$ is already closed if $\\mathcal{T}$ is\nfinite and that $\\mathscr{C}^\\mathcal{T}_\\mathcal{S}$ is a $C^*$-algebra which lives on the\nsubspace $\\mathcal{H}_\\mathcal{T}$ of $\\mathcal{H}$. In fact, if $\\Pi_\\mathcal{T}$ is the orthogonal\nprojection of $\\mathcal{H}$ onto $\\mathcal{H}_\\mathcal{T}$ then\n\\begin{equation}\\label{eq:tt}\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}=\\Pi_\\mathcal{T}\\mathscr{C}_\\mathcal{S}\\Pi_\\mathcal{T}\n\\end{equation}\nand this is a $C^*$-algebra because $\\mathscr{C}\\Pi_\\mathcal{T}\\mathscr{C}\\subset\\mathscr{C}$ by\nProposition \\ref{pr:mxyz}. 
\nIt is easy to check that $\\mathscr{C}^\\mathcal{T}_\\mathcal{S}$ is a graded $C^*$-subalgebra of\n$\\mathscr{C}$ supported by the ideal $\\textstyle{\\bigcup}_{X\\in\\mathcal{T}}\\mathcal{S}_X$ generated by\n$\\mathcal{T}$ in $\\mathcal{S}$. Indeed, we have\n\\[\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}\\,\\textstyle{\\bigcap}\\,\\mathscr{C}(E)=\n\\left({\\textstyle\\sum_{X,Y\\in\\mathcal{T}}^\\mathrm{c}}\\mathscr{C}_{XY}\\right) \\textstyle{\\bigcap}\n\\left({\\textstyle\\sum_{X,Y\\in\\mathcal{S}}^\\mathrm{c}}\\mathscr{C}_{XY}(E)\\right) =\n{\\textstyle\\sum_{X,Y\\in\\mathcal{T}}^\\mathrm{c}}\\mathscr{C}_{XY}(E).\n\\]\nIt is clear that $\\mathscr{C}$ is the inductive limit of the increasing\nfamily of $C^*$-algebras $\\mathscr{C}^\\mathcal{T}_\\mathcal{S}$ with finite $\\mathcal{T}$.\n\nIf $\\mathcal{T}=\\{X\\}$ then $\\mathscr{C}^\\mathcal{T}_\\mathcal{S}$ is just $\\mathscr{C}_X$. If\n$\\mathcal{T}=\\{X,Y\\}$ with distinct $X,Y$ we get a simple but nontrivial\nsituation. Indeed, we shall have $\\mathcal{H}_\\mathcal{T}=\\mathcal{H}_X\\oplus\\mathcal{H}_Y$ and\n$\\mathscr{C}^\\mathcal{T}_\\mathcal{S}$ may be thought as a matrix\n\\[\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}=\n\\begin{pmatrix}\n\\mathscr{C}_X & \\mathscr{C}_{XY}\\\\\n\\mathscr{C}_{YX} & \\mathscr{C}_Y\n\\end{pmatrix}.\n\\]\nThe grading is now explicitly defined as follows: \n\\begin{compactenum}\n\\item\nIf $E\\subset X\\cap Y$ then\n\\[\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}(E)=\n\\begin{pmatrix}\n\\mathscr{C}_X(E) & \\mathscr{C}_{XY}(E)\\\\\n\\mathscr{C}_{YX}(E) & \\mathscr{C}_Y(E)\n\\end{pmatrix}.\n\\]\n\\item \\label{p:2ex}\nIf $E\\subset X$ and $E\\not\\subset Y$ then\n\\[\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}(E)=\n\\begin{pmatrix}\n\\mathscr{C}_X(E) & 0\\\\\n0 & 0\n\\end{pmatrix}.\n\\]\n\\item\nIf $E\\not\\subset X$ and $E\\subset Y$ then\n\\[\n\\mathscr{C}^\\mathcal{T}_\\mathcal{S}(E)=\n\\begin{pmatrix}\n0 & 0\\\\\n0 & \\mathscr{C}_Y(E)\n\\end{pmatrix}.\n\\]\n\\end{compactenum}\n\n\nThe case when $\\mathcal{T}$ is of the form $\\mathcal{S}_X$ for some $X\\in\\mathcal{S}$ is\nespecially interesting. We denote $\\mathscr{C}_X^\\#\\equiv\\mathscr{C}_{\\mathcal{S}_X}$ and\nwe say that the $\\mathcal{S}_X$-graded $C^*$-algebra is the\n\\emph{unfolding} of the algebra $\\mathscr{C}_X$. More explicitly\n\\begin{equation}\\label{eq:xc}\n\\mathscr{C}_X^\\#\\equiv{\\textstyle\\sum^\\mathrm{c}_{Y,Z\\in\\mathcal{S}_X}}\\mathscr{C}_{YZ}.\n\\end{equation}\nThe self-adjoint operators affiliated to $\\mathscr{C}_X$ live on the Hilbert\nspace $\\mathcal{H}_X$ and are (an abstract version of) Hamiltonians of an\n$N$-particle system $\\mathscr{S}$ with a fixed $N$ (the configuration space\nis $X$ and $N$ is the number of levels of the semilattice\n$\\mathcal{S}_X$). The unfolding $\\mathscr{C}_X^\\#$ lives on the ``Boltzmann-Fock\nspace'' $\\mathcal{H}_{\\mathcal{S}_X}$ and is obtained by adding interactions which\ncouple the subsystems of $\\mathcal{S}$ which have the groups $Y\\in\\mathcal{S}_X$ as\nconfiguration spaces and $\\mathscr{C}_Y$ as Hamiltonian algebras.\n\nClearly $\\mathscr{C}_X^\\#\\subset\\mathscr{C}_Y^\\#$ if $X\\subset Y$ and $\\mathscr{C}$ is the\ninductive limit of the algebras $\\mathscr{C}_X^\\#$. Below we give an\ninteresting alternative description of $\\mathscr{C}_X^\\#$.\n\n\\begin{theorem}\\label{th:mor}\nLet $\\mathscr{N}_X=\\oplus_{Y\\in\\mathcal{S}_X}\\mathscr{C}_{YX}$ be the direct sum of the\nHilbert $\\mathscr{C}_X$-modules $\\mathscr{C}_{YX}$ equipped with the direct sum graded\nstructure. 
Then $\\mathcal{K}(\\mathscr{N}_X) \\cong \\mathscr{C}_X^\\#$ the isomorphism being such\nthat the graded structure on $\\mathcal{K}(\\mathscr{N}_X)$ defined in Theorem\n\\ref{th:kghm} is transported into that of $\\mathscr{C}_X^\\#$. In other terms,\n$\\mathscr{C}_X^\\#$ is the imprimitivity algebra of the full Hilbert\n$\\mathscr{C}_X$-module $\\mathscr{N}_X$ and $\\mathscr{C}_X$ and $\\mathscr{C}_X^\\#$ are Morita\nequivalent.\n\\end{theorem}\n\\proof If $Y\\subset X$ then $\\mathscr{C}^*_{YX}\\cdot\\mathscr{C}_{YX}=\\mathscr{C}_X^Y$ and\n$\\mathscr{C}_{YX}$ is a full Hilbert $\\mathscr{C}_X^Y$-module. Since the $\\mathscr{C}_X^Y$\nare ideals in $\\mathscr{C}_X$ and their sum over $Y\\in\\mathcal{S}_X$ is equal to\n$\\mathscr{C}_X$ we see that $\\mathscr{N}_X$ becomes a full Hilbert graded\n$\\mathscr{C}_X$-module supported by $\\mathcal{S}_X$, cf. Section \\ref{s:grad}. By\nTheorem \\ref{th:kghm} the imprimitivity $C^*$-algebra $\\mathcal{K}(\\mathscr{N}_X)$\nis equipped with a canonical $\\mathcal{S}_X$-graded structure.\n\nWe shall make a comment on $\\mathcal{K}(\\mathscr{M})$ in the more general the case\nwhen $\\mathscr{M}=\\oplus_i\\mathscr{M}_i$ is a direct sum of Hilbert $\\mathscr{A}$-modules\n$\\mathscr{M}_i$, cf. \\S\\ref{ss:gf}. First, it is clear that we have \n\\[\n\\mathcal{K}(\\mathscr{M})={\\textstyle\\sum^\\mathrm{c}_{ij}}\\mathcal{K}(\\mathscr{M}_j,\\mathscr{M}_i)\\cong\n(\\mathcal{K}(\\mathscr{M}_j,\\mathscr{M}_i))_{ij}.\n\\]\nNow assume that $\\mathcal{E},\\mathcal{E}_i$ are Hilbert spaces such that $\\mathscr{A}$ is a\n$C^*$-algebra of operators on $\\mathcal{E}$ and $\\mathscr{M}_i$ is a Hilbert\n$C^*$-submodule of $L(\\mathcal{E},\\mathcal{E}_i)$ such that\n$\\mathscr{A}_i\\equiv\\mathscr{M}_i^*\\cdot\\mathscr{M}_i$ is an ideal of $\\mathscr{A}$. \nThen by Proposition \\ref{pr:2ss} we have\n$\\mathcal{K}(\\mathscr{M}_j,\\mathscr{M}_i)\\cong\\mathscr{M}_i\\cdot\\mathscr{M}_j^*\\subset L(\\mathcal{E}_j,\\mathcal{E}_i)$. \n\nIn our case we take \n\\[\ni=Y\\in\\mathcal{S}_X,\\quad \\mathscr{M}_i=\\mathscr{C}_{YX}, \\quad \\mathscr{A}=\\mathscr{C}_X,\\quad\n\\mathcal{E}=\\mathcal{H}_X,\\quad \\mathcal{E}_i=\\mathcal{H}_Y,\\quad \\mathscr{A}_i=\\mathscr{C}_X^Y.\n\\]\nThen we get\n\\[\n\\mathcal{K}(\\mathscr{M}_j,\\mathscr{M}_i)\\equiv\\mathcal{K}(\\mathscr{C}_{ZX},\\mathscr{C}_{YX})\\cong\n\\mathscr{C}_{YX}\\cdot\\mathscr{C}_{ZX}^*=\\mathscr{C}_{YX}\\cdot\\mathscr{C}_{XZ}=\\mathscr{C}_{YZ}\n\\]\nby Proposition \\ref{pr:mxyz}.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\\begin{remark}\\label{re:squant}\n We understood the role in our work of the imprimitivity algebra of\n a Hilbert $C^*$-module thanks to a discussion with Georges\n Skandalis: he recognized (a particular case of) the main\n $C^*$-algebra $\\mathscr{C}$ we have constructed as the imprimitivity\n algebra of a certain Hilbert $C^*$-module. Theorem \\ref{th:mor} is\n a reformulation of his observation and of his abstract\n construction of graded Hilbert $C^*$-modules in the present\n framework (at the time of the discussion our definition of $\\mathscr{C}$\n was rather different because we were working in a tensor product\n formalism). More generally, if $\\mathscr{M}$ is a full Hilbert\n $\\mathscr{A}$-module then the imprimitivity $C^*$-algebra $\\mathcal{K}(\\mathscr{M})$ could\n also be interpreted as Hamiltonian algebra of a system related in\n some natural way to the initial one. For example, this is a\n natural method of ``second quantizing'' \\mbox{$N$-body} systems,\n i.e. 
introducing interactions which couple subsystems\ncorresponding to different cluster decompositions of the $N$-body\nsystems. This is clear in the physical $N$-body situation\ndiscussed in \\S\\ref{ss:cexample}.\n\\end{remark}\n\n\n\\section{An intrinsic description}\n\\label{s:id}\n\\protect\\setcounter{equation}{0}\n\nWe begin with some preliminary facts on crossed products. Let $X$\nbe a locally compact abelian group. The next result, due to\nLandstad \\cite{Ld}, gives an ``intrinsic'' characterization of\ncrossed products of \\mbox{$X$-algebras} by the action of $X$. We\nfollow the presentation from \\cite[Theorem 3.7]{GI4} which takes\nadvantage of the fact that $X$ is abelian.\n\n\\begin{theorem}\\label{th:land}\nA $C^*$-algebra $\\mathscr{A}\\subset \\mathscr{L}_X$ is a crossed product\nif and only if for each $A\\in\\mathscr{A}$ we have:\n\\begin{itemize}\n\\vspace{-2mm} \n\\item\nif $k\\in X^*$ then $V_k^*AV_k\\in\\mathscr{A}$ and\n$\\lim_{k\\rarrow0}\\|V_k^*AV_k-A\\|=0$,\n\\item\nif $x\\in X$ then $U_xA\\in\\mathscr{A}$ and $\\lim_{x\\rarrow0}\\|(U_x-1)A\\|=0$.\n\\vspace{-2mm}\n\\end{itemize} \nIn this case one has $\\mathscr{A}=\\mathcal{A}\\rtimes X$ for a unique $X$-algebra\n$\\mathcal{A}\\subset\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ and this algebra is given by\n\\begin{equation}\\label{eq:land}\n\\mathcal{A} =\\{\\varphi\\in\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)\\mid \n\\varphi(Q)S \\in {\\mathscr{A}} \\hspace{1mm}\\text{and}\\hspace{1mm}\n\\bar\\varphi(Q)S \\in {\\mathscr{A}}\n\\hspace{1mm}\\text{for all}\\hspace{1mm} S \\in \\mathscr{T}_X\\}.\n\\end{equation} \n\\end{theorem}\nNote that the second condition above is equivalent\nto $\\mathscr{T}_X\\cdot\\mathscr{A}=\\mathscr{A}$, cf. Lemma \\ref{lm:help}.\n\n\nThe following consequence of Landstad's theorem is an intrinsic \ndescription of $\\mathscr{C}_X(Y)$.\n\n\\begin{theorem}\\label{th:cxy} \n$\\mathscr{C}_X(Y)$ is the set of $A\\in \\mathscr{L}_X$ such that $U_y^*AU_y=A$ for all\n$y\\in Y$ and:\n\\begin{enumerate}\n\\item[{\\rm(1)}]\n$\\|U_x^*AU_x-A\\|\\to 0$ if $x\\to 0$ in $X$ and \n$\\|V_k^*AV_k-A\\|\\to 0$ if $k\\to 0$ in $X^*$,\n\\item[{\\rm(2)}]\n$\\|(U_x-1)A\\|\\to 0$ if $x\\to 0$ in $X$ and $\\|(V_k-1)A\\|\\to 0$ if\n$k\\to 0$ in $Y^\\perp$. \n\\end{enumerate} \n\\end{theorem}\n\nBy ``$k\\to 0$ in $Y^\\perp$'' we mean: $k\\in Y^\\perp$ and $k\\to 0$.\nNote that the second condition above is equivalent to:\n\\begin{equation}\\label{eq:cxy}\n\\text{there are } \\theta\\in\\mathscr{T}_X,\\ \\psi\\in\\mathcal{C}_X(Y)\n\\text{ and } B,C\\in\\mathscr{L}_X \\text{ such that } \nA=\\theta(P)B=\\psi(Q)C.\n\\end{equation}\nFor the proof, use $Y^\\perp\\cong (X\/Y)^*$ and apply Lemma\n\\ref{lm:help}. In particular, the last factorization shows that for\neach $\\varepsilon>0$ there is a compact set $M\\subset X$ such that\n$\\|\\cchi_V(Q)A\\|<\\varepsilon$, where $V=X\\setminus(M+Y)$.\n\n\\noindent{\\bf Proof of Theorem \\ref{th:cxy}:} Let $\\mathscr{A}\\subset \\mathscr{L}_X$\nbe the set of operators $A$ satisfying the conditions from the\nstatement of the theorem. We first prove that $\\mathscr{A}$ satisfies the\ntwo conditions of Theorem \\ref{th:land}. Let $A\\in\\mathscr{A}$. We have to\nshow that $A_p\\equiv V_p^*AV_p\\in\\mathscr{A}$ and $\\|V_p^*AV_p-A\\|\\to0$ as\n$p\\to0$. From the commutation relations $U_xV_p=p(x)V_pU_x$ we get\n$\\|(U_x-1)A_p\\|=\\|(U_x-p(x))A\\|\\to0$ if $x\\to0$ and the second part\nof condition 1 of the theorem is obviously satisfied by $A_p$. 
Then\nfor $y\\in Y$\n$$\nU_y^*A_pU_y=U_y^*V_p^*AV_pU_y=V_p^*U_y^*AU_yV_p=V_p^*AV_p=A_p.\n$$ Condition 2 is clear so we have $A_p\\in\\mathscr{A}$ and the fact that\n$\\|V_p^*AV_p-A\\|\\to0$ as $p\\to0$ is obvious. That $A$ satisfies the\nsecond Landstad condition, namely that for each $a\\in X$ we have\n$U_aA\\in\\mathscr{A}$ and $\\|(U_a-1)A\\|\\to0$ as $a\\to0$, is also clear because\n$\\|[U_a,V_k]\\|\\to0$ as $k\\to0$.\n\nNow we have to find the algebra $\\mathcal{A}$ defined by \\eqref{eq:land}.\nAssume that $\\varphi\\in\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ satisfies $\\varphi(Q)S\\in\\mathscr{A}$ for all\n$S\\in \\mathscr{T}_X$. Since $U_y^*\\varphi(Q)U_y=\\varphi(Q-y)$ we get\n$(\\varphi(Q)-\\varphi(Q-y))S=0$ for all such $S$ and all $y\\in Y$,\nhence $\\varphi(Q)-\\varphi(Q-y)=0$ which means $\\varphi\\in\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X\/Y)$.\nWe shall prove that $\\varphi\\in\\mathcal{C}_X(Y)$ by reductio ad absurdum. \n\nIf $\\varphi\\notin\\mathcal{C}_X(Y)$ then there is $\\mu>0$ and there is a\nsequence of points $x_n\\in X$ such that $x_n\/Y\\to\\infty$ and\n$|\\varphi(x_n)|>2\\mu$. From the uniform continuity of $\\varphi$ we\nsee that there is a compact neighborhood $K$ of zero in $X$ such\nthat $|\\varphi|>\\mu$ on $\\bigcup_n(x_n+K)$. Let $K'$ be a compact\nneighborhood of zero such that $K'+K'\\subset K$ and let us choose\ntwo positive not zero functions $\\psi,f\\in\\cc_{\\mathrm{c}}(K')$. We define $S\\in\n\\mathscr{T}_X$ by $Su=\\psi*u$ and recall that $\\mbox{\\rm supp\\! } Su\\subset\\mbox{\\rm supp\\! }\n\\psi+\\mbox{\\rm supp\\! } u$. Thus $\\mbox{\\rm supp\\! } SU_{x_n}^*f\\subset K'+x_n+K'\\subset\nx_n+K$. Now let $V$ be as in the remarks after \\eqref{eq:cxy}. Since\n$\\pi_Y(x_n)\\to\\infty$ we have $x_n+K\\subset V$ for $n$ large enough,\nhence\n$$\n\\|\\cchi_V(Q)\\varphi(Q)SU_{x_n}^*f\\|\\geq\\mu\\|SU_{x_n}^*f\\|=\n\\mu\\|Sf\\| >0.\n$$\nOn the other hand, for each $\\varepsilon>0$ one can choose $V$ such\nthat $\\|\\cchi_V(Q)\\varphi(Q)S\\|<\\varepsilon$. Then we shall have\n$\\|\\cchi_V(Q)\\varphi(Q)SU_{x_n}^*f\\|\\leq\\varepsilon\\|f\\|$ so\n$\\mu\\|Sf\\|\\leq\\varepsilon\\|f\\|$ for all $\\varepsilon>0$ which is\nabsurd. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\nWe now give a similar characterization of $\\mathscr{C}_{XY}(Z)$ where $X,Y$\nis a compatible pair of closed subgroups of an lca group $G$.\n\n\\begin{theorem}\\label{th:yzintr}\n $\\mathscr{C}_{XY}(Z)$ is the set of $T\\in\\mathscr{L}_{XY}$ satisfying the\n following conditions:\n\\begin{enumerate} \\vspace{-2mm}\n\\item[{\\rm(1)}] $U_z^*T U_z=T$ if $z\\in Z$ and $\\|V^*_k T V_k-T\\|\\to\n 0$ if $k\\to 0$ in $(X+Y)^*$\n\\item[{\\rm(2)}]\n$\\|(U_x-1)T\\|\\to 0$ if $x\\to 0$ in $X$ and \n$\\|T(U_y-1)\\|\\to 0$ if $y\\to 0$ in $Y$, \n\\item[{\\rm(3)}]\n$\\|(V_k-1)T\\|\\to 0$ if $k\\to 0$ in $(X\/Z)^*$ and \n$\\|T(V_k-1)\\|\\to 0$ if $k\\to 0$ in $(Y\/Z)^*$.\n\\end{enumerate}\n\\end{theorem}\n\nBefore the proof we make some preliminary comments. We think of\n$X+Y$ as a closed subgroup of $G\\in\\mathscr{S}$ which contains $X$ and $Y$\nas closed subgroups. Each character $k\\in(X+Y)^*$ defines by\nrestriction a character $k|_X\\in X^*$ and the map $k\\mapsto k|_X$ is\na continuous open surjection. And similarly if $X$ is replaced by\n$Y$. In (1) the operator $V_k$ acts in $L^2(X)$ as multiplication by\n$k|_X$ and in $L^2(Y)$ as multiplication by $k|_Y$. 
In the first\npart of (3) we take $k\\in X^*$ and identify $(X\/Z)^*$ with the\northogonal of $Z$ in $X^*$ and similarly for the second part.\n\nAssumptions (2) and (3) of Theorem \\ref{th:yzintr} are decay\nconditions in certain directions in $P$ and $Q$ space. Indeed, by\nLemma \\ref{lm:help} condition (2) is equivalent to:\n\\begin{equation}\\label{eq:cond1}\n\\text{there are } S_1\\in \\mathscr{T}_X, S_2\\in \\mathscr{T}_Y \\text{ and }\nR_1,R_2\\in\\mathscr{L}_{XY} \\text{ such that } T=S_1R_1=R_2S_2.\n\\end{equation}\nRecall that $\\mathscr{T}_X\\cong\\cc_{\\mathrm{o}}(X^*)$ for example. Then\ncondition (3) is equivalent to:\n\\begin{equation}\\label{eq:cond2}\n\\text{there are } S_1\\in \\mathcal{C}_X(Z), S_2\\in \\mathcal{C}_Y(Z) \\text{ and }\nR_1,R_2\\in\\mathscr{L}_{XY} \\text{ such that } T=S_1R_1=R_2S_2.\n\\end{equation}\n\n\n\\noindent{\\bf Proof of Theorem \\ref{th:yzintr}:} The set $\\mathscr{C}$ of all\nthe operators satisfying the conditions of the theorem is clearly a\nclosed subspace of $\\mathscr{L}_{XY}$. We have $\\mathscr{C}_{X,Y}(Z)\\subset\\mathscr{C}$\nbecause \\eqref{eq:cond1}, \\eqref{eq:cond2} are satisfied by any\n$T\\in\\mathscr{C}_{XY}(Z)$ as a consequence of Theorem \\ref{th:nxyz}. Then we\nget:\n$$\n\\mathscr{C}_Y(Z)= \n\\mathscr{C}_{XY}^*(Z)\\cdot\\mathscr{C}_{XY}(Z)\\subset \\mathscr{C}^*\\cdot\\mathscr{C}, \n\\hspace{1mm}\n\\mathscr{C}_X(Z)=\n\\mathscr{C}_{XY}(Z)\\cdot\\mathscr{C}_{XY}^*(Z)\\subset \\mathscr{C}\\cdot\\mathscr{C}^*.\n$$ \nWe prove that equality holds in both these relations. We show, for\nexample, that $A\\equiv TT^*$ belongs to $\\mathscr{C}_X(Z)$ if $T\\in\\mathscr{C}$ and\nfor this we shall use Theorem \\ref{th:cxy} with $Y$ replaced by\n$Z$. That $U_z^*AU_z=A$ for $z\\in Z$ is clear. From \\eqref{eq:cond1}\nwe get $A=S_1R_1R_1^*S_1^*$ with $S_1\\in\\mathscr{T}_X$ hence\n$\\|(U_x-1)A\\|\\to 0$ and $\\|A(U_x-1)\\|\\to 0$ as $x\\to0$ in $X$ are\nobvious and imply $\\|U_x^*AU_x-A\\|\\to 0$. Then \\eqref{eq:cond2}\nimplies $A=\\psi(Q)C$ with $\\psi\\in\\mathcal{C}_X(Z)$ and bounded $C$ hence\n\\eqref{eq:cxy} is satisfied.\n\nThat $\\mathscr{C}\\rc_Y(Z)\\subset\\mathscr{C}$ is easily proven because $T=SA$ has the\nproperties \\eqref{eq:cond1} and \\eqref{eq:cond2} if $S$ belongs to\n$\\mathscr{C}$ and $A$ to $\\mathscr{C}_Y(Z)$, cf. Theorem \\ref{th:cxy}. From what we\nhave shown above we get $\\mathscr{C}\\rc^*\\mathscr{C}\\subset\\mathscr{C}\\rc_Y(Z)\\subset\\mathscr{C}$ so\n$\\mathscr{C}$ is a Hilbert $C^*$-submodule of $\\mathscr{L}_{XY}$. On the other hand,\n$\\mathscr{C}_{XY}(Z)$ is a Hilbert $C^*$-submodule of $\\mathscr{L}_{XY}$ such that\n$\\mathscr{C}_{XY}^*(Z)\\cdot\\mathscr{C}_{XY}(Z)=\\mathscr{C}^*\\cdot\\mathscr{C}$ and\n$\\mathscr{C}_{XY}(Z)\\cdot\\mathscr{C}_{XY}^*(Z)=\\mathscr{C}\\cdot\\mathscr{C}^*$. Since\n$\\mathscr{C}_{XY}(Z)\\subset\\mathscr{C}$ we get $\\mathscr{C}=\\mathscr{C}_{XY}(Z)$ from Proposition\n\\ref{pr:clsubmod}. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\n\nIf $Z=X\\cap Y$ then Theorem \\ref{th:yzintr} gives an intrinsic\ndescription of the space $\\mathscr{T}_{XY}$. 
For example:\n\n\\begin{corollary}\\label{co:txyintr}\nIf $X\\supset Y$ then $\\mathscr{T}_{XY}$ is the set of $T\\in\\mathscr{L}_{XY}$\nsatisfying $U_y^*T U_y=T$ if $y\\in Y$ and such that:\n$U_xT\\to T$ if $x\\to 0$ in $X$, \n$V^*_k T V_k\\to T$ if $k\\to 0$ in $X^*$ and \n$V_kT\\to T$ if $k\\to 0$ in $Y^\\perp$.\n\\end{corollary}\n\n\nIn the rest of this section we describe the structure of the objects\nintroduced in Section \\ref{s:grass} when the subgroups are\ncomplemented, e.g. if $\\mathcal{S}$ consists of finite dimensional vector\nspaces.\n\n\nWe say that \\emph{$Z$ is complemented in $X$} if $X=Z\\oplus E$ for\nsome closed subgroup $E$ of $X$. If $X,Z$ are equipped with Haar\nmeasures then $X\/Z$ is equipped with the quotient Haar measure and we\nhave $E\\simeq X\/Z$. If $Z$ is complemented in $X$ and $Y$ then\n$\\mathscr{C}_{XY}(Z)$ can be expressed as a tensor product.\n\n\\begin{proposition}\\label{pr:def3}\nIf $Z$ is complemented in $X$ and $Y$ then\n\\begin{equation}\\label{eq:ryzsum}\n\\mathscr{C}_{XY}(Z)\\simeq \\mathscr{T}_Z\\otimes \\mathscr{K}_{X\/Z,Y\/Z}.\n\\end{equation}\nIf $Y\\subset X$ then $\\mathscr{T}_{XY}\\simeq \\mathscr{T}_Y\\otimes L^2(X\/Y)$ tensor\nproduct of Hilbert $C^*$-modules.\n\\end{proposition} \n\\proof Note first that the tensor product in \\eqref{eq:ryzsum} is\ninterpreted as the exterior tensor product of the Hilbert\n$C^*$-modules $\\mathscr{T}_Z$ and $\\mathscr{K}_{X\/Z,Y\/Z}$. Let $X=Z\\oplus E$ and\n$Y=Z\\oplus F$ for some closed subgroups $E,F$. Then, as explained in\n\\S\\ref{ss:ha}, we may also view the tensor product as the norm closure\nin the space of continuous operators from $L^2(Y)\\simeq L^2(Z)\\otimes\nL^2(F)$ to $L^2(X)\\simeq L^2(Z)\\otimes L^2(E)$ of the linear space\ngenerated by the operators of the form $T\\otimes K$ with $T\\in\n\\mathscr{T}_Z$ and $K\\in \\mathscr{K}_{EF}$.\n\n\n\n\nWe now show that under the conditions of the proposition $X+Y\\simeq\nZ\\oplus E\\oplus F$ algebraically and topologically. The natural map\n$\\theta:Z\\oplus E\\oplus F\\to Z+E+F=X+Y$ is a continuous bijective\nmorphism, we have to prove that it is open. Since $X,Y$ are\ncompatible, the map \\eqref{eq:nat} is a continuous open\nsurjection. If we represent $X\\oplus Y\\simeq Z\\oplus Z\\oplus E\\oplus\nF$ then this map becomes $\\phi(a,b,c,d)=(a-b)+c+d$. Let\n$\\psi=\\xi\\oplus{\\rm id}_E\\oplus{\\rm id}_F$ where $\\xi:Z\\oplus Z\\to Z$\nis given by $\\xi(a,b)=a-b$. Then $\\xi$ is continuous surjective and\nopen because if $U$ is an open neighborhood of zero in $Z$ then\n$U-U$ is also an open neighborhood of zero. Thus\n$\\psi:(Z\\oplus Z)\\oplus E\\oplus F \\to Z\\oplus E\\oplus F$ is a\ncontinuous open surjection and $\\phi=\\theta\\circ\\psi$. So if $V$ is\nopen in $Z\\oplus E\\oplus F$ then there is an open \n$U\\subset Z\\oplus Z\\oplus E\\oplus F$ such that $V=\\psi(U)$ and then\n$\\theta(V)=\\theta\\circ\\psi(U)=\\phi(U)$ is open in $Z+E+F$.\n\nThus we may identify $L^2(Y)\\simeq L^2(Z)\\otimes L^2(F)$ and\n$L^2(X)\\simeq L^2(Z)\\otimes L^2(E)$ and we must describe the norm\nclosure of the set of operators $T_{XY}(\\varphi)\\psi(Q)$ with\n$\\varphi\\in\\cc_{\\mathrm{c}}(X+Y)$ (cf. the remark after \\eqref{eq:ryz} and the fact\nthat $X+Y$ is closed) and $\\psi\\in\\cc_{\\mathrm{o}}(Y\/Z)$. 
Since $X+Y\\simeq Z\\oplus\nE\\oplus F$ and $Y=Z\\oplus F$ it suffices to describe the clspan of the\noperators $T_{XY}(\\varphi)\\psi(Q)$ with\n$\\varphi=\\varphi_Z\\otimes\\varphi_E\\otimes\\varphi_F$ and\n$\\varphi_Z,\\varphi_E,\\varphi_F$ continuous functions with compact\nsupport on $Z,E,F$ respectively and $\\psi=1\\otimes\\eta$ where $1$ is\nthe function identically equal to $1$ on $Z$ and\n$\\eta\\in\\cc_{\\mathrm{o}}(F)$. Then, if $x=(a,c)\\in Z\\times E$ and $y=(b,d)\\in\nZ\\times F$, we get:\n$$\n(T_{XY}(\\varphi)\\psi(Q)u)(a,c)=\\int_{Z\\times F} \\varphi_Z(a-b)\n\\varphi_E(c)\n\\varphi_F(d) \\eta(d) u(b,d) \\text{d} b \\text{d} d. \n$$ But this is just\n$C(\\varphi_Z)\\otimes\\ket{\\varphi_E}\\bra{\\bar\\eta\\bar\\varphi_F}$ where\n$\\ket{\\varphi_E}\\bra{\\bar\\eta\\bar\\varphi_F}$ is a rank one operator\n$L^2(F)\\to L^2(E)$ and $C(\\varphi_Z)$ is the operator of convolution\nby $\\varphi_Z$ on $L^2(Z)$. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\nIf $X\\cap Y$ is complemented in $X$ and $Y$ then $\\mathscr{C}_{XY}$ can be\nexpressed (non canonically) as a tensor product.\n\n\\begin{proposition}\\label{pr:xytens}\nIf $X\\cap Y$ is complemented in $X$ and $Y$ then\n\\[\n\\mathscr{C}_{XY}\\simeq \\mathscr{C}_{X\\cap Y} \\otimes \\mathscr{K}_{X\/Y,Y\/X}.\n\\] \nIn particular,\nif $X\\supset Y$ then $\\mathscr{C}_{XY}\\simeq \\mathscr{C}_{Y} \\otimes \\mathcal{H}_{X\/Y}$.\n\\end{proposition}\n\\proof If $X=(X\\cap Y)\\oplus E$ and $Y=(X\\cap Y)\\oplus F$ then we have\nto show that $\\mathscr{C}_{XY}\\simeq \\mathscr{C}_{X\\cap Y} \\otimes \\mathscr{K}_{EF}$ where the\ntensor product may be interpreted either as the exterior tensor\nproduct of the Hilbert $C^*$-modules $\\mathscr{C}_{X\\cap Y}$ and $\\mathscr{K}_{EF}$ or\nas the norm closure in the space of continuous operators from\n$L^2(Y)\\simeq L^2(X\\cap Y)\\otimes L^2(F)$ to $L^2(X)\\simeq L^2(X\\cap\nY)\\otimes L^2(E)$ of the algebraic tensor product of $\\mathscr{C}_{X\\cap Y}$\nand $\\mathscr{K}_{EF}$. From Proposition \\ref{pr:def3} with $Z=X\\cap Y$ we\nget $\\mathscr{T}_{XY}\\simeq\\mathscr{T}_{X\\cap Y}\\otimes\\mathscr{K}_{EF}$. The relations\n\\eqref{eq:main} and the Definition \\ref{df:main} imply\n$\\mathscr{C}_{XY}=\\mathscr{T}_{XY}\\cdot\\mathcal{C}_Y^{X\\cap Y}$ and we clearly have\n\\[\n\\mathcal{C}_Y^{X\\cap Y}={\\textstyle\\sum_{Z\\subset X\\cap Y}^\\mathrm{c}} \\mathcal{C}_Y(Z)\n\\simeq {\\textstyle\\sum_{Z\\subset X\\cap Y}^\\mathrm{c}} \n\\mathcal{C}_{X\\cap Y}(Z)\\otimes \\cc_{\\mathrm{o}}(F)\n\\simeq \\mathcal{C}_{X\\cap Y}\\otimes \\cc_{\\mathrm{o}}(F).\n\\] \nThen we get\n\\[\n\\mathscr{C}_{XY}\\simeq \\mathscr{T}_{X\\cap Y}\\otimes\\mathscr{K}_{EF}\\cdot \n\\mathcal{C}_{X\\cap Y}\\otimes \\cc_{\\mathrm{o}}(F)=\n\\big(\\mathscr{T}_{X\\cap Y}\\cdot\\mathcal{C}_{X\\cap Y} \\big)\\otimes\n\\big(\\mathscr{K}_{EF}\\cdot\\cc_{\\mathrm{o}}(F)\\big)\n\\]\nand this is $\\mathscr{C}_{X\\cap Y} \\otimes \\mathscr{K}_{EF}$.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\nIf $Z$ is complemented in $X$ and $Y$ then Theorem \\ref{th:yzintr}\ncan be improved. We shall describe this improvement only in the\nEuclidean case which will be useful in our treatment of\nnonrelativistic Hamiltonians. Thus below we assume that $X,Y$ are\nsubspaces of an Euclidean space (see \\S\\ref{ss:mouint} for\nnotations). 
Note that $V_k$ is the operator of multiplication by the\nfunction $x\\mapsto\\mathrm{e}^{i\\braket{x}{k}}$ where the scalar product\n$\\braket{x}{k}$ is well defined for any $x,k$ in the ambient space\n$\\mathcal{X}$. \n\n\\begin{theorem}\\label{th:xyzeintr}\n$\\mathscr{C}_{XY}(Z)$ is the set of $T\\in\\mathscr{L}_{XY}$ satisfying:\n\\begin{enumerate} \n\\item[{\\rm(1)}] \n$U_z^*T U_z=T$ for $z\\in Z$ and\n$\\|V^*_z T V_z-T\\|\\to 0$ if $z\\to 0$ in $Z$,\n\\item[{\\rm(2)}] \n$\\|(U_x-1)T\\|\\to 0$ if $x\\to 0$ in $X$ and $\\|(V_k-1)T\\|\\to 0$ if\n$k\\to 0$ in $X\/Z$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{remark}\\label{re:xyzeintr}\nCondition 2 may be replaced by\n\\begin{compactenum}\n\\item[{\\rm(2$'$)}]\n$\\|T(U_y-1)\\|\\to 0$ if $y\\to 0$ in $Y$ and $\\|T(V_k-1)\\|\\to 0$ if\n$k\\to 0$ in $Y\/Z$.\n\\end{compactenum}\nThis will be clear from the next proof.\n\\end{remark}\n\\proof Let $\\mathcal{F}\\equiv\\mathcal{F}_Z$ be the Fourier transformation in the\nspace $Z$, this is a unitary operator in the space $L^2(Z)$ which\ninterchanges the position and momentum observables $Q_Z,P_Z$. We\ndenote also by $\\mathcal{F}$ the operators $\\mathcal{F}\\otimes1_{\\mathcal{H}_{X\/Z}}$ and\n$\\mathcal{F}\\otimes1_{\\mathcal{H}_{Y\/Z}}$ which are unitary operators in the spaces\n$\\mathcal{H}_X$ and $\\mathcal{H}_Y$ due to \\eqref{eq:xyzint}. If $S=\\mathcal{F} T\n\\mathcal{F}^{-1}$ then $S$ satisfies the following conditions:\n\\begin{enumerate} \n\\item[(i)]\n$V_z^*S V_z=S$ for $z\\in Z$, $\\|(V_z-1)S\\|\\to 0$ if $z\\to 0$ in $Z$,\nand $\\|U_z S U^*_z-S\\|\\to 0$ if $z\\to 0$ in $Z$;\n\\item[(ii)] \n$\\|(U_x-1)S\\|\\to 0$ and $\\|(V_x-1)S\\|\\to 0$ if $x\\to 0$ in $X\/Z$.\n\\end{enumerate}\nFor the proof, observe that the first part of condition (2) may be\nwritten as the conjunction of the two relations $\\|(U_z-1)T\\|\\to 0$\nif $z\\to 0$ in $Z$ and $\\|(U_x-1)T\\|\\to 0$ if $x\\to 0$ in $X\/Z$. We\nshall work in the representations\n\\begin{equation}\\label{eq:fiber}\n\\mathcal{H}_X= L^2(Z;\\mathcal{H}_{X\/Z}) \\quad \\text{and} \\quad\n\\mathcal{H}_Y= L^2(Z;\\mathcal{H}_{Y\/Z}).\n\\end{equation} \nFrom the relation $V_z^*S V_z=S$ for all $z\\in Z$ it follows that\nthere is a bounded weakly measurable function\n$S(\\cdot):Z\\to\\mathscr{L}_{X\/Z,Y\/Z}$ such that in the representations\n\\eqref{eq:fiber} $S$ is the operator of multiplication by\n$S(\\cdot)$. Then $\\|U_z S U^*_z-S\\|\\to 0$ if $z\\to 0$ in $Z$ means\nthat the function $S(\\cdot)$ is uniformly continuous. And clearly\n$\\|(V_z-1)S\\|\\to 0$ if $z\\to 0$ in $Z$ is equivalent to the fact\nthat $S(\\cdot)$ tends to zero at infinity. Thus we see that\n$S(\\cdot)\\in\\cc_{\\mathrm{o}}(Z;\\mathscr{L}_{X\/Z,Y\/Z})$.\nThe condition (ii) can now be written\n\\[\n\\sup_{z\\in Z}\\big(\\|(U_x-1)S(z)\\|+ \\|(V_x-1)S(z)\\|\\big)\\to 0\n\\quad \\text{if } x\\to 0 \\text{ in } X\/Z.\n\\]\nFrom the Riesz-Kolmogorov theorem it follows that each $S(z)$ is a\ncompact operator. Thus we have $S(\\cdot)\\in\\cc_{\\mathrm{o}}(Z;\\mathscr{K}_{X\/Z,Y\/Z})$\nwhich implies $T\\in\\mathscr{C}_{XY}(Z)$ by Proposition \\ref{pr:def3}. 
\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\\begin{remark}\\label{re:half}\nSince $S(\\cdot)$ is continuous and tends to zero at infinity, for\neach $\\varepsilon>0$ there are points $z_1,\\dots,z_n\\in Z$ and\ncomplex functions $\\varphi_1,\\dots,\\varphi_n\\in\\cc_{\\mathrm{c}}(Z)$ such that\n\\[\n\\|S(z)-{\\textstyle\\sum_k}\\varphi_k(z)S(z_k)\\|\\leq\\varepsilon\\quad\n\\forall z\\in Z.\n\\]\nThe operators $S(z_k)$ being compact, applying once again the\nRiesz-Kolmogorov theorem we get\n\\[\n\\sup_{z\\in Z}\\big(\\|S(z)(U_y-1)\\|+ \\|S(z)(V_y-1)\\|\\big)\\to 0 \n\\quad \\text{if } y\\to 0 \\text{ in } Y\/Z.\n\\]\nThis explains why the second parts of conditions (2) and (3) of\nTheorem \\ref{th:yzintr} is not needed. \n\\end{remark}\n\n\n\n\n\\section{Affiliated operators}\n\\label{s:af}\n\\protect\\setcounter{equation}{0}\n\nIn this section we give examples of self-adjoint operators\naffiliated to the algebra $\\mathscr{C}$ constructed in Section \\ref{s:grass}\nand then we give a formula for their essential spectrum. We refer to\n\\S\\ref{ss:grca} for terminology and basic results related to the\nnotion of affiliation that we use and to \\cite{ABG,GI1,DG3} for\ndetails.\n\nWe recall that a self-adjoint operator $H$ on a Hilbert space $\\mathcal{H}$\nis \\emph{strictly affiliated} to a $C^*$-algebra of operators $\\mathscr{A}$\non $\\mathcal{H}$ if $(H+i)^{-1}\\in\\mathscr{A}$ (then $\\varphi(H)\\in\\mathscr{A}$ for all\n$\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R})$) and if $\\mathscr{A}$ is the clspan of the elements\n$\\varphi(H)A$ with $\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R})$ and $A\\in\\mathscr{A}$. This class\nof operators has the advantage that each time $\\mathscr{A}$ is\nnon-degenerately represented on a Hilbert space $\\mathcal{H}'$ with the help\nof a morphism $\\mathscr{P}:\\mathscr{A}\\to L(\\mathcal{H}')$, the observable $\\mathscr{P} H$ is\nrepresented by a usual (densely defined) self-adjoint operator on\n$\\mathcal{H}'$.\n\nThe diagonal algebra\n\\begin{equation}\\label{eq:d}\n\\mathscr{T}_{\\text{d}}\\equiv(\\mathscr{T}_{\\mathcal{S}})_\\text{d}=\\oplus_{X\\in\\mathcal{S}} \\mathscr{T}_X\n\\end{equation}\nhas a simple physical interpretation: this is the $C^*$-algebra\ngenerated by the kinetic energy operators. Since\n$\\mathscr{C}_{XX}=\\mathscr{C}_X\\supset\\mathscr{C}_{X}(X)= \\mathscr{T}_X$ we see that $\\mathscr{T}_\\text{d}$\nis a $C^*$-subalgebra of $\\mathscr{C}$. From \\eqref{eq:nxyz},\n\\eqref{eq:txy1}, \\eqref{eq:txy2} and the Cohen-Hewitt theorem we get\n\\begin{equation}\\label{eq:ed}\n\\mathscr{C}(Z)\\mathscr{T}_\\text{d}=\\mathscr{T}_\\text{d}\\mathscr{C}(Z)=\\mathscr{C}(Z)\\quad \\forall Z\\in\\mathcal{S} \\quad \n\\text{and}\\hspace{2mm} \\mathscr{C} \\mathscr{T}_\\text{d}=\\mathscr{T}_\\text{d}\\mathscr{C}=\\mathscr{C}.\n\\end{equation} \nIn other terms, $\\mathscr{T}_\\text{d}$ acts\nnon-degenerately\\symbolfootnote[2]{\\ Note that if $\\mathcal{S}$ has a\n largest element $\\mathcal{X}$ then the algebra $\\mathscr{C}(\\mathcal{X})$ acts on each\n $\\mathscr{C}(Z)$ but this action is degenerate.} \non each $\\mathscr{C}(Z)$ and on $\\mathscr{C}$. It follows that a self-adjoint\noperator strictly affiliated to $\\mathscr{T}_\\text{d}$ is also strictly\naffiliated to $\\mathscr{C}$. \n\n\nFor each $X\\in\\mathcal{S}$ let $h_X:X^*\\to\\mathbb{R}$ be a continuous function\nsuch that $|h_X(k)|\\to\\infty$ if $k\\to\\infty$ in $X^*$. 
Then the\nself-adjoint operator $K_X\\equiv h_X(P)$ on $\\mathcal{H}_X$ is strictly\naffiliated to $\\mathscr{T}_X$ and the norm of $(K_X+i)^{-1}$ is equal to\n$\\sup_k(h^2_X(k)+1)^{-1\/2}$. Let $K\\equiv\\bigoplus_{X\\in\\mathcal{S}}K_X$;\nthis is a self-adjoint operator on $\\mathcal{H}$. Clearly $K$ is affiliated to \n$\\mathscr{T}_\\text{d}$ if and only if \n\\begin{equation}\\label{eq:kin}\n\\lim_{X\\to\\infty}\\sup\\nolimits_{k}(h^2_X(k)+1)^{-1\/2}=0\n\\end{equation}\nand then $K$ is strictly affiliated to $\\mathscr{T}_\\text{d}$ (the set $\\mathcal{S}$ is\nequipped with the discrete topology). If the functions $h_X$ are\npositive this means that $\\min h_X$ tends to infinity when\n$X\\to\\infty$. One could avoid such a condition by considering an\nalgebra larger than $\\mathscr{C}$ chosen so as to contain \n$\\prod_{X\\in\\mathcal{S}} \\mathscr{T}_X$, but we shall not develop this here.\n\nNow let $H=K+I$ with $I\\in\\mathscr{C}$ (or in the multiplier algebra) a\nsymmetric element. Then\n\\begin{equation}\\label{eq:res}\n(\\lambda-H)^{-1}=(\\lambda-K)^{-1}\\left(1-I(\\lambda-K)^{-1}\\right)^{-1}\n\\end{equation}\nif $\\lambda\\notin\\mathrm{Sp}(H)\\cup\\mathrm{Sp}(K)$. Thus $H$ is strictly affiliated\nto $\\mathscr{C}$. We interpret $H$ as the Hamiltonian of our system of\nparticles when the kinetic energy is $K$ and the interactions\nbetween particles are described by $I$. Even in the simple case\n$I\\in\\mathscr{C}$ these interactions are of a very general nature, being a\nmixture of $N$-body and quantum field type interactions (which\ninvolve creation and annihilation operators, so the number of\nparticles is not preserved).\n\nWe shall now use Theorem \\ref{th:gas} in order to compute the\nessential spectrum of an operator like $H$. The case of unbounded\ninteractions will be treated later on. Let $\\mathscr{C}_{\\geq E}$ be the\n$C^*$-subalgebra of $\\mathscr{C}$ determined by $E\\in\\mathcal{S}$ according to the\nrules of $\\S\\ref{ss:grca}$. More explicitly, we set\n\\begin{equation}\\label{eq:geqe}\n\\mathscr{C}_{\\geq E}={\\textstyle\\sum^\\mathrm{c}_{F\\supset E}\\mathscr{C}(F)}\\cong \n\\big({\\textstyle\\sum^\\mathrm{c}_{F\\supset E}}\n\\mathscr{C}_{XY}(F)\\big)_{X\\cap Y\\supset E}\n\\end{equation}\nand note that $\\mathscr{C}_{\\geq E}$ lives on the subspace $\\mathcal{H}_{\\geq\n E}=\\bigoplus_{X\\supset E}\\mathcal{H}_X$ of $\\mathcal{H}$. Since in the second sum\nfrom \\eqref{eq:geqe} the group $F$ is such that $E\\subset F\\subset\nX\\cap Y$ the algebra $\\mathscr{C}_{\\geq E}$ is strictly included in the\nalgebra $\\mathscr{C}_\\mathcal{T}$ obtained by taking $\\mathcal{T}=\\{F\\in\\mathcal{S} \\mid F\\supset\nE\\}$ in \\eqref{eq:t}. \n\nLet $\\mathscr{P}_{\\geq E}$ be the canonical idempotent morphism of $\\mathscr{C}$\nonto $\\mathscr{C}_{\\geq E}$ introduced in \\S\\ref{ss:grca}. 
We consider\nthe self-adjoint operator on the Hilbert space $\\mathcal{H}_{\\geq E}$\ndefined as follows:\n\\begin{equation}\\label{eq:post}\n H_{\\geq E}=K_{\\geq E}+I_{\\geq E} \\quad \\text{where} \\quad \nK_{\\geq E}= \\oplus_{X\\geq E} K_X \\hspace{2mm} \\text{and} \n\\hspace{2mm} I_{\\geq E}=\\mathscr{P}_{\\geq E}I.\n\\end{equation}\nThen $H_{\\geq E}$ is strictly affiliated to $\\mathscr{C}_{\\geq E}$ and it\nfollows easily from \\eqref{eq:res} that\n\\begin{equation}\\label{eq:pre}\n\\mathscr{P}_{\\geq E}\\varphi(H)=\\varphi(H_{\\geq E}) \\quad \\forall\n\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R}).\n\\end{equation} \nNow let us assume that the group $O=\\{0\\}$ belongs to $\\mathcal{S}$. Then we\nhave\n\\begin{equation}\\label{eq:O}\n\\mathscr{C}(O)=K(\\mathcal{H}).\n\\end{equation} \nIndeed, from \\eqref{eq:nxyz} we get\n$\\mathscr{C}_{XY}(O)=\\mathscr{T}_{XY}\\cdot\\cc_{\\mathrm{o}}(Y)=\\mathscr{K}_{XY}$ which implies the\npreceding relation. If we also assume that $\\mathcal{S}$ is atomic and\ndenote by $\\mathcal{P}(\\mathcal{S})$ its set of atoms, then from Theorem \\ref{th:ga} we\nget a canonical embedding\n\\begin{equation}\\label{eq:quotc}\n\\mathscr{C}\/K(\\mathcal{H})\\subset{\\textstyle\\prod\\nolimits}_{E\\in\\mathcal{P}(\\mathcal{S})}\\mathscr{C}_{\\geq E}\n\\end{equation} \ndefined by the morphism $\\mathscr{P}\\equiv(\\mathscr{P}_{\\geq E})_{E\\in\\mathcal{P}(\\mathcal{S})}$.\nThen from \\eqref{eq:es2} we obtain:\n\\begin{equation}\\label{eq:ess1}\n\\mathrm{Sp_{ess}}(H)=\\overline{\\textstyle{\\bigcup}}_{E\\in\\mathcal{P}(\\mathcal{S})}\\mathrm{Sp}(H_{\\geq E}).\n\\end{equation}\nOur next purpose is to prove a similar formula for a certain class\nof unbounded interactions $I$.\n\nLet $\\mathcal{G}\\equiv\\mathcal{G}_\\mathcal{S}=D(|K|^{1\/2})$ be the form domain of $K$\nequipped with the graph topology. Then $\\mathcal{G}\\subset\\mathcal{H}$ continuously\nand densely, so after the Riesz identification of $\\mathcal{H}$ with its\nadjoint space $\\mathcal{H}^*$ we get the usual scale\n$\\mathcal{G}\\subset\\mathcal{H}\\subset\\mathcal{G}^*$ with continuous and dense embeddings.\nLet us denote\n\\begin{equation}\\label{eq:jap}\n\\jap{K} =|K+i|=\\sqrt{K^2+1}.\n\\end{equation}\nThen $\\jap{K}^{1\/2}$ is a self-adjoint operator on $\\mathcal{H}$ with domain\n$\\mathcal{G}$ and $\\jap{K}$ induces an isomorphism $\\mathcal{G}\\to\\mathcal{G}^*$. The\nfollowing result is a straightforward consequence of Theorem 2.8 and\nLemma 2.9 from \\cite{DG3}. \n\n\\begin{theorem}\\label{th:af}\nLet $I:\\mathcal{G}\\to\\mathcal{G}^*$ be a continuous symmetric operator and let us\nassume that there are real numbers $\\mu,a$ with $0<\\mu<1$ such that\none of the following conditions is satisfied:\n\\begin{compactenum}[(i)]\n\\item\n$\\pm I \\leq\\mu|K+ia|,$\n\\item\n$K$ is bounded from below and $ I \\geq -\\mu|K+ia|.$\n\\end{compactenum}\nLet $H=K+I$ be the form sum of $K$ and $I$, so $H$ has as domain the\nset of $u\\in\\mathcal{G}$ such that $Ku+Iu\\in\\mathcal{H}$ and acts as $Hu=Ku+Iu$. \nThen $H$ is a self-adjoint operator on $\\mathcal{H}$. If there is\n$\\alpha>1\/2$ such that \n$\\langle K\\rangle^{-1\/2}I\\langle K\\rangle^{-\\alpha}\\in\\mathscr{C}$ then $H$\nis strictly affiliated to $\\mathscr{C}$. 
\nIf $O\\in\\mathcal{S}$ and the semilattice $\\mathcal{S}$ is atomic then\n\\begin{equation}\\label{eq:ess2}\n\\mathrm{Sp_{ess}}(H)=\\overline{\\textstyle{\\bigcup}}_{E\\in\\mathcal{P}(\\mathcal{S})}\\mathrm{Sp}(H_{\\geq E}).\n\\end{equation}\n\\end{theorem}\n\nThe last assertion of the theorem follows immediately from Theorem\n\\ref{th:gas} and is a general version of the HVZ theorem. In order\nto have a more explicit description of the observables $H_{\\geq\n E}\\equiv\\mathscr{P}_{\\geq E}H$ we now prove an analog of Theorem 3.5 from\n\\cite{DG3}. We cannot use that theorem in our context for three\nreasons: first, we did not suppose that $\\mathcal{S}$ has a maximal element;\nsecond, even if $\\mathcal{S}$ has a maximal element $\\mathcal{X}$ the action of the\ncorresponding algebra $\\mathscr{C}(\\mathcal{X})$ on the algebras $\\mathscr{C}(E)$ is\ndegenerate; and finally, our ``free'' operator $K$ is not affiliated\nto $\\mathscr{C}(\\mathcal{X})$.\n\n\\begin{theorem}\\label{th:afi}\nFor each $E\\in\\mathcal{S}$ let $I(E)\\in L(\\mathcal{G},\\mathcal{G}^*)$ be a symmetric\noperator such that:\n\\begin{compactenum}[(i)]\n\\item\n$\\jap{K}^{-1\/2}I(E)\\jap{K}^{-\\alpha}\\in\\mathscr{C}(E)$ for some\n$\\alpha> 1\/2$ independent of $E$,\n\\item\nthere are real positive numbers $\\mu_E,a$ such that either $\\pm\nI(E) \\leq\\mu_E|K+ia|$ for all $E$ or $K$ is bounded from below and\n$ I(E) \\geq -\\mu_E|K+ia|$ for all $E$,\n\\item\nwe have $\\sum_E\\mu_E\\equiv\\mu<1$ and the series $\\sum_E I(E)\\equiv I$\nis norm summable in $L(\\mathcal{G},\\mathcal{G}^*)$.\n\\end{compactenum}\nLet us set $I_{\\geq E}=\\sum_{F\\geq E}I(F)$. Define the self-adjoint\noperator $H=K+I$ on $\\mathcal{H}$ as in Theorem \\ref{th:af} and define\nsimilarly the self-adjoint operator $H_{\\geq E}=K_{\\geq E}+I_{\\geq\n E}$ on $\\mathcal{H}_{\\geq E}$. Then the operator $H$ is strictly\naffiliated to $\\mathscr{C}$, the operator $H_{\\geq E}$ is strictly\naffiliated to $\\mathscr{C}_{\\geq E}$, and we have $\\mathscr{P}_{\\geq E}H=H_{\\geq E}$.\n\\end{theorem}\n\\proof We shall consider only the case when $\\pm I(E)\n\\leq\\mu_E|K+ia|$ for all $E$. The more singular situation when $K$\nis bounded from below but there is no restriction on the positive\npart of the operators $I(E)$ (besides summability) is more difficult\nbut the main idea has been explained in \\cite{DG3}.\n\nWe first make some comments to clarify the definition of the\noperators $H$ and $H_{\\geq E}$. Observe that our assumptions imply\n$\\pm I\\leq\\mu|K+ia|$ hence if we set\n\\[\n\\Lambda\\equiv|K+ia|^{-1\/2}=(K^2+a^2)^{-1\/4}\\in \\mathscr{T}_\\text{d}\n\\]\nthen we obtain\n\\[\n\\pm\\braket{u}{Iu}\\leq\\mu\\braket{u}{|K+ia|u}=\n\\mu\\| |K+ia|^{1\/2}u\\|^2 = \\mu\\| \\Lambda^{-1} u\\|^2\n\\] \nwhich is equivalent to $\\pm\\Lambda I\\Lambda\\leq\\mu$ or $\\|\\Lambda\nI\\Lambda\\|\\leq\\mu$. In particular we may use Theorem \\ref{th:af}\nin order to define the self-adjoint operator $H$. Moreover, we have\n\\[\n\\jap{K}^{-1\/2}I\\jap{K}^{-\\alpha}={\\textstyle\\sum_E}\n\\jap{K}^{-1\/2}I(E)\\jap{K}^{-\\alpha}\\in\\mathscr{C}\n\\]\nbecause the series is norm summable in $L(\\mathcal{H})$. Thus $H$ is\nstrictly affiliated to $\\mathscr{C}$. \n\nIn order to define $H_{\\geq E}$ we first make a remark on \n$I_{\\geq E}$. 
If we set $\\mathcal{G}_X=D(|K_X|^{1\/2})$ and if we equip\n$\\mathcal{G}$ and $\\mathcal{G}_X$ with the norms \\label{p:formd}\n\\[\n\\|u\\|_\\mathcal{G}=\\|\\jap{K}^{1\/2}u\\|_\\mathcal{H} \\quad \\text{and} \\quad\n\\|u\\|_{\\mathcal{G}_X}=\\|\\jap{K_X}^{1\/2}u\\|_{\\mathcal{H}_X}\n\\]\nrespectively then clearly $\\mathcal{G}=\\oplus_X\\mathcal{G}_X$ and\n$\\mathcal{G}^*=\\oplus_X\\mathcal{G}^*_X $ where the sums are Hilbertian direct sums\nand $\\mathcal{G}^*$ and $\\mathcal{G}_X^*$ are equipped with the dual norms. Then\neach $I(F)$ may be represented as a matrix\n$I(F)=(I_{XY}(F))_{X,Y\\in\\mathcal{S}}$ of continuous operators\n$I_{XY}(F):\\mathcal{G}_Y\\to\\mathcal{G}^*_X$. Clearly\n\\[\n\\jap{K}^{-1\/2}I(F)\\jap{K}^{-\\alpha}=\n\\left(\\jap{K_X}^{-1\/2}I_{XY}(F)\\jap{K_Y}^{-\\alpha}\\right)_{X,Y\\in\\mathcal{S}}\n\\] \nand since by assumption (i) this belongs to $\\mathscr{C}(F)$ we see that\n$I_{XY}(F)=0$ if $X\\not\\supset F$ or $Y\\not\\supset F$. Now fix $E$\nand let $F\\supset E$. Then, when viewed as a sesquilinear form,\n$I(F)$ is supported by the subspace $\\mathcal{H}_{\\geq E}$ and has domain\n$\\mathcal{G}_{\\geq E}= D(|K_{\\geq E}|^{1\/2})$. It follows that $I_{\\geq E}$ \nis a sesquilinear form with domain $\\mathcal{G}_{\\geq E}$ supported by the\nsubspace $\\mathcal{H}_{\\geq E}$ and may be thought of as an element of\n$L(\\mathcal{G}_{\\geq E},\\mathcal{G}^*_{\\geq E})$ such that\n$\\pm I_{\\geq E}\\leq \\mu |K_{\\geq E}+ia|$ because \n$\\sum_{F\\supset E}\\mu_F\\leq \\mu$. To conclude, we may now define \n$H_{\\geq E}=K_{\\geq E}+I_{\\geq E}$ exactly as in the case of $H$ and\nget a self-adjoint operator on $\\mathcal{H}_{\\geq E}$ strictly affiliated to\n$\\mathscr{C}_{\\geq E}$. Note that this argument also gives\n\\begin{equation}\\label{eq:ek}\n\\jap{K}^{-1\/2} I(F) \\jap{K}^{-1\/2}=\n\\jap{K_{\\geq E}}^{-1\/2} I(F) \\jap{K_{\\geq E}}^{-1\/2}.\n\\end{equation}\nIt remains to be shown that $\\mathscr{P}_{\\geq E}H=H_{\\geq E}$. If we set\n$R\\equiv(ia-H)^{-1}$ and $R_{\\geq E}\\equiv(ia-H_{\\geq E})^{-1}$ then\nthis is equivalent to $\\mathscr{P}_{\\geq E}R=R_{\\geq E}$. Let us set\n\\[\nU=|ia-K|(ia-K)^{-1}=\\Lambda^{-2}(ia-K)^{-1}, \\quad\nJ=\\Lambda I\\Lambda U.\n\\]\nThen $U$ is a unitary operator and $\\|J\\|<1$, so we get a norm\nconvergent series expansion\n\\[\nR=(ia-K-I)^{-1}=\n\\Lambda U(1-\\Lambda I\\Lambda U)^{-1}\\Lambda =\n{\\textstyle\\sum_{n\\geq0}}\\Lambda U J^n\\Lambda\n\\]\nwhich implies\n\\[\n\\mathscr{P}_{\\geq E} (R)=\n{\\textstyle\\sum_{n\\geq0}}\n\\mathscr{P}_{\\geq E}\\big(\\Lambda U J^{n}\\Lambda\\big)\n\\]\nthe series being norm convergent. Thus it suffices to prove that for\neach $n\\geq0$ \n\\begin{equation}\\label{eq:ekk}\n\\mathscr{P}_{\\geq E}\\big(\\Lambda U J^{n}\\Lambda\\big)=\n\\Lambda_{\\geq E} U_{\\geq E}(J_{\\geq E})^{n}\\Lambda_{\\geq E}\n\\end{equation}\nwhere $J_{\\geq E}=\\Lambda_{\\geq E} I_{\\geq E}\\Lambda_{\\geq E}\nU_{\\geq E}$. Here $\\Lambda_{\\geq E}$ and $U_{\\geq E}$ are\nassociated to $K_{\\geq E}$ in the same way $\\Lambda$ and $U$ are\nassociated to $K$. For $n=0$ this is obvious because \n$\\mathscr{P}_{\\geq E}K=K_{\\geq E}$. 
If $n=1$ this is easy because\n\\begin{align}\\label{eq:e}\n\\Lambda U J\\Lambda &= \\Lambda U \\Lambda I \\Lambda U\\Lambda=\n(ia-K)^{-1} I (ia-K)^{-1} \\\\\n&=\n[(ia-K)^{-1}\\jap{K}^{1\/2}] \\cdot \n[\\jap{K}^{-1\/2} I \\jap{K}^{-\\alpha}] \\cdot\n[\\jap{K}^{\\alpha} (ia-K)^{-1}] \\nonumber\n\\end{align}\nand it suffices to note that \n$\\mathscr{P}_{\\geq E}(\\jap{K}^{-1\/2} I(F) \\jap{K}^{-\\alpha})=0$ if\n$F\\not\\supset E$ and to use \\eqref{eq:ek} for $F\\supset E$. \n\nTo treat the general case we make some preliminary remarks. \nIf $J(F)=\\Lambda I(F) \\Lambda U$ then $J=\\sum_F J(F)$ where the\nconvergence holds in norm on $\\mathcal{H}$ because of the condition (iii). \nThen we have a norm convergent expansion\n\\[\n\\Lambda U J^n \\Lambda ={\\textstyle\\sum_{F_1,\\dots,F_n\\in\\mathcal{S}}}\n\\Lambda U J(F_1)\\dots J(F_n) \\Lambda.\n\\]\nAssume that we have shown $\\Lambda U J(F_1)\\dots\nJ(F_n)\\Lambda\\in\\mathscr{C}(F_1\\cap\\dots\\cap F_n)$. Then we get\n\\begin{equation}\\label{eq:ekkk}\n\\mathscr{P}_{\\geq E}(\\Lambda U J^n \\Lambda)=\n{\\textstyle\\sum_{F_1\\geq E,\\dots,F_n\\geq E}}\n\\Lambda U J(F_1)\\dots J(F_n) \\Lambda\n\\end{equation}\nbecause if one $F_k$ does not contain $E$ then the intersection\n$F_1\\cap\\dots\\cap F_n$ does not contain $E$ hence $\\mathscr{P}_{\\geq E}$\napplied to the corresponding term gives $0$. Because of\n\\eqref{eq:ek} we have $J(F)=\\Lambda_{\\geq E} I(F) \\Lambda_{\\geq E}\nU_{\\geq E}$ if $F\\supset E$ and we may replace everywhere in the\nright hand side of \\eqref{eq:ekkk} $\\Lambda$ and $U$ by\n$\\Lambda_{\\geq E}$ and $U_{\\geq E}$. This clearly proves\n\\eqref{eq:ekk}. \n\nNow we prove the stronger fact \n$\\Lambda U J(F_1)\\dots J(F_n)\\in\\mathscr{C}(F_1\\cap\\dots\\cap F_n)$. \nIf $n=1$ this follows from a slight modification of\n\\eqref{eq:e}: the last factor on the right hand side of \\eqref{eq:e}\nis missing but is not needed. Assume that the assertion holds for\nsome $n$. Since $K$ is strictly affiliated to $\\mathscr{T}_\\text{d}$ and\n$\\mathscr{T}_\\text{d}$ acts non-degenerately on each $\\mathscr{C}(F)$ we may use the\nCohen-Hewitt theorem to deduce that there is $\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R})$\nsuch that\n$\\Lambda U J(F_1)\\dots J(F_n)=T\\varphi(K)$ \nfor some $T\\in\\mathscr{C}(F_1\\cap\\dots\\cap F_n)$. \nThen\n\\[\n\\Lambda U J(F_1)\\dots J(F_n)J(F_{n+1})=T\\varphi(K)J(F_{n+1})\n\\]\nhence it suffices to prove that $\\varphi(K)J(F)\\in\\mathscr{C}(F)$ for any\n$F\\in\\mathcal{S}$ and any $\\varphi\\in\\cc_{\\mathrm{o}}(\\mathbb{R})$. But the set of $\\varphi$\nwhich have this property is a closed subspace of $\\cc_{\\mathrm{o}}(\\mathbb{R})$ which\nclearly contains the functions $\\varphi(\\lambda)=(\\lambda -z)^{-1}$\nif $z$ is not real hence is equal to $\\cc_{\\mathrm{o}}(\\mathbb{R})$. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\\begin{remark}\\label{re:alpha}\nChoosing $\\alpha>1\/2$ allows one to consider perturbations of $K$\nwhich are of the same order as $K$, e.g. in the $N$-body situations\none may add to the Laplacian $\\Delta$ on operator like $\\nabla^*\nM\\nabla$ where the function $M$ is bounded measurable and has the\nstructure of an $N$-body type potential, cf. \\cite{DG3,DerI}. \n\\end{remark}\n\n\nThe only assumption of Theorem \\ref{th:afi} which is really relevant\nis $\\jap{K}^{-1\/2}I(E)\\jap{K}^{-\\alpha}\\in\\mathscr{C}(E)$. We shall give\nbelow more explicit conditions which imply it. 
If we change\nnotation $E\\to Z$ and use the formalism introduced in the proof of\nTheorem \\ref{th:afi} we have\n\\begin{equation}\\label{eq:xye}\nI(Z)=(I_{XY}(Z))_{X,Y\\in\\mathcal{S}} \\quad \\text{with} \\quad\nI_{XY}(Z):\\mathcal{G}_Y\\to\\mathcal{G}^*_X \\text{ continuous}.\n\\end{equation}\nWe are interested in conditions on $I_{XY}(Z)$ which imply\n\\begin{equation}\\label{eq:xye1}\n\\jap{K_X}^{-1\/2}I_{XY}(Z)\\jap{K_X}^{-\\alpha} \\in \\mathscr{C}_{XY}(Z).\n\\end{equation}\nFor this we shall use Theorem \\ref{th:yzintr} which gives a simple\nintrinsic characterization of $\\mathscr{C}_{XY}(Z)$.\n\nThe construction which follows is interesting only if $X$ is not a\ndiscrete group, otherwise $X^*$ is compact and many conditions are\ntrivially satisfied. We shall use weights only in order to avoid\nimposing on the functions $h_X$ regularity conditions stronger than\ncontinuity.\n\nA positive function $w$ on $X^*$ is a \\emph{weight} if\n$\\lim_{k\\to\\infty} w(k)=\\infty$ and $w(k+p)\\leq\\omega(k)w(p)$ for\nsome function $\\omega$ on $X^*$ and all $k,p$. We say that $w$ is\n\\emph{regular} if one may choose $\\omega$ such that\n$\\lim_{k\\to0}\\omega(k)=1$. The example one should have in mind when\n$X$ is an Euclidean space is $w(k)=\\jap{k}^s$ for some $s>0$. Note\nthat we have $\\omega(-k)^{-1}\\leq w(k+p)w(p)^{-1} \\leq \\omega(k)$\nhence if $w$ is a regular weight then \\label{p:regw}\n\\begin{equation}\\label{eq:regw}\n\\theta(k)\\equiv \\sup_{p\\in X^*}\\frac{|w(k+p)-w(p)|}{w(p)} \n\\Longrightarrow\n\\lim_{k\\to0}\\theta(k)=0.\n\\end{equation}\nIt is clear that if $w$ is a regular weight and $\\sigma\\geq 0$ is a\nreal number then $w^\\sigma$ is also a regular weight.\n\nWe say that two functions $f,g$ defined on a neighborhood of \ninfinity of $X^*$ are \\emph{equivalent} and we write $f\\sim g$ if\nthere are numbers $a,b$ such that $a|f(k)|\\leq|g(k)|\\leq\nb|f(k)|$. Then $|f|^\\sigma\\sim|g|^\\sigma$ for all $\\sigma>0$.\n\n\nWe denote $\\mathcal{G}^\\sigma_X=D(|K_X|^{\\sigma\/2})$ and\n$\\mathcal{G}^{-\\sigma}_X\\equiv(\\mathcal{G}^{\\sigma}_X)^*$ with $\\sigma \\geq 1$. In\nparticular $\\mathcal{G}^1_X=\\mathcal{G}_X$ and $\\mathcal{G}^{-1}_X=\\mathcal{G}^*_X$.\n\n\\begin{proposition}\\label{pr:tex}\n Assume that $h_X,h_Y$ are equivalent to regular weights. Let\n $Z\\subset X\\cap Y$ and let $I_{XY}(Z)$ be a continuous map\n $\\mathcal{G}_Y\\to\\mathcal{G}^*_X$ such that\n\\begin{enumerate}\n\\item\n$U_z I_{XY}(Z)=I_{XY}(Z) U_z$ if $z\\in Z$ and\n$V^*_k I_{XY}(Z) V_k\\to I_{XY}(Z)$ if $k\\to 0$ in $(X+Y)^*$,\n\\item \n$(U_x-1)I_{XY}(Z)\\to 0$ if $x\\to 0$ in $X$ and\n$(V_k-1) I_{XY}(Z)\\to 0$ if $k\\to 0$ in $(X\/Z)^*$,\n\\end{enumerate}\nwhere the limits hold in norm in $L(\\mathcal{G}^{\\sigma}_Y,\\mathcal{G}^{-1}_X)$ for\nsome $\\sigma\\geq1$. Then \\eqref{eq:xye1} holds with\n$\\alpha=\\sigma\/2$.\n\\end{proposition}\n\\proof We begin with some general comments on weights. Let $w$ be a\nregular weight and let $\\mathcal{G}_X$ be the domain of the operator $w(P)$\nin $\\mathcal{H}_X$ equipped with the norm $\\|w(P)u\\|$. Then $\\mathcal{G}_X$ is a\nHilbert space and if $\\mathcal{G}^*_X$ is its adjoint space then we get a\nscale of Hilbert spaces $\\mathcal{G}_X\\subset\\mathcal{H}_X\\subset\\mathcal{G}^*_X$ with\ncontinuous and dense embeddings. Since $U_x$ commutes with $w(P)$ it\nis clear that $\\{U_x\\}_{x\\in X}$ induces strongly continuous unitary\nrepresentation of $X$ on $\\mathcal{G}_X$ and $\\mathcal{G}^*_X$. 
Then\n\\[\n\\|V_k u\\|_{\\mathcal{G}_X}=\\|w(k+P)u\\|\\leq\\omega(k)\\|u\\|_{\\mathcal{G}_X}\n\\]\nfrom which it follows that $\\{V_k\\}_{k\\in X^*}$ induces by\nrestriction and extension strongly continuous representations of\n$X^*$ in $\\mathcal{G}_X$ and $\\mathcal{G}^*_X$. Moreover, as operators on $\\mathcal{H}_X$\nwe have \\label{p:RK}\n\\begin{align} \n|V_k^*w(P)^{-1}V_k-w(P)^{-1}| \n&=|w(k+P)^{-1}-w(P)^{-1}| \n= |w(k+P)^{-1}(w(P)-w(k+P))w(P)^{-1}| \\nonumber \\\\\n& \\leq \\omega(-k)|(w(P)-w(k+P))w(P)^{-2}|\n\\leq \\omega(-k)\\theta(k) w(P)^{-1}. \\label{eq:refw} \n\\end{align}\nNow let $w_X,w_Y$ be regular weights equivalent to\n$|h_X|^{1\/2},|h_Y|^{1\/2}$ and let us set $S=I_{XY}(Z)$. Then\n\\[\n\\jap{K_X}^{-1\/2}S\\jap{K_Y}^{-\\alpha}=\n\\jap{K_X}^{-1\/2}w_X(P)\\cdot \nw_X(P)^{-1}S w_Y(P)^{-2\\alpha} \\cdot\nw_Y(P)^{2\\alpha}\\jap{K_Y}^{-\\alpha}\n\\]\nand $\\jap{h_X}^{-1\/2}w_X$, $\\jap{h_Y}^{-\\alpha}w_Y^{2\\alpha}$ and\ntheir inverses are bounded continuous functions on $X^*,Y^*$. Since\n$\\mathscr{C}_{XY}(Z)$ is a non-degenerate left $\\mathscr{T}_X$-module and right\n$\\mathscr{T}_Y$-module we may use the Cohen-Hewitt theorem to deduce that\n\\eqref{eq:xye1} is equivalent to\n\\begin{equation}\\label{eq:xye2}\nw_X(P)^{-1}I_{XY}(Z) w_Y(P)^{-\\sigma} \\in \\mathscr{C}_{XY}(Z)\n\\end{equation}\nwhere $\\sigma=2\\alpha$. To simplify notations we set $W_X=w_X(P),\nW_Y=w^\\sigma_Y(P)$. We also omit the index $X$ or $Y$ for the\noperators $W_X,W_Y$ since their value is obvious from the\ncontext. In order to show $W^{-1}SW^{-1}\\in \\mathscr{C}_{XY}(Z)$ we check\nthe conditions of Theorem \\ref{th:yzintr} with $T=W^{-1}SW^{-1}$.\nThe first part of condition (2) of the theorem is verified by the\nhypothesis (2) of the present proposition. We may assume $\\sigma>1$\nand then the second part of condition (2) of the theorem\nfollows from\n\\[\n\\|T(U_y-1)\\|\\leq \\|W^{-1}I_{XY}(Z)w_Y^{-1}(P)\\|\n\\|(U_y-1)w_Y^{1-\\sigma}(P)\\| \\to 0 \\quad \\text{if } y\\to0.\n\\]\nTo check the second part of condition (1) of the theorem set\n$W_k=V_k^*WV_k$ and $S_k=V_k^* SV_k$ and write\n\\begin{align*}\nV_k^*TV_k-T \n&= W_k^{-1} S_k W_k^{-1}-W^{-1}SW^{-1}\\\\\n&= (W_k^{-1}-W^{-1})S_kW_k^{-1} + W^{-1}(S_k-S)W_k^{-1}\n+W^{-1}S(W_k^{-1} -W^{-1}).\n\\end{align*}\nNow if we use \\eqref{eq:refw} and set $\\xi(k)=\\omega(-k)\\theta(k)$\nwe get\n\\begin{align*}\n\\|V_k^*TV_k-T\\| &\\leq \n\\xi(k)\\|W^{-1}S_kW_k^{-1}\\| + \\|W^{-1}(S_k-S)W^{-1}\\|\\|W W_k^{-1}\\|\n+\\xi(k)\\|W^{-1}SW^{-1}\\|\n\\end{align*}\nwhich clearly tends to zero if $k\\to0$. Condition\n(3) of Theorem \\ref{th:yzintr} follows by a similar argument.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\nNow let $H$ be defined according to the algorithm of \\S\\ref{ss:ex}.\nThen condition (i) of Theorem \\ref{th:afi} will be satisfied for all\n$\\alpha >1\/2$. Indeed, from Proposition \\ref{pr:tex} we get\n$\\jap{K}^{-1\/2}\\Pi_\\mathcal{T} I(Z)\\Pi_\\mathcal{T}\\jap{K}^{-\\alpha}\\in\\mathscr{C}(Z)$ for\nany finite $\\mathcal{T}$ and this operator converges in norm to\n$\\jap{K}^{-1\/2} I(Z)\\jap{K}^{-\\alpha}$. Thus all conditions of\nTheorem \\ref{th:afi} are fulfilled by the Hamiltonian $H=K+I$ and so\n$H$ is strictly affiliated to $\\mathscr{C}$. 
\\label{p:algor}\n\n\n\n\\section{The Mourre estimate} \n\\label{s:mou}\n\\protect\\setcounter{equation}{0}\n\n\n\n\\subsection{Proof of the Mourre estimate}\n\\label{ss:mest}\n\nFrom now on we work in the framework of the second part of Section\n\\ref{s:euclid}, so we assume that $\\mathcal{S}$ is a \\emph{finite}\nsemilattice of finite dimensional subspaces of an Euclidean\nspace. In this subsection we prove the Mourre estimate for\nnonrelativistic Hamiltonians. The strategy of the proof is that\nintroduced in \\cite{BG2} and further developed in \\cite{ABG,DG2}\n(graded $C^*$-algebras over infinite semilattices and dispersive\nHamiltonians are considered in Section 5 from \\cite{DG2}). We\nchoose the generator $D$ of the dilation group $W_\\tau$ in $\\mathcal{H}$ as\nconjugate operator for reasons explained below. For special types of\ninteractions, similar to those occurring in quantum field models,\nwhich are allowed by our formalism, better choices can be made, but\nat a technical level there is nothing new in that with respect to\n\\cite{Geo} (these special interactions correspond to distributive\nsemilattices $\\mathcal{S}$).\n\n\n\n\nThe dilations implement a group of automorphisms of the\n$C^*$-algebra $\\mathscr{C}$ which is compatible with the grading, i.e. it\nleaves invariant each component $\\mathscr{C}(Z)$ of $\\mathscr{C}$. In fact, it is\nclear that $W_\\tau^*\\mathscr{C}_{XY}(Z)W_\\tau=\\mathscr{C}_{XY}(Z)$ for all $X,Y,Z$\nhence $W_\\tau^*\\mathscr{C}(Z)W_\\tau=\\mathscr{C}(Z)$. This fact plays a fundamental\nrole in the proof of the Mourre estimate for operators affiliated to\n$\\mathscr{C}$ and explains the choice of $D$ as conjugate operator.\nMoreover, for each $T\\in\\mathscr{C}$ the map $\\tau\\mapsto W^*_\\tau T W_\\tau$\nis norm continuous. We can compute explicitly the function\n$\\widehat\\rho_H$ thanks to the relation\n\\begin{equation}\\label{eq:dlap}\nW^*_\\tau \\Delta_X W_\\tau = \\mathrm{e}^\\tau\\Delta_X \\quad \\text{or}\\quad\n[\\Delta_X, i D]=\\Delta_X \n\\end{equation}\nWe say that a self-adjoint operator $H$ \\emph{is of class $C^1(D)$}\nor \\emph{of class $C^1_\\mathrm{u}(D)$} if $W^*_\\tau RW_\\tau$ as a function\nof $\\tau$ is of class $C^1$ strongly or in norm respectively. Here\n$R=(H-z)^{-1}$ for some $z$ outside the spectrum of $H$. The formal\nrelation\n\\begin{equation}\\label{eq:dres}\n[D,R]= R[H,D] R \n\\end{equation}\ncan be given a rigorous meaning as follows. If $H$ is of class\n$C^1(D)$ then the intersection $\\mathscr{D}$ of the domains of the operators\n$H$ and $D$ is dense in $D(H)$ and the sesquilinear form with domain\n$\\mathscr{D}$ associated to the formal expression $HD-DH$ is continuous for\nthe topology of $D(H)$ so extends uniquely to a continuous\nsesquilinear form on the domain of $H$ which is denoted\n$[H,D]$. This defines the right hand side of \\eqref{eq:dres}. The\nleft hand side can be defined for example as \n$i\\frac{d}{d\\tau}W_\\tau^*RW_\\tau|_{\\tau=0}$. \n\nFor Hamiltonians as those considered here it is easy to decide that\n$H$ is of class $C^1(D)$ in terms of properties of the commutator\n$[H, D]$. Moreover, the following is easy to prove: \\emph{if $H$\n is affiliated to $\\mathscr{C}$ then $H$ is of class $C^1_\\mathrm{u}(D)$ if and\n only if $H$ is of class $C^1(D)$ and $[R,D]\\in\\mathscr{C}$}.\n\nLet $H$ be of class $C^1(D)$ and $\\lambda\\in\\mathbb{R}$. 
Then for each\n$\\theta\\in\\cc_{\\mathrm{c}}(\\mathbb{R})$ with $\\theta(\\lambda)\\neq0$ one may find a real\nnumber $a$ and a compact operator $K$ such that \n\\begin{equation}\\label{eq:must}\n\\theta(H)^*[H,iD]\\theta(H)\\geq a|\\theta(H)|^2+K.\n\\end{equation}\n\n\\begin{definition}\\label{df:must}\nThe upper bound $\\widehat\\rho_H(\\lambda)$ of the numbers $a$ for which\nsuch an estimate holds is \\emph{the best constant in the Mourre\n estimate for $H$ at $\\lambda$}. The \\emph{threshold set} of $H$\n(relative to $D$) is the closed real set\n\\begin{equation}\\label{eq:thr0}\n\\tau(H)=\\{\\lambda \\mid \\widehat\\rho_H(\\lambda)\\leq0\\}\n\\end{equation}\nOne says that $D$ is \\emph{conjugate to} $H$ at $\\lambda$ if \n$\\widehat\\rho_H(\\lambda)>0$. \n\\end{definition}\n\nThe set $\\tau(H)$ is closed because the function\n$\\widehat\\rho_H:\\mathbb{R}\\to]-\\infty,\\infty]$ is lower semicontinuous.\n\n\nTo each closed real set $A$ we associate the function\n$N_A:\\mathbb{R}\\to[-\\infty,\\infty[$ defined by\n\\begin{equation}\\label{eq:na}\nN_A(\\lambda)=\\sup\\{ x\\in A \\mid x\\leq\\lambda\\}.\n\\end{equation}\nWe make the convention $\\sup\\emptyset=-\\infty$. Thus $N_A$ may take\nthe value $-\\infty$ if and only if $A$ is bounded from below and then\n$N_A(\\lambda)=-\\infty$ if and only if $\\lambda<\\min A$. The function\n$N_A$ is further discussed during the proof of Lemma \\ref{lm:nab}.\n\n\nNonrelativistic many-body Hamiltonians have been introduced in\nDefinition \\ref{df:NR}. Let $\\mathrm{ev}(T)$ be the set of\neigenvalues of an operator $T$.\n\n\\begin{theorem}\\label{th:thr}\n Let $\\mathcal{S}$ be finite and let $H=H_\\mathcal{S}$ be a nonrelativistic\n many-body Hamiltonian of class $C^1_\\mathrm{u}(D)$. Then\n $\\widehat\\rho_H(\\lambda)=\\lambda-N_{\\tau(H)}(\\lambda)$ for all real\n $\\lambda$ and $\\tau(H)$ is a closed \\emph{countable} real set\n given by\n\\begin{equation}\\label{eq:thr}\n\\tau(H)=\\textstyle{\\bigcup}_{X\\neq O}\\mathrm{ev}(H_{\\mathcal{S}\/X}).\n\\end{equation} \n\\end{theorem}\n\\proof We first treat the case $O\\in\\mathcal{S}$. We need some facts which\nare discussed in detail in Sections 7.2, 8.3 and 8.4 from \\cite{ABG}\n(see pages 51--61 in \\cite{BG2} for a shorter presentation).\n\n(i) For each real $\\lambda$ let $\\rho_H(\\lambda)$ be the upper bound\nof the numbers $a$ for which an estimate like \\eqref{eq:must} but\nwith $K=0$ holds. This defines a lower semicontinuous function\n$\\rho_H:\\mathbb{R}\\to ]-\\infty,\\infty] $ hence the set\n$\\varkappa(H)=\\{\\lambda \\mid \\rho_H(\\lambda)\\leq 0\\}$ is a closed\nreal set called \\emph{critical set} of $H$ (relative to $D$). We\nclearly have $\\rho_H\\leq\\widehat\\rho_H$ and so\n$\\tau(H)\\subset\\varkappa(H)$.\n\n(ii) Let $\\mu(H)$ be the set of eigenvalues of $H$ such that\n$\\widehat\\rho_H(\\lambda)>0$. Then $\\mu(H)$ is a discrete subset of\n$\\mathrm{ev}(H)$ consisting of eigenvalues of finite\nmultiplicity. This is essentially the virial theorem.\n\n(iii) There is a simple and rather unexpected relation between the\nfunctions $\\rho_H$ and $\\widehat\\rho_H$: they are ``almost'' equal. In\nfact, $\\rho_H(\\lambda)=0$ if $\\lambda\\in\\mu(H)$ and\n$\\rho_H(\\lambda)=\\widehat\\rho_H(\\lambda)$ otherwise. In particular\n\\begin{equation}\\label{eq:tev}\n\\varkappa(H)=\\tau(H)\\cup \\mathrm{ev}(H)=\\tau(H)\\sqcup\\mu(H)\n\\end{equation}\nwhere $\\sqcup$ denotes disjoint union. 
\n\n(iv) This step is easy but rather abstract and the $C^*$-algebra\nsetting really comes into play. We assume that $H$ is affiliated to\nour algebra $\\mathscr{C}$. The preceding arguments did not require more than\nthe $C^1(D)$ class. Now we require $H$ to be of class $C^1_\\mathrm{u}(D)$.\nThen the operators $H_{\\geq X}$ are also of class $C^1_\\mathrm{u}(D)$ and\nwe have the important relation (Theorem 8.4.3 in \\cite{ABG} or\nTheorem 4.4 in \\cite{BG2})\n\\[\n\\widehat\\rho_H=\\min_{X\\in\\mathcal{P}(\\mathcal{S})}\\rho_{H_{\\geq X}}.\n\\]\nTo simplify notations we adopt the abbreviations $\\rho_{H_{\\geq\n X}}=\\rho_{\\geq X}$ and instead of $X\\in\\mathcal{P}(\\mathcal{S})$ we write\n$X\\gtrdot O$, which should be read ``$X$ covers $O$''. For coherence\nwith later notations we also set $\\widehat\\rho_H=\\widehat\\rho_\\mathcal{S}$. So\n\\eqref{eq:mustq} may be written\n\\begin{equation}\\label{eq:mustq}\n\\widehat\\rho_\\mathcal{S}=\\min_{X\\gtrdot O}\\rho_{\\geq X}.\n\\end{equation}\n\n(v) From \\eqref{eq:dlap} and \\eqref{eq:NR} we get\n\\[\nH_{\\geq X}=\\Delta_X\\otimes1+ 1\\otimes H_{\\mathcal{S}\/X}, \\quad\n[H_{\\geq X},iD]=\\Delta_X\\otimes1+ 1\\otimes [D,i H_{\\mathcal{S}\/X}].\n\\]\nRecall that we denote $D$ the generator of the dilation group\nindependently of the space in which it acts. We note that the formal\nargument which gives the second relation above can easily be made\nrigorous but this does not matter here. Indeed, since $H_{\\geq X}$\nis of class $C^1_\\mathrm{u}(D)$ and by using the first relation above, one\ncan easily show that $H_{\\mathcal{S}\/X}$ is also of class $C^1_\\mathrm{u}(D)$ (see\nthe proof of Lemma 9.4.3 in \\cite{ABG}). Now we may use Theorem\n8.3.6 from \\cite{ABG} to get\n\\[\n\\rho_{\\geq X}(\\lambda)=\\inf_{\\lambda_1+\\lambda_2=\\lambda}\n\\big(\\rho_{\\Delta_X}(\\lambda_1) + \\rho_{\\mathcal{S}\/X}(\\lambda_2) \\big)\n\\]\nwhere $\\rho_{\\mathcal{S}\/X}=\\rho_{H_{\\mathcal{S}\/X}}$. But clearly if $X\\neq O$ we\nhave $\\rho_{\\Delta_X}(\\lambda)=\\infty$ if $\\lambda<0$ and \n$\\rho_{\\Delta_X}(\\lambda)=\\lambda$ if $\\lambda\\geq0$. Thus we get\n\\begin{equation}\\label{eq:mustt}\n\\rho_{\\geq X}(\\lambda)=\\inf_{\\mu\\leq\\lambda}\n\\big(\\lambda-\\mu + \\rho_{\\mathcal{S}\/X}(\\mu) \\big)\n=\\lambda- \\sup_{\\mu\\leq\\lambda}\\big(\\mu - \\rho_{\\mathcal{S}\/X}(\\mu) \\big).\n\\end{equation}\n\n(vi) Now from \\eqref{eq:mustq} and \\eqref{eq:mustt} we get\n\\begin{equation}\\label{eq:musr}\n\\lambda-\\widehat\\rho_\\mathcal{S}(\\lambda)=\n\\max_{X\\gtrdot O}\\sup_{\\mu\\leq\\lambda}\n\\big(\\mu - \\rho_{\\mathcal{S}\/X}(\\mu)\\big).\n\\end{equation}\nFinally, we are able to prove the formula\n$\\widehat\\rho_H(\\lambda)=\\lambda-N_{\\tau(H)}(\\lambda)$ by induction\nover the semilattice $\\mathcal{S}$. In other terms, we assume that the\nformula is correct if $H$ is replaced by $H_{\\mathcal{S}\/X}$ for all $X\\neq\nO$ and we prove it for $H=H_{\\mathcal{S}\/O}$. So we have to show that the\nright hand side of \\eqref{eq:musr} is equal to\n$N_{\\tau(H)}(\\lambda)$.\n\nAccording to step (iii) above we have $\\rho_{\\mathcal{S}\/X}(\\mu)=0$ if\n$\\mu\\in\\mu(H_{\\mathcal{S}\/X})$ and\n$\\rho_{\\mathcal{S}\/X}(\\mu)=\\widehat\\rho_{\\mathcal{S}\/X}(\\mu)$ otherwise. 
Since by the\nexplicit expression of $\\widehat\\rho_{\\mathcal{S}\/X}$ this is a positive\nfunction and since $\\rho_H(\\lambda)\\leq0$ is always true if\n$\\lambda$ is an eigenvalue, we get $\\mu-\\rho_{\\mathcal{S}\/X}(\\mu)=\\mu$ if\n$\\mu\\in\\mathrm{ev}(H_{\\mathcal{S}\/X})$ and\n\\[\n\\mu-\\rho_{\\mathcal{S}\/X}(\\mu)=\\mu-\\widehat\\rho_{\\mathcal{S}\/X}(\\mu)=\nN_{\\tau(H_{\\mathcal{S}\/X})}(\\mu)\n\\]\notherwise. From the first part of Lemma \\ref{lm:nab} below we get\n\\[\n\\sup_{\\mu\\leq\\lambda}\\big(\\mu - \\rho_{\\mathcal{S}\/X}(\\mu)\\big)=\nN_{\\mathrm{ev}(H_{\\mathcal{S}\/X}) \\cup \\tau(H_{\\mathcal{S}\/X})}(\\lambda).\n\\]\nIf we use the second part of Lemma \\ref{lm:nab} then we see that\n\\[\n\\max_{X\\gtrdot O}\\sup_{\\mu\\leq\\lambda}\\big(\\mu -\n\\rho_{\\mathcal{S}\/X}(\\mu)\\big)=\n\\max_{X\\gtrdot O}N_{\\mathrm{ev}(H_{\\mathcal{S}\/X}) \\cup \\tau(H_{\\mathcal{S}\/X})}\n\\]\nis the $N$ function of the set \n\\[\n\\bigcup_{X\\gtrdot O}\\big(\\mathrm{ev}(H_{\\mathcal{S}\/X}) \n\\cup \\tau(H_{\\mathcal{S}\/X})\\big)=\n\\bigcup_{X\\gtrdot O}\\left(\\mathrm{ev}(H_{\\mathcal{S}\/X}) \n\\bigcup \\bigcup_{Y>X}\\mathrm{ev}(H_{\\mathcal{S}\/Y})\\right)=\n\\bigcup_{X>O}\\mathrm{ev}(H_{\\mathcal{S}\/X})\n\\]\nwhich finishes the proof of\n$\\widehat\\rho_H(\\lambda)=\\lambda-N_{\\tau(H)}(\\lambda)$ hence the proof\nof Theorem \\ref{th:thr} in the case $O\\in\\mathcal{S}$.\n\nNow assume $O\\notin\\mathcal{S}$ and let $E=\\min\\mathcal{S}$. Then $O\\in\\mathcal{S}\/E$ so we may\nuse the preceding result for $H_{\\mathcal{S}\/E}$. Moreover, we have\n$H=\\Delta_E\\otimes 1 + 1\\otimes H_{\\mathcal{S}\/E}$. Thus\n$\\mathrm{ev}(H)=\\emptyset$, $\\widehat\\rho_H=\\rho_H$, and we may use a\nrelation similar to \\eqref{eq:mustt} to get\n\\[\n\\lambda-\\widehat\\rho_H(\\lambda)=\n\\sup_{\\mu\\leq\\lambda}(\\mu-\\rho_{\\mathcal{S}\/E}(\\mu)).\n\\]\nBy what we have shown before we have\n$\\mu-\\rho_{\\mathcal{S}\/E}(\\mu)=N_{\\tau(H_{\\mathcal{S}\/E})}(\\mu)$ if\n$\\mu\\notin\\mu(H_{\\mathcal{S}\/E})$ and otherwise $\\mu-\\rho_{\\mathcal{S}\/E}(\\mu)=\\mu$.\nFrom Lemma \\ref{lm:nab} we get\n$\\lambda-\\widehat\\rho_H(\\lambda)=N_{\\tau(H_{\\mathcal{S}\/E})\\cup\\mu(H_{\\mathcal{S}\/E})}(\\lambda)$.\nBut from \\eqref{eq:tev} we get $\\tau(H_{\\mathcal{S}\/E})\\cup\\mu(H_{\\mathcal{S}\/E})=\n\\tau(H_{\\mathcal{S}\/E})\\cup\\mathrm{ev}(H_{\\mathcal{S}\/E})$. From\n\\eqref{eq:thr} we get\n\\[\n\\tau(H_{\\mathcal{S}\/E})=\\textstyle{\\bigcup}_{Y\\in\\mathcal{S}\/E, Y\\neq O}\\mathrm{ev}(H_{(\\mathcal{S}\/E)\/Y})\n=\\textstyle{\\bigcup}_{X\\in\\mathcal{S}, X\\neq E} \\mathrm{ev}(H_{\\mathcal{S}\/X})\n\\] \nbecause if we write $Y=X\/E$ with $X\\in\\mathcal{S}, X\\neq E$ then\n$(\\mathcal{S}\/E)\/(X\/E)=\\mathcal{S}\/X$. Finally,\n\\[\n\\tau(H_{\\mathcal{S}\/E})\\,\\textstyle{\\bigcup}\\,\\mathrm{ev}(H_{\\mathcal{S}\/E})=\n\\textstyle{\\bigcup}_{X\\in\\mathcal{S}} \\mathrm{ev}(H_{\\mathcal{S}\/X})\n\\]\nwhich proves the Theorem in the case $O\\notin\\mathcal{S}$.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\n\n\nIt remains to show the following fact which was used above.\n\n\\begin{lemma}\\label{lm:nab}\nIf $A$ and $A\\cup B$ are closed and if $M$ is the function given by\n$M(\\mu)=N_A(\\mu)$ for $\\mu\\notin B$ and $M(\\mu)=\\mu$ for $\\mu\\in B$\nthen $\\sup_{\\mu\\leq\\lambda}M(\\mu)=N_{A\\cup B}(\\lambda)$. If $A,B$\nare closed then $\\sup(N_A,N_B)=N_{A\\cup B}$.\n\\end{lemma}\n\\proof The last assertion of the lemma is easy to check; we prove\nthe first one. 
Observe first that the function $N_A$ has the\nfollowing properties:\n\\begin{compactenum}\n\\item[(i)]\n$N_A$ is increasing and right-continuous,\n\\item[(ii)]\n$N_A(\\lambda)=\\lambda$ if $\\lambda\\in A$,\n\\item[(iii)] $N_A$ is locally constant and $N_A(\\lambda)<\\lambda$ on\n $A^\\mathrm{c}\\equiv \\mathbb{R}\\setminus A$.\n\\end{compactenum} \nIndeed, the first assertion in (i) and assertion (ii) are obvious.\nThe second part of (i) follows from the more precise and easy to prove\nfact \n\\begin{equation}\\label{eq:nae}\nN_A(\\lambda+\\varepsilon)\\leq N_A(\\lambda)+\\varepsilon \\quad\n\\text{for all real } \\lambda \\text{ and } \\varepsilon>0.\n\\end{equation}\nA connected component of the open set $A^\\mathrm{c}$ is necessarily an\nopen interval of one of the forms $]-\\infty,y[$ or $]x,y[$ or\n$]x,\\infty[$ with $x,y\\in A$. On the first interval (if such an\ninterval appears) $N_A$ is equal to $-\\infty$ and on the second or\nthe third one it is clearly constant and equal to $N_A(x)$. We also\nnote that the function $N_A$ is characterized by the properties\n(i)--(iii).\n\nThus, if we denote $N(\\lambda)=\\sup_{\\mu\\leq\\lambda}M(\\mu)$, then it\nwill suffices to show that the function $N$ satisfies the conditions\n(i)--(iii) with $A$ replace by $A\\cup B$. Observe that $M(\\mu)\\leq\\mu$\nand the equality holds if and only if $\\mu\\in A\\cup B$. Thus $N$ is\nincreasing, $N(\\lambda)\\leq\\lambda$, and $N(\\lambda)=\\lambda$ if\n$\\lambda\\in A\\cup B$.\n\nNow assume that $\\lambda$ belongs to a bounded connected component\n$]x,y[ $ of $A\\cup B$ (the unbounded case is easier to treat). If\n$x<\\mu0$.\n Then this holds for all $\\varepsilon>0$.\n\\end{corollary}\n\nIndeed, the first part of condition (ii) of Proposition\n\\ref{pr:stsob} ($s$ replaced by $s+\\varepsilon$) is\nautomatically satisfied.\n\nWe now give a Sobolev space version of Proposition \\ref{pr:tex}\nwhich uses the weights $\\jap{\\cdot}^s$ and is convenient in\napplications. By using Theorem \\ref{th:xyzeintr} instead of Theorem\n\\ref{th:yzintr} in the proof of Proposition \\ref{pr:tex} we get:\n\n\\begin{proposition}\\label{pr:etex}\nLet $s,t>0$ and $Z\\subset X\\cap Y$. Let \n$I_{XY}(Z)\\in L(\\mathcal{H}^t_Y,\\mathcal{H}^{-s}_X)$\nsuch that the following relations hold in norm in\n$ L(\\mathcal{H}^{t+\\varepsilon}_Y,\\mathcal{H}^{-s}_X)$ for some $\\varepsilon > 0$:\n\\begin{enumerate}\n\\item[{\\rm(1)}]\n$U_z I_{XY}(Z)=I_{XY}(Z) U_z$ if $z\\in Z$ and\n$V^*_z I_{XY}(Z) V_z\\to I_{XY}(Z)$ if $z\\to 0$ in $Z$,\n\\item[{\\rm(2)}] \n$I_{XY}(Z)(V_y-1)\\to 0$ if $y\\to 0$ in $Y\/Z$.\n\\end{enumerate}\nIf $h_X,h_Y$ are continuous real functions on $X,Y$ such that\n$h_X(x)\\sim\\jap{x}^{2s}$ and $h_Y(y)\\sim\\jap{y}^{2t}$ and if we set\n$K_X=h_X(P), K_Y=h_Y(P)$ then\n$\\jap{K_X}^{-1\/2}I_{XY}(Z)\\jap{K_Y}^{-\\alpha}\\in\\mathscr{C}_{XY}(Z)$ if\n$\\alpha>1\/2$.\n\\end{proposition}\n\nOur next purpose is to discuss in more detail the structure of the\noperators $I_{XY}(Z)$ from Proposition \\ref{pr:etex}. For this we\nmake a Fourier transformation $\\mathcal{F}_Z$ in the $Z$ variable as in the\nproof of Theorem \\ref{th:xyzeintr}.\n\nWe fix $X,Y,Z$ with $Z\\subset X\\cap Y$, use the tensor\nfactorizations \\eqref{eq:xyzint} and make identifications like\n$\\mathcal{H}_Z\\otimes\\mathcal{H}_{X\/Z}= L^2(Z;\\mathcal{H}_{X\/Z})$. 
Thus\n$\\mathcal{H}_X=\\mathcal{H}_Z\\otimes\\mathcal{H}_{X\/Z}$ and $\\Delta_X=\\Delta_Z\\otimes 1 +\n1\\otimes \\Delta_{X\/Z}$ hence if $s\\geq0$\n\\begin{equation}\\label{eq:stens}\n\\mathcal{H}^s(X)=\\mathcal{H}(Z;\\mathcal{H}^s(X\/Z))\\cap \\mathcal{H}^s(Z;\\mathcal{H}_{X\/Z})=\n\\big(\\mathcal{H}_Z\\otimes\\mathcal{H}^s(X\/Z)\\big)\\cap \n\\big(\\mathcal{H}^s(Z)\\otimes\\mathcal{H}_{X\/Z}\\big)\n\\end{equation}\nwhere our notations are extended to vector-valued Sobolev spaces. \nClearly\n\\begin{equation}\\label{eq:lap}\n\\mathcal{F}_Z \\jap{P_X}^s \\mathcal{F}_Z^{-1} = \n\\int_Z^\\oplus (1+|k|^2+|P_{X\/Z}|^2)^{s\/2} \\text{d} k.\n\\end{equation}\nThen from \\eqref{eq:CZ} and $\\mathscr{T}_Z=\\mathcal{F}_Z^{-1}\\cc_{\\mathrm{o}}(Z)\\mathcal{F}_Z$ we get\n\\begin{equation*}\n\\mathscr{C}_{XY}(Z)=\\mathscr{T}_Z\\otimes \\mathscr{K}_{X\/Z,Y\/Z}=\n\\mathcal{F}_Z^{-1}\\cc_{\\mathrm{o}}(Z;\\mathscr{K}_{X\/Z,Y\/Z})\\mathcal{F}_Z.\n\\end{equation*}\nTo each weakly measurable map $I_{XY}^Z:Z\\to\nL(\\mathcal{H}^t_{Y\/Z},\\mathcal{H}^{-s}_{X\/Z})$ such that\n\\begin{equation}\\label{eq:Iest}\n\\sup\\nolimits_k\n\\|(1+|k|+|P_{X\/Z}|)^{-s}I_{XY}^Z(k)(1+|k|+|P_{Y\/Z}|)^{-t}\\| <\\infty.\n\\end{equation} \nwe associate a continuous operator \n$I_{XY}(Z):\\mathcal{H}^t_Y\\to\\mathcal{H}^{-s}_X$ by the relation\n\\begin{equation}\\label{eq:Ixyz}\n\\mathcal{F}_Z I_{XY}(Z) \\mathcal{F}_Z^{-1} \\equiv \\int_Z^\\oplus I_{XY}^Z(k) \\text{d} k.\n\\end{equation}\nThe following fact is known: a continuous operator\n$T:\\mathcal{H}^t_Y\\to\\mathcal{H}^{-s}_X$ is of the preceding form if and only if\n$U_aT=TUa$ for all $a\\in Z$. From the preceding results we get\n(notations are as in Remark \\ref{re:iaff}):\n\n\\begin{proposition}\\label{pr:zxy}\n Let $X,Y,Z\\in\\mathcal{S}$ with $Z\\subset X\\cap Y$ and assume that\n $\\mathcal{G}^1_X=\\mathcal{H}^s_X$ and $\\mathcal{G}^1_Y=\\mathcal{H}^t_Y$. An operator\n $I_{XY}(Z):\\mathcal{H}^t_Y\\to\\mathcal{H}^{-s}_X$ satisfies the conditions of\n Remark \\ref{re:iaff} if and only if it is of the form\n \\eqref{eq:Ixyz} with a norm continuous function $I_{XY}^Z:Z\\to\n L^\\circ(\\mathcal{H}^t_{Y\/Z},\\mathcal{H}^{-s}_{X\/Z})$ satisfying \\eqref{eq:Iest}.\n\\end{proposition}\n\n\n\\subsection{Auxiliary results}\n\\label{ss:lemma}\n\nIn this subsection we collect some useful technical results. Let\n$\\mathcal{E},\\mathcal{F},\\mathcal{G},\\mathcal{H}$ be Hilbert spaces. Note that we have a canonical\nidentification (tensor products are discussed in \\S\\ref{ss:ha})\n\\begin{equation}\\label{eq:comtens}\nK(\\mathcal{E},\\mathcal{F})\\otimes K(\\mathcal{G},\\mathcal{H})\\cong K(\\mathcal{E}\\otimes\\mathcal{G},\\mathcal{F}\\otimes\\mathcal{H}),\n\\hspace{2mm}\\text{in particular}\\hspace{2mm}\nK(\\mathcal{E},\\mathcal{F}\\otimes\\mathcal{H})\\cong K(\\mathcal{E},\\mathcal{F})\\otimes\\mathcal{H}.\n\\end{equation}\nAssume that we have continuous injective embeddings $\\mathcal{E}\\subset\\mathcal{G}$\nand $\\mathcal{F}\\subset\\mathcal{G}$. 
We equip $\\mathcal{E}\\cap\\mathcal{F}$ with the intersection\ntopology defined by the norm $(\\|g\\|_\\mathcal{E}^2+\\|g\\|_\\mathcal{F}^2)^{1\/2}$, hence\n$\\mathcal{E}\\cap\\mathcal{F}$ becomes a Hilbert space continuously embedded in $\\mathcal{G}$.\n\n\\begin{lemma}\\label{lm:efgh}\n The map $K(\\mathcal{E},\\mathcal{H})\\times K(\\mathcal{F},\\mathcal{H}) \\to K(\\mathcal{E}\\cap\\mathcal{F},\\mathcal{H})$ which\n associates to $S\\in K(\\mathcal{E},\\mathcal{H})$ and $T\\in K(\\mathcal{F},\\mathcal{H})$ the operator\n $S|_{\\mathcal{E}\\cap\\mathcal{F}}+T|_{\\mathcal{E}\\cap\\mathcal{F}} \\in K(\\mathcal{E}\\cap\\mathcal{F},\\mathcal{H})$ is\n surjective. Thus, slightly formally,\n\\begin{equation}\\label{eq:efgh}\nK(\\mathcal{E}\\cap\\mathcal{F},\\mathcal{H})=K(\\mathcal{E},\\mathcal{H}) + K(\\mathcal{F},\\mathcal{H}).\n\\end{equation}\n\\end{lemma}\n\\proof It is clear that the map is well defined. Let $R\\in\nK(\\mathcal{E}\\cap\\mathcal{F},\\mathcal{H})$, we have to show that there are $S,T$ as in the\nstatement of the proposition such that $R=\nS|_{\\mathcal{E}\\cap\\mathcal{F}}+T|_{\\mathcal{E}\\cap\\mathcal{F}}$. Observe that the norm on\n$\\mathcal{E}\\cap\\mathcal{F}$ has been chosen such that the linear map\n$g\\mapsto(g,g)\\in\\mathcal{E}\\oplus\\mathcal{F}$ be an isometry with range a closed\nlinear subspace $\\mathcal{I}$. Consider $R$ as a linear map $\\mathcal{I}\\to\\mathcal{H}$ and\nextend it to the orthogonal of $\\mathcal{I}$ by zero. The so defined map\n$\\widetilde R:\\mathcal{I}\\to\\mathcal{H}$ is clearly compact. Let $S,T$ be defined by\n$Se=\\widetilde R(e,0)$ and $Tf=\\widetilde R(0,f)$. Clearly $S\\in\nK(\\mathcal{E},\\mathcal{H})$ and $T\\in K(\\mathcal{F},\\mathcal{H})$ and if $g\\in\\mathcal{E}\\cap\\mathcal{F}$ then\n\\[\nSg+Tg=\\widetilde R(g,0)+\\widetilde R(0,g)=\\widetilde R(g,g)=Rg\n\\]\nwhich proves the lemma.\n\\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\nWe give some applications. 
If $E,F$ are Euclidean spaces and $s>0$\nis real then\n\\begin{equation}\\label{eq:EFs}\n\\mathcal{H}^s_{E\\oplus F}=\\big(\\mathcal{H}^s_E\\otimes\\mathcal{H}_F\\big)\\cap\n\\big(\\mathcal{H}_E\\otimes\\mathcal{H}^s_F\\big)\n\\end{equation}\nhence Lemma \\ref{lm:efgh} gives for an arbitrary Hilbert space $\\mathcal{H}$\n\\begin{equation}\\label{eq:EFH}\nK(\\mathcal{H}^s_{E\\oplus F},\\mathcal{H})=\nK(\\mathcal{H}^s_E\\otimes\\mathcal{H}_F,\\mathcal{H}) + K(\\mathcal{H}_E\\otimes\\mathcal{H}^s_F,\\mathcal{H}).\n\\end{equation}\nIf $\\mathcal{H}$ itself is a tensor product $\\mathcal{H}=\\mathcal{H}'\\otimes\\mathcal{H}''$ then we\ncan combine this with \\eqref{eq:comtens} and get\n\\begin{equation}\\label{eq:EFHEF}\nK(\\mathcal{H}^s_{E\\oplus F},\\mathcal{H}'\\otimes\\mathcal{H}'') =\nK(\\mathcal{H}^s_E,\\mathcal{H}')\\otimes K(\\mathcal{H}_F,\\mathcal{H}'')\n + K(\\mathcal{H}_E,\\mathcal{H}') \\otimes K(\\mathcal{H}^s_F,\\mathcal{H}'').\n\\end{equation}\nConsider now a triplet $X,Y,Z$ such that $Z\\subset X\\cap Y$ and\ndenote\n\\begin{equation}\\label{eq:xyze}\nE=(X\\cap Y)\/Z \\hspace{2mm}\\text{and}\\hspace{2mm} \nX \\boxplus Y = X\/Y\\times Y\/X.\n\\end{equation}\nThen $Y\/Z=E\\oplus(Y\/X) \\text{ and } X\/Z=E\\oplus(X\/Y)$ hence by using\n\\eqref{eq:EFHEF} we get for example \n\\begin{align}\n\\mathcal{H}_{Y\/Z} &=\\mathcal{H}_E\\otimes\\mathcal{H}_{Y\/X} \\text{ and \\ }\n\\mathcal{H}_{X\/Z} =\\mathcal{H}_E\\otimes\\mathcal{H}_{X\/Y} \\label{eq:new} \\\\\n\\mathcal{H}^2_{Y\/Z} &=\\big(\\mathcal{H}^2_E\\otimes\\mathcal{H}_{Y\/X}\\big)\\cap \n\\big(\\mathcal{H}_E\\otimes\\mathcal{H}^2_{Y\/X}\\big) \\label{eq:klea} \\\\\n\\mathcal{H}^{-2}_{X\/Z} &= \\mathcal{H}^{-2}_E\\otimes\\mathcal{H}_{X\/Y}\n+\\mathcal{H}_E\\otimes\\mathcal{H}^{-2}_{X\/Y}. \\label{eq:klean}\n\\end{align}\nBy using once again \\eqref{eq:EFHEF} and the notations introduced in\n\\eqref{eq:ikef}, we get\n\\begin{equation}\\label{eq:kde3}\n\\mathscr{K}^2_{X\/Z,Y\/Z} = \\mathscr{K}^2_E\\otimes \\mathscr{K}_{X\/Y,Y\/X} + \n\\mathscr{K}_E\\otimes\\mathscr{K}^2_{X\/Y,Y\/X}.\n\\end{equation}\nWe identify a Hilbert-Schmidt operator with its kernel, so $L^2(X\n\\boxplus Y)\\subset \\mathscr{K}_{X\/Y,Y\/X}$ is the subspace of Hilbert-Schmidt\noperators. The we have a strict inclusion\n\\begin{equation} \\label{eq:kde9} \nL^2(X \\boxplus Y;\\mathscr{K}^2_E) \\subset \\mathscr{K}^2_E\\otimes \\mathscr{K}_{X\/Y,Y\/X}\n\\end{equation}\n\n\n\\subsection{First order regularity conditions}\n\\label{ss:scs} \n\nIn the next two subsections we consider interactions as in\nProposition \\ref{pr:nrm} and give explicit conditions on the\n$I_{XY}^Z$ such that $H$ be of class $C^1_\\mathrm{u}(D)$. We recall that\nthe assumptions of Proposition \\ref{pr:nrm} can be stated as\nfollows: for all $Z\\subset X\\cap Y$\n\\begin{align}\n& I^Z_{XY}:\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}_{X\/Z} \n\\hspace{2mm} \\text{is compact and satisfies}\\hspace{2mm} \n(I^{Z}_{XY})^*\\supset I^Z_{YX}, \\label{eq:A}\\\\\n& [D,I^Z_{XY}]:\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z} \\hspace{2mm}\\text{is\ncompact}. 
\\label{eq:B}\n\\end{align}\nIf \\eqref{eq:A} is satisfied then by duality and interpolation we\nget\n\\begin{equation}\\label{eq:interpol}\nI^Z_{XY}:\\mathcal{H}^\\theta_{Y\/Z}\\to\\mathcal{H}^{\\theta-2}_{X\/Z} \\quad \n\\text{is a compact operator for all } 0\\leq \\theta \\leq 2,\n\\end{equation} \nin particular the operator $[D,I^Z_{XY}]\\equiv\nD_{X\/Z}I^Z_{XY}-I^Z_{XY}D_{Y\/Z}$ restricted to the space of\nfunctions in $\\mathcal{H}^2_{Y\/Z}$ with compact support has values in the\nspace of functions locally in $\\mathcal{H}^{-1}_{X\/Z}$. We use, for\nexample, the relation $D_{X\/Z}=D_E\\otimes 1 + 1\\otimes D_{X\/Y}$\nrelatively to \\eqref{eq:new} to decompose this operator as follows:\n\\begin{align}\n[D,I^Z_{XY}] \n&=(D_E+D_{X\/Y})I^Z_{XY}-I^Z_{XY}(D_E+D_{Y\/X}) \\nonumber \\\\\n&=[D_E,I^Z_{XY}] +D_{X\/Y}I^Z_{XY} -I^Z_{XY}D_{Y\/X}. \\label{eq:dec}\n\\end{align}\nSince $I^Z_{XY}D_{Y\/X}\\subset (D_{Y\/X}I^Z_{YX})^*$ if \\eqref{eq:A}\nis satisfied then condition \\eqref{eq:B} follows from:\n\\begin{equation}\\label{eq:dii}\n[D_E,I^Z_{XY}] \\text{ and } D_{X\/Y}I^Z_{XY} \\text{ are compact\n operators } \\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z} \\text{ for all } X,Y,Z.\n\\end{equation}\nAccording to \\eqref{eq:kde3} the first part of condition \n\\eqref{eq:A} is equivalent to\n\\begin{equation}\\label{eq:kde4}\nI_{XY}^Z=J+J' \\text{ for some }\nJ\\in \\mathscr{K}^2_E\\otimes \\mathscr{K}_{X\/Y,Y\/X} \\text{ and }\nJ'\\in\\mathscr{K}_E\\otimes\\mathscr{K}^2_{X\/Y,Y\/X}. \n\\end{equation}\nAs a particular case, from \\eqref{eq:kde9} we obtain the example\ndiscussed in \\S\\ref{ss:examples}. The compactness conditions\n\\eqref{eq:dii} are conditions on the kernels $[D_E,I^Z_{XY}(x',y')]$\nand $x'\\cdot\\nabla_{x'}I_{XY}^Z(x',y')$ of the operators\n$[D_E,I^Z_{XY}]$ and $D_{X\/Y}I^Z_{XY}$. Note that a condition on\n$I^Z_{XY}D_{Y\/X}$ is a requirement on the kernel\n$y'\\cdot\\nabla_{y'}I_{XY}^Z(x',y')$.\n\n\n\\subsection{Creation-annihilation type interactions}\n\\label{ss:xsupy} \n\nTo see the relation with the\ncreation-annihilation type interactions characteristic to quantum\nfield models we consider now the case when\n$Y\\subset X$ strictly. Then\n\\begin{equation}\\label{eq:ysux}\n\\mathscr{C}_{XY}=\\mathscr{C}_Y\\otimes\\mathcal{H}_{X\/Y}, \\quad \n\\mathscr{C}_{XY}(Z)=\\mathscr{C}_Y(Z)\\otimes\\mathcal{H}_{X\/Y}, \\quad\n\\mathcal{H}_X=\\mathcal{H}_Y\\otimes\\mathcal{H}_{X\/Y}\n\\end{equation}\nwhere the first two tensor product have to be interpreted as\nexplained in \\S\\ref{ss:ha}. In particular we have\n\\begin{equation}\\label{eq:sux}\nL^2(X\/Y;\\mathscr{C}_Y)\\subset\\mathscr{C}_{XY} \\quad \\text{and} \\quad\nL^2(X\/Y;\\mathscr{C}_Y(Z))\\subset\\mathscr{C}_{XY}(Z)\n\\quad \\text{strictly}. \n\\end{equation}\nIf $Z\\subset Y$ then $X=Z\\oplus(Y\/Z)\\oplus(X\/Y)$\nand $X\/Z=(Y\/Z)\\oplus(X\/Y)$ hence $\\mathcal{H}_{X\/Z}=\\mathcal{H}_{Y\/Z}\\otimes\\mathcal{H}_{X\/Y}$\nand thus the operator $I^Z_{XY}$ is\njust a compact operator\n\\begin{equation}\\label{eq:ixyo}\nI^Z_{XY} : \\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}_{Y\/Z}\\otimes\\mathcal{H}_{X\/Y}.\n\\end{equation}\nIf $\\mathcal{E},\\mathcal{F},\\mathcal{G}$ are Hilbert spaces then $K(\\mathcal{E},\\mathcal{F}\\otimes\\mathcal{G})\\cong\nK(\\mathcal{E},\\mathcal{F})\\otimes\\mathcal{G}$. 
Hence \\eqref{eq:ixyo} means\n\\begin{equation}\\label{eq:ixyoo}\nI^Z_{XY} \\in \\mathscr{K}^2_{Y\/Z}\\otimes \\mathcal{H}_{X\/Y}.\n\\end{equation}\nLet $\\mathscr{I}_{XY}= {\\textstyle\\sum_{Z\\subset X\\cap Y}} 1_Z\\otimes\n\\mathscr{K}^2_{X\/Z,Y\/Z}$, where the sum is direct and closed in\n$\\mathscr{K}^2_{XY}$. A usual nonrelativistic $N$-body Hamiltonian\nassociated to the semilattice $\\mathcal{S}_X$ of subspaces of $X$ is of the\nform $\\Delta_X+I_X$ with $I_X\\in\\mathscr{I}_{X}\\equiv\\mathscr{I}_{XX}$. Thus the\ninteraction which couples the $X$ and $Y$ systems is of the form\n\\begin{equation}\\label{eq:xyinter}\nI_{XY}={\\textstyle\\sum_{Z\\in\\mathcal{S}_Y}} \n1_Z\\otimes I^Z_{XY} \\in \\mathscr{I}_{Y}\\otimes \\mathcal{H}_{X\/Y}. \n\\end{equation}\nIn particular we may take $I_{XY}\\in L^2(X\/Y;\\mathscr{I}_Y)$, but we stress\nthat the space $\\mathscr{I}_{Y}\\otimes \\mathcal{H}_{X\/Y}$ is much larger (see\n\\S\\ref{ss:ha}). More explicitly, a square integrable function\n$I_{XY}:X\/Y\\to\\mathscr{I}_Y$ determines an operator $I_{XY}:\\mathcal{H}^2_Y\\to\\mathcal{H}_X$\nby the following rule: it associates to $u\\in\\mathcal{H}^2(Y)$ the function\n$y'\\mapsto I_{XY}(y')u$ which belongs to $L^2(X\/Y;\\mathcal{H}_{X\/Y})=\\mathcal{H}_X$.\nWe may also write\n\\begin{equation}\\label{eq:IV}\n(I_{XY}u)(x)=(I_{XY}(y')u)(y) \\quad \\text{where }\nx\\in X=Y\\oplus X\/Y \\text{ is written as } x=(y,y').\n\\end{equation}\nWe say that the operator valued function $I_{XY}$ is the kernel of\nthe operator $I_{XY}$. The adjoint $I_{YX}=I_{XY}^*$ is an integral\noperator in the $y'$ variable (like an annihilation\noperator). Indeed, if $v\\in\\mathcal{H}_X$ is thought as a map $y'\\mapsto\nv(y')\\in\\mathcal{H}_Y$ then we have $I_{YX}v=\\int_{X\/Y}\nI^*_{XY}(y')v(y')\\text{d} y'$ at least formally.\n\n\nThe particular case when the function $I_{XY}$ is factorizable\nclarifies the connection with the quantum field type interactions:\nlet $I_{XY}$ be a finite sum $I_{XY}=\\sum_i V_Y^i\\otimes\\phi_i$\nwhere $V^i_Y\\in\\mathscr{I}_Y$ and $\\phi_i\\in \\mathcal{H}_{X\/Y}$, then\n\\begin{equation}\\label{eq:qft}\nI_{XY}u={\\textstyle\\sum_i} (V_Y^i u)\\otimes\\phi_i \\quad\n\\text{as an operator } \nI_{XY}:\\mathcal{H}^2_Y\\to\\mathcal{H}_X=\\mathcal{H}_Y\\otimes\\mathcal{H}_{X\/Y}.\n\\end{equation}\nThis is a sum of $N$-body type interactions $V^i_Y$ tensorized with\noperators which create particles in states~ $\\phi_i$.\n\nThe conditions on the ``commutator'' $[D,I_{XY}]$ may be written\nin terms of the kernel of $I_{XY}$. The\nrelation \\eqref{eq:dec} becomes $\n[D,I_{XY}]=[D_Y,I_{XY}]+D_{X\/Y}I_{XY}$. The operator $D_Y$ acts only\non the variable $y$ and $D_{X\/Y}$ acts only on the variable\n$y'$. Thus $[D_Y,I_{XY}]$ and $D_{X\/Y}I_{XY}$ are operators of the\nsame nature as $I_{XY}$ but more singular: the kernel of\n$[D_Y,I_{XY}]$ is the function $y'\\mapsto [D_Y,I_{XY}(y')]$ and that\nof $2iD_{X\/Y}I_{XY}$ is the function $ y'\\mapsto\n(y'\\cdot\\nabla_{y'}+n\/2) I_{XY}(y')$. Thus, to get\n\\eqref{eq:B} it suffices to\nrequire two conditions on the kernel $I_{XY}$, one on\n$[D_Y,I_{XY}(y')]$ and a second one on\n$y'\\cdot\\nabla_{y'}I_{XY}(y')$.\n\nIf we decompose $I_{XY}$ as in \\eqref{eq:xyinter} with\n$I^Z_{XY}:\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}_{Y\/Z}\\otimes\\mathcal{H}_{X\/Y}$ compact then the\n(formal) kernel of $I^Z_{XY}$ is a $\\mathscr{K}^2_{Y\/Z}$ valued map on\n$X\/Y$. 
We require that $[D_{Y\/Z},I^Z_{XY}]$ and $D_{X\/Y}I^Z_{XY}$\nbe compact operators $\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z}$. From\n\\eqref{eq:stens} and $X\/Z=(Y\/Z)\\oplus(X\/Y)$ we get\n\\begin{equation*}\n\\mathcal{H}^2_{X\/Z} = \\big(\\mathcal{H}_{Y\/Z}\\otimes\\mathcal{H}^2_{X\/Y}\\big)\\cap\n\\big(\\mathcal{H}^2_{Y\/Z}\\otimes\\mathcal{H}_{X\/Y}\\big), \\quad\n\\mathcal{H}^{-2}_{X\/Z} = \\mathcal{H}_{Y\/Z}\\otimes\\mathcal{H}^{-2}_{X\/Y} +\n\\mathcal{H}^{-2}_{Y\/Z}\\otimes\\mathcal{H}_{X\/Y}\n\\end{equation*} \nwhich are helpful in checking these compactness requirements.\n\n\n\n\\subsection{Besov regularity classes}\n\\label{ss:scsc} \n\nWe recall some facts concerning the Besov type regularity class $\nC^{1,1}(D)$; we refer to \\cite{ABG} for details on these\nmatters. Since the conjugate operator $D$ is fixed we shall not\nindicate it in the notation from now on. An operator $T\\in L(\\mathcal{H})$\nis of class $C^{1,1}$ if\n\\begin{equation}\\label{eq:c11}\n\\int_0^1\\|W^*_{2\\varepsilon}T W_{2\\varepsilon}-\n2W^*_{\\varepsilon}T W_{\\varepsilon} +T\\|\n\\frac{\\text{d}\\varepsilon}{\\varepsilon^2} \\equiv\n\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^2\nT\\|\\frac{\\text{d}\\varepsilon}{\\varepsilon^2} <\\infty\n\\end{equation}\nwhere $\\mathcal{W}_\\varepsilon$ is the automorphism of $L(\\mathcal{H})$ defined by\n$\\mathcal{W}_\\varepsilon T=W_\\varepsilon^*T W_\\varepsilon$. The condition\n\\eqref{eq:c11} implies that $T$ is of class $C^1_\\mathrm{u}$ and is\njust slightly more than this. Indeed, $T$ is of class $C^1$ or\n$C^1_\\mathrm{u}$ if and only if the limit\n\\[\n\\lim_{\\tau\\to 0}\\int_\\tau^1 (\\mathcal{W}_\\varepsilon-1)^2 T\n\\frac{\\text{d}\\varepsilon}{\\varepsilon^2}\n\\] \nexists strongly or in norm respectively. The following subclass of\n$C^{1,1}$ is useful in applications: $T$ is called of class\n$C^{1+}$ if $T$ is of class $C^1$, so the commutator $[D,T]$ is a\nbounded operator, and\n\\begin{equation}\\label{eq:din}\n\\int_0^1\\|W_\\varepsilon^*[D,T]W_\\varepsilon-[D,T]\\|\n\\frac{\\text{d}\\varepsilon}{\\varepsilon} <\\infty.\n\\end{equation}\nThen $C^{1+}\\subset C^{1,1}$. The class most frequently used in the\ncontext of the Mourre theorem is $C^2$: this is the set of $T\\in\nC^1$ such that $[D,T]\\in C^1$. But $[D,T]\\in C^1$ if and only if\n\\[\n\\|W_\\varepsilon^*[D,T] W_\\varepsilon-[D,T]\\| \\leq \nC |\\varepsilon| \\quad \\text{for some constant $C$ and all real }\n\\varepsilon \n\\]\nhence this condition is much stronger then the Dini type condition\n\\eqref{eq:din}. A self-adjoint operator $H$ is of class $C^{1,1}$,\n$C^{1+}$ or $C^2$ if its resolvent is of class $C^{1,1}$, $C^{1+}$\nor $C^2$ respectively. \n\nWe now consider a Hamiltonian as in Proposition \\ref{pr:nrm} and\ndiscuss conditions which ensure that $H$ is of class $C^{1,1}$. An\nimportant point is that the domain $\\mathcal{H}^2$ of $H$ is stable under\nthe dilation group $W_\\tau$. Then Theorem 6.3.4 from \\cite{ABG}\nimplies that $H$ is of class $C^{1,1}$ if and only if\n\\begin{equation}\\label{eq:c11h}\n\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^2H\\|_{\\mathcal{H}^2\\to\\mathcal{H}^{-2}}\n\\frac{\\text{d}\\varepsilon}{\\varepsilon^2} <\\infty.\n\\end{equation}\nAs above $\\mathcal{W}_{\\varepsilon} H=W^*_{\\varepsilon}H W_{\\varepsilon}$\nhence $ (\\mathcal{W}_\\varepsilon-1)^2H=W^*_{2\\varepsilon}H W_{2\\varepsilon}-\n2W^*_{\\varepsilon}H W_{\\varepsilon} +H$. 
We have $H=\\Delta +I$ and\ndue to \\eqref{eq:dlap} the relation \\eqref{eq:c11h} is trivially\nverified by the kinetic part $\\Delta$ of $H$ hence we are only\ninterested in conditions on $I$ which ensure that \\eqref{eq:c11h} is\nsatisfied with $H$ replaced by $I$. If this is the case, by a slight\nabuse of language we say that $I$ is of class $C^{1,1}$. In terms of\nthe coefficients $I_{XY}$, this means\n\\begin{equation}\\label{eq:c11xyz}\n\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^2I_{XY}^Z\\|_{\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z}}\n\\frac{\\text{d}\\varepsilon}{\\varepsilon^2} <\\infty\n\\quad\\text{for all } X,Y,Z.\n\\end{equation}\nWe recall one fact (see \\cite[Ch. 5]{ABG}). Let $I:\\mathcal{H}^2\\to\\mathcal{H}^{-2}$\nbe an arbitrary linear continuous operator. Then\n$[D,I]:\\mathcal{H}^2_\\mathrm{c}\\to\\mathcal{H}^{-3}_{\\mathrm{loc}}$ is well defined and $I$\nis of class $C^1$ (in an obvious sense) if and only if this operator\nis the restriction of a continuous map $\\mathcal{H}^2\\to\\mathcal{H}^{-2}$, which\nwill be denoted also $[D,I]$. We say that $I$ is of class $C^{1+}$\nif this condition is satisfied and \n\\begin{equation}\\label{eq:dini}\n\\int_0^1\\|W_\\varepsilon^*[D,I]W_\\varepsilon-[D,I]\\|_{\\mathcal{H}^2\\to\\mathcal{H}^{-2}}\n\\frac{\\text{d}\\varepsilon}{\\varepsilon} <\\infty.\n\\end{equation}\nAs before, if $I$ is of class $C^{1+}$ then it is of class\n$C^{1,1}$. In terms of the coefficients $I_{XY}^Z$ this means\n\\begin{equation}\\label{eq:dinix}\n\\int_0^1\\|W_\\varepsilon^*[D,I_{XY}^Z]W_\\varepsilon-[D,I_{XY}^Z]\n\\|_{\\mathcal{H}^2_{Y\/Z}\\to\\mathcal{H}^{-2}_{X\/Z}}\n\\frac{\\text{d}\\varepsilon}{\\varepsilon} <\\infty.\n\\end{equation}\nSuch a condition should be imposed on each of the three terms in the\ndecomposition \\eqref{eq:dec} separately.\n\nThe techniques developed in \\S 7.5.3 and on pages 425--429 from\n\\cite{ABG} can be used to get more concrete conditions. The only\nnew fact with respect to the $N$-body situation as treated there is\nthat $\\mathcal{W}_\\tau$ when considered as an operator on $\\mathscr{L}_{X\/Z,Y,Z}$\nfactorizes in a product of three commuting operators. Indeed, if we\nwrite $\\mathcal{H}_{Y\/Z}=\\mathcal{H}_E\\otimes\\mathcal{H}_{Y\/X}$ and\n$\\mathcal{H}_{X\/Z}=\\mathcal{H}_E\\otimes\\mathcal{H}_{X\/Y}$ then we get\n$\\mathcal{W}_\\tau(T)=W^{X\/Y}_{-\\tau}\\mathcal{W}^E_\\tau(T)W^{Y\/X}_\\tau$ where this\ntime we indicated by an upper index the space to which the operator\nis related and, for example, we identified $W^{Y\/X}_\\tau=1_E\\otimes\nW^{Y\/X}_\\tau$. To check the $C^{1,1}$ property in this context\none may use:\n\n\n\\begin{proposition}\\label{pr:inter}\nIf $T\\in\\mathscr{L}:= L(\\mathcal{H}^2_{Y\/Z},\\mathcal{H}^{-2}_{X\/Z})$ then \n$\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^2T\\|_\\mathscr{L}\n\\text{d}\\varepsilon\/\\varepsilon^2<\\infty$ follows from\n\\begin{equation}\\label{eq:inter}\n\\int_0^1\\left(\n\\|(W^{X\/Y}_{\\varepsilon}-1)^2T\\|_\\mathscr{L}+ \n\\|(\\mathcal{W}^E_\\varepsilon-1)^2T\\|_\\mathscr{L}+\n\\|T(W^{Y\/X}_{\\varepsilon}-1)^2\\|_\\mathscr{L}\n\\right)\n\\frac{\\text{d}\\varepsilon}{\\varepsilon^2} < \\infty.\n\\end{equation}\n\\end{proposition}\n\\proof We shall interpret $\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^2T\\|_\\mathscr{L}\n\\text{d}\\varepsilon\/\\varepsilon^2<\\infty$ in terms of real interpolation\ntheory. 
Let $L_\\tau$ be the operator of left multiplication by\n$W^{X\/Y}_{-\\tau}$ and $N_\\tau$ the operator of right multiplication\nby $W^{Y\/X}_{\\tau}$ on $\\mathscr{L}_{X\/Z,Y\/Z}$. If we also set\n$M_\\tau=\\mathcal{W}^E_\\tau$ then we get three commuting operators\n$L_\\tau,M_\\tau,N\\tau$ on $\\mathscr{L}_{X\/Z,Y\/Z}$ such that $\\mathcal{W}_\\tau=L_\\tau\nM_\\tau N_\\tau$. Then it is easy to check a Dini type condition like\n\\eqref{eq:dinix} by using\n\\begin{equation}\\label{eq:tmp}\n\\mathcal{W}_\\tau-1=(L_\\tau-1)M_\\tau N_\\tau+(M_\\tau -1)N_\\tau+(N_\\tau-1). \n\\end{equation}\nOn the other hand, observe that $\\mathcal{W}_\\tau,L_\\tau,M_\\tau,N_\\tau$ are\none parameter groups of operators on the Banach space $\\mathscr{L}$. These\ngroups are not continuous in the ordinary sense but this does not\nreally matter, in fact we are in the setting of \\cite[Ch. 5]{ABG}.\nThe main point is that the integral\n$\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^2T\\|_\\mathscr{L}\\text{d}\\varepsilon\/\\varepsilon^2$\nis finite if and only if\n$\\int_0^1\\|(\\mathcal{W}_\\varepsilon-1)^6T\\|_\\mathscr{L}\\text{d}\\varepsilon\/\\varepsilon^2$\nis finite (see Theorem 3.4.6 in \\cite{ABG}; this is where real\ninterpolation comes into play). Now by taking the sixth power of\n\\eqref{eq:tmp} and developing the right hand side we easily get the\nresult, cf. the formula on top of page 132 of \\cite{ABG}. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\nThe proof of Theorem \\ref{th:BVR} is based on an extension of\nPropositions 9.4.11 and 9.4.12 from \\cite{ABG} to the present\ncontext. Since the argument is very similar, we do not enter into\ndetails. We mention only that the operator $D$ can be written as\n$4D=P \\cdot Q+Q \\cdot P$ where $P=\\oplus_X P_X$ and $Q=\\oplus_X Q_X$\nare suitably interpreted. The proofs in \\cite{ABG} depend only on\nthis structure.\n\n\n \n\n\\section{Appendix: Hamiltonian algebras} \n\\label{s:appb}\n\n\nWe prove here some results on $C^*$-algebras generated by certain\nclasses of ``elementary'' Hamiltonians.\n\n\\subsection{}\\label{ss:a1}\n\nLet $X$ be a locally compact abelian group and let\n$\\{U_x\\}_{x\\in X}$ be a strongly continuous unitary representation\nof $X$ on a Hilbert space $\\mathcal{H}$. Then one can associate to it a\nBorel regular spectral measure $E$ on $X^*$ with values projectors\non $\\mathcal{H}$ such that $U_x=\\int_{X^*}k(x)E(\\text{d} k)$ and this allows us\nto define for each Borel function $\\psi:X^*\\to\\mathbb{C}$ a normal\noperator on $\\mathcal{H}$ by the formula $\\psi(P)=\\int_{X^*} \\psi(k)E(\\text{d}\nk)$. The set $\\mathscr{T}_X(\\mathcal{H})$ of all the operators $\\psi(P)$ with\n$\\psi\\in\\cc_{\\mathrm{o}}(X^*)$ is clearly a non-degenerate $C^*$-algebra of\noperators on $\\mathcal{H}$. The following result, which will be useful in\nseveral contexts, is an easy consequence of the Cohen-Hewitt\nfactorization theorem, see Lemma 3.8 from \\cite{GI4}. Consider an\noperator $A\\in L(\\mathcal{H})$.\n\n\\begin{lemma}\\label{lm:help}\n$\\displaystyle{\\lim_{x\\to0}}\\|(U_x-1)A\\|=0$ if and\nonly if $A=\\psi(P)B$ for some $\\psi\\in\\cc_{\\mathrm{o}}(X^*)$ and $B\\in L(\\mathcal{H})$.\n\\end{lemma}\n\nWe say that an operator $S\\in L(\\mathcal{H})$ is of class $C^0(P)$ if the\nmap $x\\mapsto U_xSU_x^*$ is norm continuous.\n\n\\begin{lemma}\\label{lm:cop}\nLet $S\\in L(\\mathcal{H})$ be of class $C^0(P)$ and let $T\\in\n\\mathscr{T}_X(\\mathcal{H})$. 
Then for each $\\varepsilon>0$ there is $Y\\subset X$\nfinite and there are operators $T_y\\in \\mathscr{T}_X(\\mathcal{H})$ such that\n$\\|ST-\\sum_{y\\in Y} T_y U_{y}SU_{y}^*\\|<\\varepsilon$.\n\\end{lemma}\n\\proof It suffices to assume that $T=\\psi(P)$ where $\\psi$ has a\nFourier transform integrable on $X$, so that $T=\\int_X U_x\n\\widehat\\psi(x) \\text{d} x$, and then to use a partition of unity on $X$\nand the uniform continuity of the map $x\\mapsto U_xSU_x^*$ (see the\nproof of Lemma 2.1 in \\cite{DG1}). \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\nWe say that a subset $\\mathcal{B}$ of $L(\\mathcal{H})$ is $X$-stable if\n$U_xSU_x^*\\in\\mathcal{B}$ whenever $S\\in\\mathcal{B}$ and $x\\in X$. From Lemma\n\\ref{lm:cop} we see that if $\\mathcal{B}$ is an $X$-stable real linear space\nof operators of class $C^0(P)$ then\n\\[\n\\mathcal{B}\\cdot \\mathscr{T}_X(\\mathcal{H})= \\mathscr{T}_X(\\mathcal{H})\\cdot\\mathcal{B}.\n\\] \nSince the $C^*$-algebra $\\mathcal{A}$ generated by $\\mathcal{B}$ is also $X$-stable\nand consists of operators of class $C^0(P)$\n\\begin{equation}\\label{eq:cop}\n\\mathscr{A}\\equiv\\mathcal{A}\\cdot \\mathscr{T}_X(\\mathcal{H})= \\mathscr{T}_X(\\mathcal{H})\\cdot\\mathcal{A}\n\\end{equation} \nis a $C^*$-algebra. The operators $U_x$ implement a norm continuous\naction of $X$ by automorphisms of the algebra $\\mathcal{A}$ so the\n$C^*$-algebra crossed product $\\mathcal{A}\\rtimes X$ is well defined and the\nalgebra $\\mathscr{A}$ is a quotient of this crossed product.\n\nA function $h$ on $X^*$ is called \\emph{$p$-periodic} for some\nnon-zero $p\\in X^*$ if $h(k+p)=h(k)$ for all $k\\in X^*$.\n\n\\begin{proposition}\\label{pr:cop}\nLet $\\mathcal{V}$ be an $X$-stable set of symmetric bounded operators of\nclass $C^0(P)$ and such that $\\lambda\\mathcal{V}\\subset\\mathcal{V}$ if\n$\\lambda\\in\\mathbb{R}$. Denote $\\mathcal{A}$ the $C^*$-algebra generated by $\\mathcal{V}$\nand define $\\mathscr{A}$ by \\eqref{eq:cop}. Let $h:X^*\\to\\mathbb{R}$ be\ncontinuous, not $p$-periodic if $p\\neq0$, and such that\n$|h(k)|\\to\\infty$ as $k\\to\\infty$. Then $\\mathscr{A}$ is the $C^*$-algebra\ngenerated by the self-adjoint operators of the form $h(P+k)+V$ with\n$k\\in X^*$ and $V\\in\\mathcal{V}$.\n\\end{proposition}\n\\proof Denote $K=h(P+k)$ and let $R_\\lambda=(z-K-\\lambda V)^{-1}$\nwith $z$ not real and $\\lambda$ real. Let $\\mathscr{C}$ be the $C^*$-algebra\ngenerated by such operators (with varying $k$ and $V$). By taking\n$V=0$ we see that $\\mathscr{C}$ will contain the $C^*$-algebra generated by\nthe operators $R_0$. By the Stone-Weierstrass theorem this algebra\nis $\\mathscr{T}_X(\\mathcal{H})$ because the set of functions $p\\to(z-h(p+k))^{-1}$\nwhere $k$ runs over $X^*$ separates the points of $X^*$. The\nderivative with respect to $\\lambda$ at $\\lambda=0$ of $R_\\lambda$\nexists in norm and is equal to $R_0VR_0$, so $R_0VR_0\\in\\mathscr{C}$. Since\n$\\mathscr{T}_X\\subset\\mathscr{C}$ we get $\\phi(P)V\\psi(P)\\in\\mathscr{C}$ for all\n$\\phi,\\psi\\in\\cc_{\\mathrm{o}}(X^*)$ and all $V\\in\\mathcal{V}$. Since $V$ is of class\n$C^0(P)$ we have $(U_x-1)V\\psi(P)\\sim V(U_x-1)\\psi(P)\\to0$ in norm\nas $x\\to0$ from which we get $\\phi(P)V\\psi(P)\\to S\\psi(P)$ in norm\nas $\\phi\\to1$ conveniently. Thus $V\\psi(P)\\in\\mathscr{C}$ for $V,\\psi$ as\nabove. This implies $V_1\\cdots V_n\\psi(P)\\in\\mathscr{C}$ for all\n$V_1,\\dots,V_n\\in\\mathcal{V}$. 
Indeed, assuming $n=2$ for simplicity, we\nwrite $\\psi=\\psi_1\\psi_2$ with $\\psi_i\\in\\cc_{\\mathrm{o}}(X^*)$ and then Lemma\n\\ref{lm:cop} allows us to approximate $V_2\\psi_1(P)$ in norm with\nlinear combinations of operators of the form $\\phi(P)V^x_2$ where\nthe $V^x_2$ are translates of $V_2$. Since $\\mathscr{C}$ is an algebra we\nget $V_1\\phi(P) V^x_2\\psi_2(P)\\in\\mathscr{C}$ hence passing to the limit we\nget $V_1V_2\\psi(P)\\in\\mathscr{C}$. Thus we proved $\\mathscr{A}\\subset\\mathscr{C}$. The\nconverse inclusion follows from a series expansion of $R_\\lambda$ in\npowers of $V$. \\hfill \\vrule width 8pt height 9pt depth-1pt \\medskip\n\n\nThe next two corollaries follow easily from Proposition\n\\ref{pr:cop}. We take $\\mathcal{H}=L^2(X)$ which is equipped with the usual\nrepresentations $U_x,V_k$ of $X$ and $X^*$ respectively. Let\n$W_\\xi=U_xV_k$ with $\\xi=(x,k)$ be the phase space translation\noperator, so that $\\{W_\\xi\\}$ is a projective representation of the\nphase space $\\Xi=X\\oplus X^*$. Fix some classical kinetic energy\nfunction $h$ as in the statement of Proposition \\ref{pr:cop} and let\nthe classical potential $v:X\\to\\mathbb{R}$ be a bounded uniformly\ncontinuous function. Then the quantum Hamiltonian will be\n$H=h(P)+v(Q)\\equiv K+V$. Since the origins in the configuration and\nmomentum spaces $X$ and $X^*$ have no special physical meaning one\nmay argue \\cite{Be1,Be2} that $W_\\xi H W^*_\\xi=h(P-k)+v(Q+x)$ is a\nHamiltonian as good as $H$ for the description of the evolution of\nthe system. It is not clear to us whether the algebra generated by\nsuch Hamiltonians (with $h$ and $v$ fixed) is in a natural way a\ncrossed product. On the other hand, it is natural to say that the\ncoupling constant in front of the potential is also a variable of\nthe system and so the Hamiltonians $H_\\lambda=K+\\lambda V$ with any\nreal $\\lambda$ are as relevant as $H$. Then we may apply Proposition\n\\ref{pr:cop} with $\\mathcal{V}$ equal to the set of operators of the form\n$\\lambda v(Q+x)$. Thus:\n\n\n\\begin{corollary}\\label{co:cop1}\nLet $v\\in\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ real and let $\\mathcal{A}$ be the $C^*$-subalgebra of\n$\\cc_{\\mathrm{b}}^{\\mathrm{u}}(X)$ generated by the translates of $v$. Let $h:X^*\\to\\mathbb{R}$ be\ncontinuous, not $p$-periodic if $p\\neq0$, and such that\n$|h(k)|\\to\\infty$ as $k\\to\\infty$. Then the $C^*$-algebra generated\nby the self-adjoint operators of the form $W_\\xi H_\\lambda W^*_\\xi$\nwith $\\xi\\in\\Xi$ and real $\\lambda$ is the crossed product\n$\\mathcal{A}\\rtimes X$.\n\\end{corollary}\n\nNow let $\\mathcal{T}$ be a set of closed subgroups of $X$ such that the\nsemilattice $\\mathcal{S}$ generated by it (i.e. the set of finite\nintersections of elements of $\\mathcal{T}$) consists of pairwise compatible\nsubgroups. Set $\\mathcal{C}_X(\\mathcal{S})=\\sum^\\mathrm{c}_{Y\\in\\mathcal{S}} \\mathcal{C}_X(Y)$. From\n\\eqref{eq:reg1} it follows that this is the $C^*$-algebra generated\nby $\\sum_{Y\\in\\mathcal{T}} \\mathcal{C}_X(Y)$.\n\n\\begin{corollary}\\label{co:cop2}\nLet $h$ be as in Corollary \\ref{co:cop1}. 
Then the $C^*$-algebra\ngenerated by the self-adjoint operators of the form $h(P+k)+v(Q)$\nwith $k\\in X^*$ and $v\\in\\sum_{Y\\in\\mathcal{T}} \\mathcal{C}_X(Y)$ is the\ncrossed product $\\mathcal{C}_X(\\mathcal{S})\\rtimes X$.\n\\end{corollary}\n\n\n\\begin{remark}\\label{re:cop}\nProposition \\ref{pr:cop} and Corollaries \\ref{co:cop1} and\n\\ref{co:cop2} remain true and are easier to prove if we consider the\n$C^*$-algebra generated by the operators $h(P)+V$ with all\n$h:X^*\\to\\mathbb{R}$ continuous and such that $|h(k)|\\to\\infty$ as\n$k\\to\\infty$. If in Proposition \\ref{pr:cop} we take $\\mathcal{H}=L^2(X;E)$\nwith $E$ a finite dimensional Hilbert space (describing the spin\ndegrees of freedom) then the operators $H_0=h(P)$ with $h:X\\to L(E)$\na continuous symmetric operator valued function such that\n$\\|(h(k)+i)^{-1}\\|\\to 0$ as $k\\to\\infty$ are affiliated to $\\mathscr{A}$\nhence also their perturbations $H_0+V$ where $V$ satisfies the\ncriteria from \\cite{DG3}, for example.\n\\end{remark}\n\n\n\\subsection{}\n\\label{ss:anbody} \n\nWe consider the framework of \\S\\ref{ss:cexample} and use Corollary\n\\ref{co:cop2} to prove that the Hamiltonian algebra of a\nnonrelativistic $N$-body system is generated in a natural way by the\noperators of the form \\eqref{eq:nonrel}. To state a precise result\nit suffices to consider the reduced Hamiltonians (for which we keep\nthe notation $H$).\n\nLet $\\mathfrak{S}_2$ be the set of cluster decompositions which\ncontain only one nontrivial cluster which consists of exactly two\nelements. This cluster is of the form $\\{j,k\\}$ for a unique pair of\nnumbers $1\\leq j < k \\leq N$ and we denote by $(jk)$ the\ncorresponding cluster decomposition. The map $x\\mapsto x_j-x_k$\nsends $X$ onto $\\mathbb{R}^d$ and has $X_{(jk)}$ as kernel hence\n$V_{jk}(x_j-x_k)=V_{(jk)}\\circ\\pi_{(jk)}(x)$ where\n$V_{(jk)}:X\/X_{(jk)}\\to\\mathbb{R}$ is continuous with compact support and\n$\\pi_{(jk)}:X\\to X\/X_{(jk)}$ is the canonical surjection.\n\nThus the reduced Hamiltonians corresponding to \\eqref{eq:nonrel} are\nthe operators on $\\mathcal{H}_X$ of the form\n\\begin{equation}\\label{eq:nonre}\n\\Delta_X+\n{\\textstyle{\\sum_{\\sigma\\in\\mathfrak{S}_2}}} V_\\sigma\\circ\\pi_\\sigma\n\\hspace{2mm}\\text{with}\\hspace{2mm} V_\\sigma:X\/X_\\sigma\\to\\mathbb{R}\n\\text{ continuous with compact support}.\n\\end{equation} \nThese operators must be affiliated to the Hamiltonian algebra of the\n$N$-body system. On the other hand, if a Hamiltonian $h(P)+V$ is\nconsidered as physically admissible then $h(P+k)+V$ should be\nadmissible too because the zero momentum $k=0$ should not play a\nspecial role. In other terms, translations in momentum space should\nleave invariant the set of admissible Hamiltonians. Hence it is\nnatural to consider \\emph{the smallest $C^*$-algebra $\\mathscr{C}_X(\\mathcal{S})$\n such that the operators \\eqref{eq:nonre} are affiliated to it and\n which is stable under translations in momentum space. 
But this\n algebra is exactly the crossed product}\n\\[\n\\mathscr{C}_X=\\mathcal{C}_X\\rtimes X=\\mathcal{C}_X\\cdot \\mathscr{T}_X \n\\hspace{2mm}\\text{with}\\hspace{2mm}\n\\mathcal{C}_X={\\textstyle\\sum_\\sigma}\\mathscr{C}_X(X_\\sigma).\n\\]\nIndeed, it is clear that the semilattice generated by\n$\\mathfrak{S}_2$ is $\\mathfrak{S}$ so we may apply Corollary\n\\ref{co:cop2}.\n\n\n\\subsection{}\n\\label{ss:amotiv} \n\nHere we prove Theorem \\ref{th:motiv}.\n\nLet $\\mathscr{C}'$ be the $C^*$-algebra generated by the operators of the\nform $(z-K-\\phi)^{-1}$ where $z$ is a not real number, $K$ is a\nstandard kinetic energy operator, and $\\phi$ is a symmetric field\noperator. With the notation \\eqref{eq:d} we easily get\n$\\mathscr{T}_\\text{d}\\subset\\mathscr{C}'$. If $\\lambda\\in\\mathbb{R}$ then $\\lambda\\phi$ is\nalso a field operator so $(z-K-\\lambda\\phi)^{-1}\\in\\mathscr{C}'$. By taking\nthe derivative with respect to $\\lambda$ at $\\lambda=0$ of this\noperator we get $(z-K)^{-1}\\phi (z-K)^{-1}\\in\\mathscr{C}$. Since\n$(z-K)^{-1}=\\oplus_X(z-h_X(P))^{-1}$ (recall that $P$ is the\nmomentum observable independently of the group $X$) and since\n$\\mathscr{T}_\\text{d}\\subset\\mathscr{C}'$ we get $S\\phi(\\theta) T\\in\\mathscr{C}'$ for all\n$S,T\\in \\mathscr{T}_\\text{d}$ and $\\theta=(\\theta_{XY})_{X\\supset Y}$.\n\nLet $\\mathscr{C}'_{XY}=\\Pi_X\\mathscr{C}'\\Pi_Y\\subset\\mathscr{L}_{XY}$ be the components of\nthe algebra $\\mathscr{C}'$ and let us fix $X\\supset Y$. Then we get\n$\\varphi(P) a^*(u) \\psi(P)\\in\\mathscr{C}'_{XY}$ for all\n$\\varphi\\in\\cc_{\\mathrm{o}}(X^*)$, $\\psi\\in\\cc_{\\mathrm{o}}(Y^*)$, and $u\\in\\mathcal{H}_{X\/Y}$. The\nclspan of the operators $a^*(u) \\psi(P)$ is $\\mathscr{T}_{XY}$, see\nProposition \\ref{pr:def3} and the comments after \\eqref{eq:L2a}, and\nfrom \\eqref{eq:cyzc} we have $\\mathscr{T}_X\\cdot\\mathscr{T}_{XY}=\\mathscr{T}_{XY}$. Thus\nthe clspan of the operators $\\varphi(P) a^*(u) \\psi(P)$ is\n$\\mathscr{T}_{XY}$ for each $X\\supset Y$ and then we get\n$\\mathscr{T}_{XY}\\subset\\mathscr{C}'_{XY}$. By taking adjoints we get\n$\\mathscr{T}_{XY}\\subset\\mathscr{C}'_{XY}$ if $X\\sim Y$.\n\nNow recall that the subspace $\\mathscr{T}^\\circ\\subset L(\\mathcal{H})$ defined by\n$\\mathscr{T}^\\circ_{XY}=\\mathscr{T}_{XY}$ if $X\\sim Y$ and $\\mathscr{T}^\\circ=\\{0\\}$ if\n$X\\not\\sim Y$ is a closed self-adjoint linear subspace of $\\mathscr{T}$ and\nthat $\\mathscr{T}^\\circ\\cdot\\mathscr{T}^\\circ=\\mathscr{C}$, cf. Theorem \\ref{th:tc}. By\nwhat we proved before we have $\\mathscr{T}^\\circ\\subset\\mathscr{C}'$ hence\n$\\mathscr{C}\\subset\\mathscr{C}'$. The converse inclusions is easy to prove. This\nfinishes the proof of Theorem \\ref{th:motiv}. \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{S1}\n\\setcounter{equation}{0}\nThe Laguerre unitary ensemble ($ {\\rm LUE}_N $) refers to the eigenvalue probability\ndensity function (p.d.f.)\n\\begin{equation}\n \\frac{1}{C_{N,a}}\n \\prod^{N}_{l=1}\\lambda_l^ae^{-\\lambda_l}\\prod_{1 \\leq j -1 $ and $ \\Re(a) > -1 $ for \n$ s>0 $ with the additional constraint $ \\Re(\\mu+a) > -1 $ at $ s=0 $.\n\nWe seek the leading terms in the small $ s$ expansion of (\\ref{LUE_Hsym.3}). \nThese can be read off from an explicit evaluation in terms of the $ {}_1F_1 $\nconfluent hypergeometric function \\cite{WW_1965}. 
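Explicitly, here and below\n\begin{equation*}\n {}_1F_1(\gamma;\alpha;s) = \sum^{\infty}_{k=0}\frac{(\gamma)_k}{(\alpha)_k}\frac{s^k}{k!} ,\n \qquad (\gamma)_k=\gamma(\gamma+1)\cdots(\gamma+k-1) ,\n\end{equation*}\nwhich is an entire function of $ s $ whenever $ \alpha\notin\mathbb Z_{\leq 0} $.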
\n\\begin{proposition}\nSubject to the conditions $ \\Re(\\mu) > -1 $, $ \\Re(a) > -1 $,\n$ \\Re(\\mu+a) > -1 $ and $ \\mu+a\\notin \\mathbb Z_{\\geq 0} $ we have\n\\begin{equation}\n w_{n} = a_n(s) + s^{n+\\mu+a+1}b_n(s) ,\n\\label{LUE_Hsym.4}\n\\end{equation}\nwhere $ a_n(s), b_n(s) $ are analytic about $ s=0 $ and given explicitly by\n\\begin{equation}\n\\begin{split}\n a_n(s) &=\n \\Gamma(\\mu+n+a+1)e^{-s}{}_1F_1(-a-n;-\\mu-a-n;s) ,\n \\\\\n b_n(s) &=\n \\frac{\\Gamma(\\mu+1)\\Gamma(n+a+1)}{\\Gamma(\\mu+n+a+2)}\n \\left( (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(\\mu+a)} \\right)\n \\\\\n & \\qquad\\times\n e^{-s}{}_1F_1(\\mu+1;\\mu+a+n+2;s) .\n\\end{split}\n\\label{Hsym_1F1}\n\\end{equation}\nIn particular, under the above conditions,\n\\begin{equation}\n w_{n} \\mathop{\\sim}\\limits_{s \\to 0} a_n(0)+sa'_n(0)+s^{n+\\mu+a+1}b_n(0) ,\n\\label{LUE_Hsym.5}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{split}\n a_n(0) &= \\Gamma(\\mu+n+a+1) ,\n \\\\\n a'_n(0) &= -\\mu\\Gamma(\\mu+n+a) ,\n \\\\\n b_n(0) &=\n \\frac{\\Gamma(\\mu+1)\\Gamma(n+a+1)}{\\Gamma(\\mu+n+a+2)}\n \\left( (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(\\mu+a)} \\right) .\n\\end{split}\n\\label{Hsym_1F1_exp}\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nThe results (\\ref{LUE_Hsym.5}) and (\\ref{Hsym_1F1_exp}) are immediate corollaries\nof (\\ref{LUE_Hsym.4}) and (\\ref{Hsym_1F1}) and the fact that\n\\begin{equation*}\n {}_1F_1(\\gamma;\\alpha;s) = 1+\\frac{\\gamma}{\\alpha}s+{\\rm O}(s^2) .\n\\end{equation*}\nTo derive (\\ref{LUE_Hsym.4}), we note that simple manipulation shows\n\\begin{equation*}\n \\int^{\\infty}_s d\\lambda\\; (\\lambda-s)^{\\mu}\\lambda^{n+a}e^{-\\lambda}\n = s^{a+n}e^{-s}\\int^{\\infty}_0 d\\lambda\\; (1+\\lambda\/s)^{n+a}\\lambda^{\\mu}e^{-\\lambda} .\n\\end{equation*}\nBut with \n\\begin{equation*}\n W_{k,m}(z) = \\frac{z^ke^{-z\/2}}{\\Gamma(1\/2-k+m)}\n \\int^{\\infty}_0 dt\\; (1+t\/z)^{k-1\/2+m}t^{-k-1\/2+m}e^{-t} ,\n\\end{equation*}\nspecifying the Whittaker function, it is known that \\cite{WW_1965}\n\\begin{equation*}\n W_{k,m}(z)\n = \\frac{\\Gamma(-2m)}{\\Gamma(1\/2-k-m)}M_{k,m}(z)+\\frac{\\Gamma(2m)}{\\Gamma(1\/2-k+m)}M_{k,-m}(z)\n\\end{equation*}\nwhere\n\\begin{equation*}\n M_{k,m}(z) = z^{m+1\/2}e^{-z\/2}{}_1F_1(1\/2-k+m;2m+1;z) .\n\\end{equation*}\nConsequently\n\\begin{multline}\n \\int^{\\infty}_s d\\lambda\\; (\\lambda-s)^{\\mu}\\lambda^{n+a}e^{-\\lambda}\n = \\Gamma(\\mu+a+n+1)e^{-s}{}_1F_1(-a-n;-\\mu-a-n;s) \n \\\\\n + \\frac{\\Gamma(\\mu+1)\\Gamma(-\\mu-a-n-1)}{\\Gamma(-a-n)}\n s^{\\mu+a+n+1}e^{-s}{}_1F_1(\\mu+1;\\mu+a+n+2;s) .\n\\label{ws6}\n\\end{multline}\nThe left-hand side of (\\ref{ws6}) exists for $ \\Re(\\mu)>-1 $ if $ s>0 $ and\n$ \\Re(\\mu+a)>-1 $ if $ s=0 $, whereas the right-hand side is valid in \nthis parameter domain except for $ \\mu+a+n\\in\\mathbb Z_{\\geq 0} $, and in this case the individual\nterms have a simple pole at $ a+n\\notin\\mathbb Z_{\\geq 0} $ or are undefined when $ a+n\\in\\mathbb Z_{\\geq 0} $. 
\nNeedless to say the sum of the terms on the right-hand side has the same analytic\ncharacter as the left-hand side.\n\nRegarding the second integral in (\\ref{LUE_Hsym.3}), we first note that a simple \nchange of variables gives\n\\begin{equation*}\n \\int^{s}_0 d\\lambda\\; (s-\\lambda)^{\\mu}\\lambda^{n+a}e^{-\\lambda}\n = s^{n+1+a+\\mu}e^{-s}\\int^{1}_0 dx\\; (1-x)^{n+a}x^{\\mu}e^{sx} .\n\\end{equation*}\nBut the integral on the right hand side is the Euler integral representation of\nthe $ {}_1F_1 $ function, which shows\n\\begin{multline}\n \\int^{s}_0 d\\lambda\\; (s-\\lambda)^{\\mu}\\lambda^{n+a}e^{-\\lambda} \n \\\\\n = \\frac{\\Gamma(\\mu+1)\\Gamma(a+n+1)}{\\Gamma(\\mu+a+n+2)}\n s^{\\mu+a+n+1}e^{-s}{}_1F_1(\\mu+1;\\mu+a+n+2;s) .\n\\label{ws7}\n\\end{multline}\nThis latter relation is valid for $ \\Re(\\mu)>-1 $ and $ \\Re(a)>-1 $ when\n$ s>0 $.\nSubstituting (\\ref{ws6}) and (\\ref{ws7}) in (\\ref{LUE_Hsym.3}) and using the \nappropriate gamma function identities gives (\\ref{LUE_Hsym.4}), (\\ref{Hsym_1F1}).\n\\end{proof}\n\nWhen $ \\mu+a \\in\\mathbb Z_{\\geq 0} $ we have to consider two exceptional cases where one of the \nhypergeometric functions are not defined - the first when $ a+n \\in\\mathbb Z_{\\geq 0} $ \nfor which the hypergeometric function is indeterminate, and the second when\n$ a+n \\notin\\mathbb Z_{\\geq 0} $ and the hypergeometric function has a simple pole.\nThese two cases can be recovered by taking suitable limits and we just state the\nfinal results.\n\n\\begin{proposition}\nWhen $ \\mu+a=j \\in\\mathbb Z_{\\geq 0} $ and $ a+n=k \\in\\mathbb Z_{\\geq 0} $ with $ n+j\\geq k $ \nwe have\n\\begin{multline}\n w_n = k!e^{-s}\\Bigg\\{ \\sum^k_{l=0}\\frac{(n+j-l)!}{(k-l)!l!}s^l\n \\\\\n + (-1)^{n+j+k}(1-\\xi)\\frac{(n+j-k)!}{(n+j+1)!}s^{n+j+1}\n {}_1F_1(n+j+1-k;n+j+2;s) \\Bigg\\} ,\n\\label{Hsym_indeterm}\n\\end{multline}\nand to leading order in small $ s $ we have\n\\begin{equation}\n w_{n} \\mathop{\\sim}\\limits_{s \\to 0} \n (n+j)! - (n+j-k)(n+j-1)!s + (-1)^{n+j+k}(1-\\xi)\\frac{(n+j-k)!k!}{(n+j+1)!}s^{n+j+1} .\n\\label{Exp_indeterm}\n\\end{equation}\n\\end{proposition}\nNote that the condition $ n+j\\geq k $ is the same as $ \\mu \\geq 0 $, which falls\nwithin the domain of interest. 
The key difference of (\\ref{Exp_indeterm}) with\n(\\ref{LUE_Hsym.5}) and (\\ref{Hsym_1F1_exp}) is that the non-analytic term is\nnow polynomial and the second part of this term is absent having been cancelled by\na counterbalancing term.\n\n\\begin{proposition}\nWhen $ \\mu+a=j \\in\\mathbb Z_{\\geq 0} $ and $ a+n \\notin\\mathbb Z_{\\geq 0} $ we have \n\\begin{multline}\n w_n = e^{-s}\\Bigg\\{ \\sum^{n+j}_{l=0}\\frac{(-a-n)_l(n+j-l)!}{l!}(-s)^l \n \\\\\n +\\frac{\\Gamma(\\mu+1)\\Gamma(a+n+1)}{(n+j+1)!}(1-\\xi)e^{i\\pi\\mu}s^{n+j+1}\n {}_1F_1(\\mu+1;n+j+2;s)\n \\\\\n + (-1)^j\\frac{\\sin\\pi a}{\\pi}\\frac{\\Gamma(\\mu+1)\\Gamma(a+n+1)}{(n+j+1)!}s^{n+j+1}\n \\\\\n \\times\n \\sum^{\\infty}_{l=0}\\left[\\psi(l+1)+\\psi(n+j+l+2)-\\psi(\\mu+l+1)-\\log s \\right]\n \\frac{(\\mu+1)_l}{(n+j+2)_l}\\frac{s^l}{l!} \\Bigg\\} ,\n\\label{Hsym_pole}\n\\end{multline}\nand its leading order behaviour for small $ s $ is\n\\begin{multline}\n w_{n} \\mathop{\\sim}\\limits_{s \\to 0} \n (n+j)!+(a-j)(n+j-1)!s \n \\\\\n +\\frac{(a-j)_{n+j+1}}{(n+j+1)!}s^{n+j+1}\n \\left[\\frac{\\pi e^{-i\\pi a}}{\\sin\\pi a}(1-\\xi)+\\psi(1)+\\psi(n+j+2)-\\psi(\\mu+1)-\\log s \\right] .\n\\label{Exp_pole}\n\\end{multline}\n\\end{proposition}\nThe expansion (\\ref{Exp_pole}) differs significantly from (\\ref{LUE_Hsym.5}) and \n(\\ref{Hsym_1F1_exp}) because of the presence of logarithmic terms which now \nreplace the non-analytic contributions of the generic case.\n\n\\begin{corollary}\nUnder generic conditions on $ \\mu+a $ we have\n\\begin{multline}\n \\det[w_{j+k}]_{j,k=0,\\ldots,N-1} \n \\\\\n = \\det[\\Gamma(\\mu+a+1+j+k)]_{j,k=0,\\ldots,N-1}\n \\\\\n - \\mu s\\det[\\Gamma(\\mu+a+j) \\;\\; \\Gamma(\\mu+a+1+j+k)]_{{j=0,\\ldots,N-1}\\atop\n {k=1,\\ldots,N-1}}\n + {\\rm O}(s^2)\n \\\\\n + s^{\\mu+a+1}b_0(0)\\det[\\Gamma(\\mu+a+3+j+k)]_{j,k=0,\\ldots,N-2}\n \\left\\{1+{\\rm O}(s)\\right\\}\n \\\\\n + {\\rm O}(s^{2(\\mu+a+1)}) .\n\\label{Hdet_exp}\n\\end{multline}\n\\end{corollary}\n\\begin{proof}\nAccording to (\\ref{LUE_Hsym.5})\n\\begin{multline*}\n \\det[w_{j+k}]_{j,k=0,\\ldots,N-1} \n \\\\\n \\mathop{\\sim}\\limits_{s \\to 0}\n \\det[a_{j+k}(0)+sa'_{j+k}(0)+s^{\\mu+a+1+j+k}b_{j+k}(0)]_{j,k=0,\\ldots,N-1}\n \\\\\n \\mathop{\\sim}\\limits_{s \\to 0}\n \\det[a_{j+k}(0)]_{j,k=0,\\ldots,N-1}+s[s]\\det[a_{j+k}(0)+sa'_{j+k}(0)]_{j,k=0,\\ldots,N-1}\n \\\\\n + s^{\\mu+a+1}b_0(0)\\det[a_{j+k+2}(0)]_{j,k=0,\\ldots,N-2} ,\n\\end{multline*}\nwhere $ [s]P(s) $ denotes the coefficient of $ s $ in $ P(s) $. 
Recalling the \nexplicit formula for $ a_n(0) $ as given in (\\ref{Hsym_1F1_exp}) we obtain the \nconstant term and the term proportional to $ s^{\\mu+a+1} $ in (\\ref{Hdet_exp}).\nIt remains to compute the coefficient of $ s $, which according to \n(\\ref{Hsym_1F1_exp}) has the explicit form\n\\begin{equation}\n [s]\\det[\\Gamma(\\mu+a+1+j+k)-\\mu s\\Gamma(\\mu+a+j+k)]_{j,k=0,\\ldots,N-1} .\n\\label{ws9}\n\\end{equation}\nUsing the linearity formula\n\\begin{equation*}\n \\det[{\\bf a}_1 \\cdots {\\bf a}_j+{\\bf b}_j \\cdots {\\bf a}_n]\n = \\det[{\\bf a}_1 \\cdots {\\bf a}_j \\cdots {\\bf a}_n]\n +\\det[{\\bf a}_1 \\cdots {\\bf b}_j \\cdots {\\bf a}_n] ,\n\\end{equation*}\nwhere the $ \\bf{a} $'s and $ \\bf{b} $'s are column vectors, on each column of\nthe determinant we see that of the terms proportional to $ s $ only the one\nobtained from expanding the first column in non-zero (all the rest result in \ntwo identical columns), and the determinant given by (\\ref{Hdet_exp}) results.\n\\end{proof}\n\nIt remains to evaluate the determinants. For this task we make use of the identity\n\\cite{Nd_2004}\n\\begin{equation*}\n \\det[\\Gamma(z_k+j)]_{j,k=0,\\ldots,n-1}\n = \\prod^{n-1}_{j=0}\\Gamma(z_j)\\prod_{0\\leq j-1 $, $ \\Re(a)>-1 $ and $ \\mu+a\\notin\\mathbb Z_{\\geq 0} $ we have \n\\begin{multline}\n \\tilde{E}_{2,N}((0,s);a,\\mu;\\xi) = 1-\\frac{\\mu N}{\\mu+a}s+{\\rm O}(s^2)\n \\\\\n +\\frac{\\Gamma(\\mu+1)\\Gamma(a+1)\\Gamma(\\mu+a+N+1)}{\\Gamma^2(\\mu+a+2)\\Gamma(\\mu+a+1)\\Gamma(N)}\n \\left( (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(\\mu+a)} \\right) \n s^{\\mu+a+1}\\left\\{1+{\\rm O}(s)\\right\\}\n \\\\ +{\\rm O}(s^{2(\\mu+a+1)}) ,\n\\label{LUE_Eexp}\n\\end{multline}\nand consequently\n\\begin{multline}\n W_{N}(s;a,\\mu;\\xi) = -N\\mu-\\frac{\\mu N}{\\mu+a}s+{\\rm O}(s^2)\n \\\\\n +\\frac{\\Gamma(\\mu+1)\\Gamma(a+1)\\Gamma(\\mu+a+N+1)}{\\Gamma(\\mu+a+2)\\Gamma^2(\\mu+a+1)\\Gamma(N)}\n \\left( (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(\\mu+a)} \\right) \n s^{\\mu+a+1}\\left\\{1+{\\rm O}(s)\\right\\}\n \\\\ +{\\rm O}(s^{2(\\mu+a+1)}) .\n\\label{LUE_Wexp}\n\\end{multline}\n\\end{proposition}\nIn the first exceptional case $ \\mu+a=j \\in\\mathbb Z_{\\geq 0} $ and $ a=k \\in\\mathbb Z_{\\geq 0} $ \nwith $ j\\geq k $ one can still use (\\ref{LUE_Eexp}) but omitting the term involving\nthe ratio of sines, in the case $ j=0 $, or the whole term if $ j>0 $.\nThe situation of the other exceptional case $ \\mu+a=j \\in\\mathbb Z_{\\geq 0} $ and \n$ a \\notin\\mathbb Z_{\\geq 0} $ is more complicated and more so for larger $ j $, and we \nonly give the examples of $ j=0,1 $.\n\\begin{proposition}\nFor $ \\Re(\\mu)>-1 $, $ \\Re(a)>-1 $ with $ \\mu+a=0 $ we have \n\\begin{multline}\n\\tilde{E}_{2,N}((0,s);a,\\mu=-a;\\xi) = 1\n \\\\\n +\\Big\\{-1 + \\frac{\\pi a}{\\sin\\pi a}e^{-i\\pi a}(1-\\xi)\n +a\\left[2\\psi(2)+\\psi(1)-\\psi(1-a)-\\psi(N+1)-\\log s \\right] \\Big\\}Ns\n \\\\ + {\\rm o}(s) .\n\\label{LUE_Eexp_pole0}\n\\end{multline}\nFor $ \\mu+a=1 $ we have\n\\begin{multline}\n\\tilde{E}_{2,N}((0,s);a,\\mu=1-a;\\xi) = 1+(a-1)Ns\n \\\\\n +\\frac{a(a-1)}{4}\\Big\\{\\frac{\\pi}{\\sin\\pi a}e^{-i\\pi a}(1-\\xi)\n +2\\psi(3)+\\psi(2)-\\psi(2-a)-\\psi(N+2)-\\log s \\Big\\}(N+1)Ns^2\n \\\\ + {\\rm o}(s^2) .\n\\label{LUE_Eexp_pole1}\n\\end{multline}\n\\end{proposition}\n\n\\section{Comparison with the Jimbo solution}\\label{S3}\n\\setcounter{equation}{0}\nThe small $ s$ expansion of the most general solution permitted by (\\ref{PV_sigma}),\nor more precisely its 
corresponding $ \\tau $-function (see (\\ref{PV_tau}) below) has \nbeen determined by Jimbo \\cite{Ji_1982}. However in \\cite{Ji_1982} the equation\n(\\ref{PV_sigma}) is not treated directly. Instead the discussion is based on\nthe equation\n\\begin{multline}\n (s\\zeta'')^2 - \\left[ \\zeta - s\\zeta' + 2(\\zeta')^2 -\n (2\\theta_0+\\theta_{\\infty}) \\zeta' \\right]^2 \\\\\n + 4\\zeta'(\\zeta'-\\theta_0)(\\zeta'-\\frac{1}{2}(\\theta_0-\\theta_s+\\theta_{\\infty}))\n (\\zeta'-\\frac{1}{2}(\\theta_0+\\theta_s+\\theta_{\\infty}))\n = 0 ,\n\\label{PV_zeta}\n\\end{multline}\nand the small $ s $ behaviour of the corresponding $ \\tau $-function $ \\tau_V(s) $,\nspecified by the the requirement that\n\\begin{equation}\n \\zeta(s) = s\\frac{d}{ds}\\log\\tau_V(s)+\\frac{1}{2}(\\theta_0+\\theta_{\\infty})s\n +\\frac{1}{4}[(\\theta_0+\\theta_{\\infty})^2-\\theta_s^2] , \n\\label{PV_tau}\n\\end{equation}\nwas determined.\n\nComparison of (\\ref{PV_zeta}), (\\ref{PV_tau}) with (\\ref{PV_sigma}), \n(\\ref{LUE_sigma}) shows that for the parameters (\\ref{LUE_Vparam})\n\\begin{equation}\n \\tilde{E}_{2,N}((0,s);a,\\mu;\\xi) = s^{N^2+N(a+\\mu)}e^{-(N+a\/2)s}\\tau_V(s) ,\n\\label{LUE_Vtau}\n\\end{equation}\nwhile in general\n\\begin{equation}\n \\theta_0 = -\\nu_1, \\quad \\theta_s = \\nu_2-\\nu_3, \n \\quad \\theta_{\\infty} = \\nu_1-\\nu_2-\\nu_3 .\n\\label{Vnu_theta}\n\\end{equation}\nNote that for the parameters (\\ref{LUE_Vparam}) we thus thus have\n\\begin{equation}\n \\theta_0 = \\mu, \\quad \\theta_s = a, \\quad \\theta_{\\infty} = -2N-a-\\mu .\n\\label{LUE_theta}\n\\end{equation}\n\nThe relevant result from \\cite{Ji_1982} can now be presented. It states that the\nmost general small $ s $ behaviour of $ \\tau_V(s) $ permitted by the equation\n(\\ref{PV_zeta}) is\n\\begin{multline}\n \\tau_V(s) = Cs^{(\\sigma^2-\\theta^2_{\\infty})\/4}\n \\Bigg\\{ 1\n - \\frac{\\theta_{\\infty}(\\theta^2_s-\\theta^2_0+\\sigma^2)}\n {4\\sigma^2}s \\\\\n + u\n \\frac{\\Gamma^2(-\\sigma)}{\\Gamma^2(2+\\sigma)}\n \\frac{\\Gamma(1+\\frac{\\displaystyle\\theta_s+\\theta_0+\\sigma}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle\\theta_s-\\theta_0+\\sigma}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle\\theta_{\\infty}+\\sigma}{\\displaystyle 2})}\n {\\Gamma(\\frac{\\displaystyle\\theta_s+\\theta_0-\\sigma}{\\displaystyle 2})\n \\Gamma(\\frac{\\displaystyle\\theta_s-\\theta_0-\\sigma}{\\displaystyle 2})\n \\Gamma(\\frac{\\displaystyle\\theta_{\\infty}-\\sigma}{\\displaystyle 2})}\n s^{1+\\sigma}\n \\\\\n + \\frac{1}{u}\n \\frac{\\Gamma^2(\\sigma)}{\\Gamma^2(2-\\sigma)}\n \\frac{\\Gamma(1+\\frac{\\displaystyle\\theta_s+\\theta_0-\\sigma}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle\\theta_s-\\theta_0-\\sigma}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle\\theta_{\\infty}-\\sigma}{\\displaystyle 2})}\n {\\Gamma(\\frac{\\displaystyle\\theta_s+\\theta_0+\\sigma}{\\displaystyle 2})\n \\Gamma(\\frac{\\displaystyle\\theta_s-\\theta_0+\\sigma}{\\displaystyle 2})\n \\Gamma(\\frac{\\displaystyle\\theta_{\\infty}+\\sigma}{\\displaystyle 2})}\n s^{1-\\sigma}\n \\\\\n + {\\rm O}(|s|^{2(1-\\Re(\\sigma))}) \\Bigg\\} ,\n\\label{PV_sExp_0}\n\\end{multline}\nwhere $ C $ is a normalisation constant, while $ u $ and $ \\sigma $ are arbitrary\nparameters. 
The above result was derived subject to the conditions \n$ \\theta_0, \\theta_s \\notin\\mathbb Z $,\n$ \\frac{1}{2}(\\theta_{\\infty}\\pm\\sigma) \\notin\\mathbb Z $,\n$ \\frac{1}{2}(\\theta_{s}\\pm\\theta_0\\pm\\sigma) \\notin\\mathbb Z $ and\nthat $ 0 < \\Re(\\sigma) < 1 $ (a distinct solution was presented for $ \\sigma=0 $).\nThese conditions therefore strictly apply only to the generic or transcendental solutions of\nthe fifth Painlev\\'e equation. \nFor generic parameter values the terms given in (\\ref{PV_sExp_0}) uniquely \nspecify all the subsequent terms in the convergent Puisuex-type expansion for \n$ \\zeta(s) $ about $ s=0 $\n\\begin{equation}\n \\zeta(s) = \n \\sum^{\\infty}_{j=0}\\sum_{|k|\\leq j}c_{j,k}s^{j+k\\sigma} ,\n\\label{PV_puisuex}\n\\end{equation}\ni.e. with any two of $ c_{1,0},c_{1,1} $ or $ c_{1,-1} $ given. \n\nTo relate this to $ \\tilde{E}_{2,N} $, we see from (\\ref{LUE_Vtau}) and \n(\\ref{LUE_theta}) that we require $ \\sigma^2=(a+\\mu)^2 $ and thus we can choose\n\\begin{equation}\n \\sigma=a+\\mu .\n\\label{LUE_const}\n\\end{equation}\nThis relation, $ \\sigma=\\theta_0+\\theta_{s} $, is a violation of one of the strict \nconditions given above and is in fact a sufficient condition for a classical solution,\nalong with the necessary condition $ \\theta_0+\\theta_s+\\theta_{\\infty}=-2N\\in\\mathbb Z $,\nwhich is the type of solution that we are dealing with here. \nHowever we conjecture that Jimbo's conditions\ncan be relaxed to accommodate such solutions and the corresponding formulae \n(or limiting forms if necessary) still hold. \nWith this choice of $ \\sigma $ the coefficient of $ s^{1-\\sigma} $ in (\\ref{PV_sExp_0})\ncontains a factor of\n\\begin{equation*}\n \\frac{1}{\\Gamma(\\frac{\\displaystyle\\theta_{\\infty}+\\sigma}{\\displaystyle 2})}\n = \\frac{1}{\\Gamma(-N)}\n\\end{equation*}\nand thus vanishes. Simplifying the other terms gives\n\\begin{multline*}\n \\tau_V(s) \\sim Cs^{-N^2-N(a+\\mu)}\n \\Bigg\\{ 1 + \\frac{(2N+a+\\mu)a}{2(a+\\mu)}s \\\\\n + u \\frac{\\sin\\pi\\mu}{\\sin\\pi(a+\\mu)}\n \\frac{\\Gamma(a+1)\\Gamma(\\mu+1)\\Gamma(N+1+a+\\mu)}\n {\\Gamma^2(2+a+\\mu)\\Gamma(1+a+\\mu)\\Gamma(N)}\n s^{1+a+\\mu} \\Bigg\\} .\n\\end{multline*}\nSubstituting in (\\ref{LUE_Vtau}) we see that this is in precise agreement with \n(\\ref{LUE_Eexp}) provided we choose\n\\begin{equation}\n u\\frac{\\sin\\pi\\mu}{\\sin\\pi(a+\\mu)} = (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(a+\\mu)}\n\\label{LUE_u}\n\\end{equation}\n\n\\section{The hard edge limit}\\label{S4}\n\\setcounter{equation}{0}\nThe hard edge limit is defined by (\\ref{HE_Edefn}). However, only in the cases\n$ \\mu=0 $, $ \\mu=2 $ do we know how to prove its existence for general $ \\xi $\n(in the case $ \\mu=0 $ $ \\tilde{E}_{2,N} $ can be written as a Fredholm determinant,\nwhile the case $ \\mu=2 $ is related to this via differentiation). However a log-gas\nviewpoint (\\cite{rmt_Fo}) indicates that the limit will be well defined, and\nmoreover we expect that it can be taken term-by-term in the small $ s $ expansion\nof $ \\tilde{E}_{2,N} $. In this section we will show that taking the hard edge \nlimit of the small $ s $ expansion (\\ref{LUE_Eexp}) give rise to an initial \ncondition for the solution of (\\ref{PIII_sigma}) consistent with that allowed\nby Jimbo's theory of the small $ s $ expansion of the Painlev\\'e ${\\rm III^{\\prime}}\\;$ equation. 
From a \npractical perspective this specifies $ \\tilde{E}^{\\rm hard}_{2} $ for general\nvalues of the parameters according to (\\ref{HE_E}), while from a theoretical\nviewpoint it lends weight to the belief that (\\ref{HE_E}) is indeed the correct\nlimiting evaluation for general values of the parameters.\n\nUnder the assumption that the hard edge limit can be taken term-by-term in the\nsmall $ s $ expansion of Proposition \\ref{LUE_exp}, the following corollary is immediate.\n\n\\begin{corollary}\nFor $ \\Re(\\mu)>-1 $, $ \\Re(a)>-1 $ and $ \\mu+a\\notin\\mathbb Z_{\\geq 0} $ \nwe have\n\\begin{multline}\n \\tilde{E}^{\\rm hard}_{2}(s;a,\\mu;\\xi) = 1-\\frac{\\mu}{4(a+\\mu)}s+{\\rm O}(s^2)\n \\\\\n +\\frac{\\Gamma(\\mu+1)\\Gamma(a+1)}{\\Gamma^2(\\mu+a+2)\\Gamma(\\mu+a+1)}\n \\left( (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(\\mu+a)} \\right) \n \\left(\\frac{s}{4}\\right)^{\\mu+a+1}\\left\\{1+{\\rm O}(s)\\right\\}\n \\\\ +{\\rm O}(s^{2(\\mu+a+1)}) ,\n\\label{HE_Eexp}\n\\end{multline}\nand consequently the $ \\sigma $-function $ \\sigma_{\\rm III'}(s) $ in (\\ref{HE_E.2})\nhas the small $ s $ expansion\n\\begin{multline}\n \\sigma_{\\rm III'}(s) = -\\frac{\\mu(\\mu+a)}{2}+\\frac{\\mu}{4(\\mu+a)}s+{\\rm O}(s^2)\n \\\\\n -\\frac{\\Gamma(\\mu+1)\\Gamma(a+1)}{\\Gamma(\\mu+a+2)\\Gamma^2(\\mu+a+1)}\n \\left( (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(\\mu+a)} \\right) \n \\left(\\frac{s}{4}\\right)^{\\mu+a+1}\\left\\{1+{\\rm O}(s)\\right\\}\n \\\\ +{\\rm O}(s^{2(\\mu+a+1)}) .\n\\label{HE_Sexp}\n\\end{multline}\n\\end{corollary}\n\nSome examples of exceptional cases not covered by the preceding corollary\nare the following. They are obtained by taking the hard edge limit of\n(\\ref{LUE_Eexp_pole0}) and (\\ref{LUE_Eexp_pole1}). \n\\begin{corollary}\nFor $ \\Re(\\mu)>-1 $, $ \\Re(a)>-1 $ and $ \\mu+a=0 $ we have \n\\begin{multline}\n\\tilde{E}^{\\rm hard}_{2}(s;a,\\mu=-a;\\xi) = 1\n \\\\\n +\\Big\\{\n -1+\\frac{\\pi a}{\\sin\\pi a}e^{-\\pi ia}(1-\\xi)\n +a[2\\psi(2)+\\psi(1)-\\psi(1-a)-\\log(s\/4)] \\Big\\}\\frac{s}{4}\n \\\\ +{\\rm o}(s) ,\n\\label{HE_Eexp0}\n\\end{multline}\nwhilst for $ \\mu+a=1 $ we have\n\\begin{multline}\n\\tilde{E}^{\\rm hard}_{2}(s;a,\\mu=1-a;\\xi) = 1+(a-1)\\frac{s}{4}\n \\\\\n +\\frac{a(a-1)}{4} \\Big\\{\n \\frac{\\pi}{\\sin\\pi a}e^{-\\pi ia}(1-\\xi)\n +2\\psi(3)+\\psi(2)-\\psi(2-a)-\\log(s\/4) \\Big\\}\\left(\\frac{s}{4}\\right)^2\n \\\\ +{\\rm o}(s^2) .\n\\label{HE_Eexp1}\n\\end{multline}\n\\end{corollary}\n\nTo compare these results to the small independent variable expansions given by \nJimbo in the theory of ${\\rm III^{\\prime}}\\;$, we must first undertake some preliminary\ncalculations as the equation (\\ref{PIII_sigma}) is not directly studied in \n\\cite{Ji_1982}. Rather the equation studied is\n\\begin{equation}\n (t\\zeta'')^2 = 4\\zeta'(\\zeta' -1)(\\zeta - t\\zeta')\n + \\left( \\frac{v_1+v_2}{2}-v_1\\zeta'\\right)^2 ,\n\\label{PIII_zeta}\n\\end{equation}\nwhere we have identified $ \\theta_0=-v_2 $, $ \\theta_{\\infty}=-v_1 $ \n($ \\theta_0, \\theta_{\\infty} $ are the parameters appearing in \\cite{Ji_1982}).\nIn terms of $ \\zeta(t) $ the $ \\tau $-function $ \\tau_{\\rm III'}(t) $ is specified\nby the requirement that\n\\begin{equation}\n \\zeta(t) = t\\frac{d}{dt}\\log\\tau_{\\rm III'}(t)+\\frac{v_2^2-v_1^2}{4}+t ,\n\\label{PIII_zeta_tau}\n\\end{equation}\nand it is the small $ t $ expansion of $ \\tau_{\\rm III'}(t) $ that is presented in \n\\cite{Ji_1982}.
Comparison of (\\ref{PIII_zeta}) and (\\ref{PIII_sigma}) shows that\n\\begin{equation}\n \\zeta(t) = -\\sigma_{\\rm III'}(s)+\\frac{v_1(v_2-v_1)}{4}+\\frac{s}{4} ,\n \\qquad t = \\frac{s}{4} .\n\\label{HE_zeta}\n\\end{equation}\nRecalling (\\ref{HE_E.2}), (\\ref{HE_E}), (\\ref{HE_zeta}) and (\\ref{PIII_zeta_tau}) \nwe see\n\\begin{equation}\n \\tilde{E}^{\\rm hard}_{2}(s;a,\\mu;\\xi) = t^{(v_2^2-v_1^2)\/4}\\tau_{\\rm III'}(t) .\n\\label{HE_tau}\n\\end{equation}\n\nIn \\cite{Ji_1982} the most general small $ t $ expansion of $ \\tau_{\\rm III'}(t) $\nas permitted by (\\ref{PIII_zeta}) is presented. It reads\n\\begin{multline}\n \\tau_{\\rm III'}(t) = Ct^{(\\sigma^2-v^2_2)\/4}\n \\Bigg\\{ 1 + \\frac{v_1v_2-\\sigma^2}{2\\sigma^2}t\n \\\\\n - u\n \\frac{\\Gamma^2(-\\sigma)}{\\Gamma^2(2+\\sigma)}\n \\frac{\\Gamma(1+\\frac{\\displaystyle v_2+\\sigma}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle v_1+\\sigma}{\\displaystyle 2})}\n {\\Gamma(\\frac{\\displaystyle v_2-\\sigma}{\\displaystyle 2})\n \\Gamma(\\frac{\\displaystyle v_1-\\sigma}{\\displaystyle 2})}\n t^{1+\\sigma}\n \\\\\n - \\frac{1}{u}\n \\frac{\\Gamma^2(\\sigma)}{\\Gamma^2(2-\\sigma)}\n \\frac{\\Gamma(1+\\frac{\\displaystyle v_2-\\sigma}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle v_1-\\sigma}{\\displaystyle 2})}\n {\\Gamma(\\frac{\\displaystyle v_2+\\sigma}{\\displaystyle 2})\n \\Gamma(\\frac{\\displaystyle v_1+\\sigma}{\\displaystyle 2})}\n t^{1-\\sigma}\n \\\\\n + {\\rm O}(|t|^{2(1-\\Re(\\sigma))}) \\Bigg\\} ,\n\\label{PIII_tExp_0}\n\\end{multline}\nwhere as in (\\ref{PV_sExp_0}) $ C $ is a normalisation, while $ u $ and $ \\sigma $\nare arbitrary parameters. This result was established under the assumptions that\n$ \\frac{1}{2}(v_1\\pm\\sigma) \\notin\\mathbb Z $ and $ \\frac{1}{2}(v_2\\pm\\sigma) \\notin\\mathbb Z $\nalong with $ 0 < \\Re\\sigma < 1 $ (for $ \\sigma=0 $ a distinct solution is given).\n\nTo see that this structure is consistent with (\\ref{HE_Eexp}) and (\\ref{HE_tau}),\nrecalling (\\ref{HE_IIIparam}) we see that for the right hand side of \n(\\ref{HE_tau}) to tend to $ 1 $ as $ t $ tends to zero we must have $ C=1 $ and\n$ \\sigma=\\pm v_1 $. Again this is a violation of first condition given above but we\nconjecture that the formulae have meaning under the following limiting procedure and\nare correct. Choosing the positive sign for definiteness, and then writing \n\\begin{equation*}\n \\frac{u}{\\Gamma(\\frac{\\displaystyle v_1-\\sigma}{\\displaystyle 2})}\n = \\frac{u(v_1-\\sigma)}{2\\Gamma(1+\\frac{\\displaystyle v_1-\\sigma}{\\displaystyle 2})}\n\\end{equation*}\nwe see that requiring \n\\begin{equation*}\n \\frac{u}{2}(v_1-\\sigma) \\to \\tilde{u}\\frac{\\sin\\pi v_1}{\\pi}\n \\quad {\\rm as} \\quad \\sigma \\to v_1 ,\n\\end{equation*}\n(\\ref{PIII_tExp_0}) reads\n\\begin{multline}\n \\tau_{\\rm III'}(t) \\sim t^{(v_1^2-v^2_2)\/4}\n \\Bigg\\{ 1 + \\frac{v_1v_2-v_1^2}{2v_1^2}t\n \\\\\n + \\tilde{u}\n \\frac{\\sin\\pi(v_1-v_2)\/2}{\\sin\\pi v_1}\n \\frac{\\Gamma(1-\\frac{\\displaystyle v_2-v_1}{\\displaystyle 2})\n \\Gamma(1+\\frac{\\displaystyle v_2+v_1}{\\displaystyle 2})}\n {\\Gamma^2(2+v_1)\\Gamma(1+v_1)}\n \\left( \\frac{t}{4}\\right)^{1+v_1} \\Bigg\\} .\n\\label{PIII_tExp_0.2}\n\\end{multline}\nRecalling again (\\ref{HE_IIIparam}) and (\\ref{HE_tau}) we see that this agrees\nwith (\\ref{HE_Eexp}) provided\n\\begin{equation}\n \\tilde{u}\\frac{\\sin\\pi\\mu}{\\sin\\pi(a+\\mu)}\n = (1-\\xi)e^{\\pi i\\mu}-\\frac{\\sin\\pi a}{\\sin\\pi(a+\\mu)} ,\n\\label{HE_u}\n\\end{equation}\n(cf. 
(\\ref{LUE_u})).\n\n\\section{Application}\\label{S5}\n\\setcounter{equation}{0}\nIn a recent work relating to the application of random matrix theory to the\nstudy of moments of the derivative of the Riemann zeta-function, Conrey,\nRubinstein and Snaith \\cite{CRS_2005} obtained two asymptotic expressions\nassociated with the derivative of characteristic polynomials for random unitary\nmatrices. With $ U $ a Haar distributed element of the unitary group $ U(N) $,\nand $ e^{i\\theta_1},\\ldots,e^{i\\theta_N} $ its eigenvalues, let\n\\begin{equation}\n \\Lambda_A(s) = \\prod^N_{j=1}(1-se^{-i\\theta_j}) ,\n\\label{UcharP}\n\\end{equation}\nand\n\\begin{equation}\n {\\mathcal Z}_A(s) = e^{-\\pi iN\/2}e^{i\\sum^N_{n=1}\\theta_n\/2}s^{-N\/2}\\Lambda_A(s) ,\n\\label{UcharZ}\n\\end{equation}\n(note that $ {\\mathcal Z}_A(e^{i\\theta}) $ is real for $ \\theta $ real). In terms\nof this notation, the two results from \\cite{CRS_2005} are\n\\begin{equation}\n \\langle |\\Lambda'_A(1)|^{2k} \\rangle_{A\\in U(N)} \\mathop{\\sim}\\limits_{N \\to \\infty}\n b_k N^{k^2+2k} ,\n\\label{LcharM.1}\n\\end{equation}\nwhere\n\\begin{multline}\n b_k = (-1)^{k(k+1)\/2} \\sum^{k}_{h=0}{{k}\\choose{h}}(k+h)!\n \\\\\n \\times\n [x^{k+h}]\\left( e^{-x}x^{-k^2\/2}\n \\det[I_{\\alpha+\\beta-1}(2\\sqrt{x})]_{\\alpha,\\beta=1,\\ldots,k}\\right) ,\n\\label{LcharM.1aux}\n\\end{multline}\nand\n\\begin{equation}\n \\langle |{\\mathcal Z}'_A(1)|^{2k} \\rangle_{A\\in U(N)} \\mathop{\\sim}\\limits_{N \\to \\infty}\n b'_k N^{k^2+2k} ,\n\\label{ZcharM.1}\n\\end{equation}\nwhere\n\\begin{equation}\n b'_k = (-1)^{k(k+1)\/2} (2k)!\n [x^{2k}]\\left( e^{-x\/2}x^{-k^2\/2}\n \\det[I_{\\alpha+\\beta-1}(2\\sqrt{x})]_{\\alpha,\\beta=1,\\ldots,k}\\right) .\n\\label{ZcharM.1aux}\n\\end{equation}\nIn (\\ref{LcharM.1aux}) and (\\ref{ZcharM.1aux}) the notation $ [x^p]f(x) $ denotes \nthe coefficient of $ x^p $ in $ f(x) $.\n\nThe relevance of these formulae to the present study is that the determinant \ntherein can be identified in terms of $ \\tilde{E}^{\\rm hard}_{2} $. Thus, we have\nshown in a previous study \\cite{FW_2002a} that for $ a \\in \\mathbb Z_{\\geq 0} $\n\\begin{equation}\n \\tilde{E}^{\\rm hard}_{2}(s;a,\\mu;\\xi=1)\n = A(a,\\mu)\\left(\\frac{2}{\\sqrt{s}}\\right)^{a\\mu}e^{-s\/4}\n \\det[I_{\\mu+\\alpha-\\beta}(\\sqrt{s})]_{\\alpha,\\beta=1,\\ldots,a} ,\n\\label{HE_bessel}\n\\end{equation}\nwhere\n\\begin{equation}\n A(a,\\mu) = a!\\prod^a_{j=1}\\frac{(j+\\mu-1)!}{j!} .\n\\label{.1}\n\\end{equation}\nInterchanging row $ \\beta $ with row $ a-\\beta+1 $ ($ \\beta=1,\\ldots,a $ in order) \nwe see from this that\n\\begin{equation}\n\\begin{split}\n b_k & = \\frac{(-1)^k}{A(k,k)}\\sum^k_{h=0} {{k}\\choose{h}}(k+h)!\n [x^{k+h}]\\tilde{E}^{\\rm hard}_{2}(4x;k,k;\\xi=1) ,\n \\\\\n b'_k & = \\frac{(-1)^k}{A(k,k)}(2k)!\n [x^{2k}]\\left( e^{x\/2}\\tilde{E}^{\\rm hard}_{2}(4x;k,k;\\xi=1) \\right) .\n\\end{split}\n\\label{:a}\n\\end{equation}\nNote that the Painlev\\'e ${\\rm III^{\\prime}}\\;$ parameters appearing in this solution are \n$ \\mu=a=k\\in\\mathbb N $ and $ \\mu+a=2k\\in2\\mathbb N $ and thus we are dealing with the exceptional \ncase of indeterminacy referred to in Section 2. However, as was noted there, the \ngeneric formulae still hold subject to the modifications discussed and in particular \nthe $\\sigma$-function has a small argument expansion of a purely analytic form.
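Anticipating the development in the remainder of this section (the representation (\\ref{Dzeta_tau}) and the recurrence of Proposition \\ref{Dzeta_recur} below), we note that the evaluation of (\\ref{:a}) is easily mechanised. The following sketch is not the code used to produce the values quoted below; it is a minimal illustration, written here in Python with exact rational arithmetic, of the three steps involved: generate the coefficients $ c_p $, exponentiate the resulting series for $ \\tilde{E}^{\\rm hard}_{2}(4x;k,k;\\xi=1) $, and read off $ b_k $ and $ b'_k $.
\\begin{verbatim}
# A sketch (not the authors' code) of the computation described in this
# section: the recurrence below for the coefficients c_p, the sigma-function
# representation of E^hard_2(4x;k,k;xi=1), and the extraction of b_k, b'_k.
from fractions import Fraction as Fr
from math import comb, factorial

def c_coeffs(k, pmax):
    """Coefficients c_0,...,c_pmax of eta(s) = sum_n c_n s^(2n)."""
    c = [Fr(-k * k), Fr(1, 64 * (4 * k * k - 1))]
    for p in range(2, pmax + 1):
        A = lambda q: sum((l + 1) * (q - l + 1) * c[l + 1] * c[q - l + 1]
                          for l in range(q + 1))
        num = (4 * k * k * sum((l + 1) * (p - l) * c[l + 1] * c[p - l]
                               for l in range(1, p - 1))
               - sum((l + 1) * (p - l) * (2 * l + 1) * (2 * p - 2 * l - 1)
                     * c[l + 1] * c[p - l] for l in range(1, p - 1))
               - 4 * sum((1 - 2 * l) * c[l] * A(p - l - 1)
                         for l in range(1, p)))
        den = (2 * c[1] * p * (2 * p - 1) + Fr(2 * p - 1, 64)
               - 8 * p * k * k * c[1])
        c.append(num / den)
    return c

def series_exp(f, nmax):
    """exp of the power series f (with f[0] = 0), truncated at degree nmax."""
    g = [Fr(1)] + [Fr(0)] * nmax
    for n in range(1, nmax + 1):        # g' = f' g, solved order by order
        g[n] = sum(j * f[j] * g[n - j] for j in range(1, n + 1)) / n
    return g

def b_pair(k):
    """Return (b_k, b'_k) via the hard edge sigma-function."""
    c = c_coeffs(k, k)
    # -log E^hard_2(4x;k,k;1) = x/2 + sum_{n>=1} c_n 16^n x^(2n) / (2n)
    f = [Fr(0)] * (2 * k + 1)
    f[1] = -Fr(1, 2)
    for n in range(1, k + 1):
        f[2 * n] -= c[n] * 16 ** n / (2 * n)
    E = series_exp(f, 2 * k)            # E^hard_2(4x;k,k;1)
    f[1] += Fr(1, 2)
    EZ = series_exp(f, 2 * k)           # e^(x/2) E^hard_2(4x;k,k;1)
    A = factorial(k)                    # A(k,k)
    for j in range(1, k + 1):
        A = A * factorial(j + k - 1) // factorial(j)
    bk = Fr((-1) ** k, A) * sum(comb(k, h) * factorial(k + h) * E[k + h]
                                for h in range(k + 1))
    bpk = Fr((-1) ** k, A) * factorial(2 * k) * EZ[2 * k]
    return bk, bpk

print(b_pair(1))    # prints (Fraction(1, 3), Fraction(1, 12))
\\end{verbatim}
The only problem-specific ingredient is the recurrence for the $ c_p $; the remaining steps are routine power series manipulations, and exact rational arithmetic keeps the coefficients free of rounding error.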
\n\nFrom \\cite{FW_2003b} it is known that the determinants in (\\ref{LcharM.1aux})\nand (\\ref{ZcharM.1aux}) can also be expressed as a particular generalised \nhypergeometric function. Such an observation implies, for instance, that\n\\begin{equation}\n x^{-k^2\/2}\\det[I_{\\alpha+\\beta-1}(2\\sqrt{x})]_{\\alpha,\\beta=1,\\ldots,k} \n = \\prod^k_{j=1}\\frac{j!}{\\Gamma(j+k)}\n {{}^{\\vphantom{(1)}}_0}F^{(1)}_1(;2k;x_1,\\ldots,x_k)|_{x_j=x} ,\n\\label{.b}\n\\end{equation} \nwhere $ {{}^{\\vphantom{(1)}}_0}F^{(1)}_1(;c;x_1,\\ldots,x_k) $ has a series\ndevelopment about $ x_1,\\ldots,x_k=0 $ with an explicitly given coefficient for an\narbitrary term.\nHowever this is not a practical or efficient way to compute the coefficients \nrequired in (\\ref{LcharM.1aux}) or (\\ref{ZcharM.1aux}) for moderate or large $ k $\nas it involves the hook lengths of Young diagrams associated with the partitions\nof $ k $. \n\nAccording to (\\ref{HE_E}), (\\ref{PIII_sigma}) and (\\ref{HE_Sexp})\n\\begin{equation}\n \\tilde{E}^{\\rm hard}_{2}(4x;k,k;\\xi=1) \n = \\exp\\left( -\\int^{4x}_0\\frac{ds}{s}\\;(\\sigma_{\\rm III'}(s)+k^2)\\right) ,\n\\label{Dzeta_tau}\n\\end{equation}\nwhere $ \\sigma_{\\rm III'}(s) $ satisfies the particular $ \\sigma $-Painlev\\'e \n${\\rm III^{\\prime}}\\;$ equation\n\\begin{equation}\n (s\\sigma''_{{\\rm III}'})^2 \n + \\sigma'_{{\\rm III}'}(4\\sigma'_{{\\rm III}'}-1)(\\sigma_{{\\rm III}'}-s\\sigma'_{{\\rm III}'})\n - \\frac{k^2}{16} = 0 ,\n\\label{Dzeta_PIIIsigma}\n\\end{equation}\nsubject to the boundary condition\n\\begin{equation}\n \\sigma_{\\rm III'}(s) \\mathop{\\sim}\\limits_{s \\to 0} -k^2+\\frac{s}{8}+{\\rm O}(s^2),\n \\quad k\\in\\mathbb N .\n\\label{Dzeta_PIIIBC}\n\\end{equation}\nSubstituting \n\\begin{equation}\n \\sigma_{\\rm III'}(s) = \\eta(s)+\\frac{s}{8} ,\n\\label{Dzeta_xfm}\n\\end{equation}\n(\\ref{Dzeta_PIIIsigma}) reads\n\\begin{equation}\n (s\\eta'')^2 + 4((\\eta')^2-\\frac{1}{64})(\\eta-s\\eta') - \\frac{k^2}{4^2} = 0 .\n\\label{Dzeta_PIII.2}\n\\end{equation}\nWe see immediately that $ \\eta(s) $ can be expanded as an even function of $ s $ about\n$ s=0 $,\n\\begin{equation}\n \\eta(s) = \\sum^{\\infty}_{n=0}c_ns^{2n}, \\qquad c_0=-k^2,\n \\quad k\\in\\mathbb N .\n\\label{Dzeta_Exp}\n\\end{equation}\nMoreover the coefficients can be computed by a recurrence relation.\n\n\\begin{proposition}\\label{Dzeta_recur}\nSubstituting (\\ref{Dzeta_Exp}) in (\\ref{Dzeta_PIII.2}) shows\n\\begin{equation}\n c_1 = \\frac{1}{64(4k^2-1)} ,\n\\label{Dzeta_c1}\n\\end{equation}\nwhile for $ p \\geq 2 $\n\\begin{multline}\n c_p = \\frac{1}{2c_1p(2p-1)+(2p-1)\/64-8pk^2c_1}\n \\\\\n \\times\\Bigg( 4k^2\\sum^{p-2}_{l=1}(l+1)(p-l)c_{l+1}c_{p-l}\n \\\\\n -\\sum^{p-2}_{l=1}(l+1)(p-l)(2l+1)(2p-2l-1)c_{l+1}c_{p-l}\n \\\\\n -4\\sum^{p-1}_{l=1}(1-2l)c_{l}A_{p-l-1} \\Bigg) ,\n\\label{Dzeta_cp}\n\\end{multline}\nwhere\n\\begin{equation}\n A_q = \\sum^{q}_{l=0}(l+1)(q-l+1)c_{l+1}c_{q-l+1} .\n\\label{Dzeta_aux}\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nWith $ h_l := (l+1)(2l+1)c_{l+1} $ we see\n\\begin{equation}\n (s\\eta'')^2 = 4\\sum^{\\infty}_{p=1}H_{p-1}s^{2p}, \\qquad\n H_p = \\sum^{p}_{l=0}h_lh_{p-l} ,\n\\label{Dzeta_aux1}\n\\end{equation}\nand similarly with $ a_l := (l+1)c_{l+1} $ we have\n\\begin{equation*}\n (\\eta')^2 = 4s^2\\sum^{\\infty}_{p=0}A_{p}s^{2p}, \\qquad\n A_p = \\sum^{p}_{l=0}a_la_{p-l} .\n\\end{equation*}\nIt follows from this latter result that\n\\begin{equation}\n \\left( (\\eta')^2-\\frac{1}{64} \\right)(\\eta-s\\eta') =
\\sum^{\\infty}_{p=0}G_{p}s^{2p} ,\n\\label{Dzeta_aux2}\n\\end{equation}\nwhere\n\\begin{equation*}\n G_p = \\sum^{p}_{l=0}(1-2l)c_lb_{p-l}, \\quad\n b_0 = -\\frac{1}{64}, \\quad b_p = 4A_{p-1} \\quad (p\\geq 1) .\n\\end{equation*}\nSubstituting (\\ref{Dzeta_aux1}) and (\\ref{Dzeta_aux2}) in (\\ref{Dzeta_PIII.2})\nand equating like coefficients of $ s^{2p} $ to zero shows that for $ p\\geq 1 $\n\\begin{equation*}\n H_{p-1}+G_{p} = 0 .\n\\end{equation*}\nThis for $ p=1 $ implies (\\ref{Dzeta_c1}), and for $ p>1 $ implies (\\ref{Dzeta_cp}).\n\\end{proof}\n\nUsing Proposition \\ref{Dzeta_recur} it is straightforward to calculate, via \ncomputer algebra, the first $ k $ coefficients in (\\ref{Dzeta_Exp}) for any \nparticular value of $ k $. Furthermore use of computer algebra gives the \npower series up to $ x^{2k} $ of \n\\begin{equation*}\n \\tilde{E}^{\\rm hard}_{2}(4x;k,k;\\xi=1) \\quad{\\rm and}\\quad\n e^{x\/2}\\tilde{E}^{\\rm hard}_{2}(4x;k,k;\\xi=1) ,\n\\end{equation*}\naccording to (\\ref{Dzeta_tau}). From these power series the formulae \n(\\ref{:a}) are used to compute $ b_k $ and $ b'_k $. In \\cite{CRS_2005} the\nfirst 15 values of both $ b_k $ and $ b'_k $ were tabulated. This can be rapidly\nextended using the present method. However the resulting rational numbers \nquickly become unwieldy to record. Let us then be content by presenting just the\n16th member of the sequences,\n\\begin{equation*}\n b_{16} = \\frac{\\scriptstyle \n307\n\\cdot\n23581\n\\cdot\n92867\n\\cdot\n760550281759\n }\n {\\scriptstyle \n2^{272}\n\\cdot\n3^{130}\n\\cdot\n5^{66}\n\\cdot\n7^{42}\n\\cdot\n11^{24}\n\\cdot\n13^{21}\n\\cdot\n17^{16}\n\\cdot\n19^{14}\n\\cdot\n23^{10}\n\\cdot\n29^{6}\n\\cdot\n31^{5}\n\\cdot\n37^{3}\n\\cdot\n41^{2}\n\\cdot\n43^{2}\n\\cdot\n47\n\\cdot\n53\n\\cdot\n59\n\\cdot\n61\n } ,\n\\end{equation*}\n\\begin{equation*}\n b'_{16} = \\frac{\\scriptstyle \n4148297603\n\\cdot\n7623077808870586151748455369217213506671334530597\n }\n {\\scriptstyle\n2^{264}\n\\cdot\n3^{133}\n\\cdot\n5^{66}\n\\cdot\n7^{42}\n\\cdot\n11^{25}\n\\cdot\n13^{21}\n\\cdot\n17^{16}\n\\cdot\n19^{14}\n\\cdot\n23^{11}\n\\cdot\n29^{7}\n\\cdot\n31^{6}\n\\cdot\n37^{3}\n\\cdot\n41^{2}\n\\cdot\n43^{2}\n\\cdot\n47\n\\cdot\n53\n\\cdot\n59\n\\cdot\n61\n } .\n\\end{equation*}\n\n\\section{Acknowledgements}\nThis work was supported by the Australian Research Council. PJF thanks M. Rubinstein\nfor relating the results of \\cite{CRS_2005} before publication, and for the \norganisers of the Newton Institute program `Random matrix approaches in number theory'\nheld in the first half of 2004 for making this possible. 
NSW wishes to thank the \norganisers of the CRM program `Random Matrices, Random Processes and Integrable Systems'\nheld in Montreal 2005 for the opportunity to attend the program and in particular the \nhospitality of John Harnad during his stay.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\nIn the field of machine learning, when there is a large number of features, determining the smallest subset that exhibits the strongest effect often decreases model complexity and increases prediction accuracy.\nThis process is called feature selection.\\footnote{Feature selection is also known as variable selection, attribute selection, and variable subset selection.} There are two well-known types of feature selection methods: filters and wrappers.\nFilters select features based on their relevance to the response variable independently of the prediction model.\nWrappers select features that increase the prediction accuracy of the model.\nHowever, as with any design decision during the construction of a prediction model, one needs to evaluate different feature selection methods in order to choose one, and above all to assess whether it is needed or not.\n\nWhile there has been extensive research on the impact of feature selection on prediction models in different domains, our investigation reveals that it is a rarely studied topic in the domain of bug prediction.\nFew studies explore how feature selection affects the accuracy of classifying software entities into buggy or clean~\\cite{Shiv13a}\\cite{Gao2011a}\\cite{Cata09b}\\cite{Kris11a}\\cite{Wang12a}\\cite{Khos10a}\\cite{Khos14a}\\cite{Ghot17a}, but to the best of our knowledge no dedicated study exists on the impact of feature selection on the accuracy of predicting the number of bugs.\nAs a result of this research gap, researchers often overlook feature selection and provide their prediction models with all the metrics they have on a software project or in a dataset.\nWe argue that feature selection is a mandatory step in the bug prediction pipeline and its application might alter previous findings in the literature, especially when it comes to comparing different machine learning models or different software metrics.\n\nIn this paper we treat bug prediction as a regression problem where a bug predictor predicts the number of bugs in software entities as opposed to classifying software entities as buggy or clean.\nWe investigate the impact of filter and wrapper feature selection methods on the prediction accuracy of five machine learning models: K-Nearest Neighbour, Linear Regression, Multilayer Perceptron, Random Forest, and Support Vector Machine.\nMore specifically, we carry out an empirical study on five open source Java projects: Eclipse JDT Core, Eclipse PDE UI, Equinox, Lucene, and Mylyn to answer the following research questions:\n\\vspace{0.2cm}\n\\\\\\emph{RQ1: How does feature selection impact the prediction accuracy?}\nOur results show that applying correlation-based feature selection (CFS) improves the prediction accuracy in 32\\% of the experiments, degrades it in 24\\%, and keeps it unchanged in the rest.\nOn the other hand, applying the wrapper feature selection method improves prediction accuracy by up to 33\\% in 76\\% of the experiments and never degrades it in any experiment. 
However, the impact of feature selection varies depending on the underlying machine learning model as different models vary in their sensitivity to noisy, redundant, and correlated features in the data.\nWe observe zero to negligible effect in the case of Random Forest models.\n\\vspace{0.2cm}\n\\\\\\emph{RQ2: Are wrapper feature selection methods better than filters?}\nWrapper feature selection methods are consistently either better than or similar to CFS.\nApplying wrapper feature selection eliminates noisy and redundant features and keeps only relevant features for that specific project, increasing the prediction accuracy of the machine learning model.\n\\vspace{0.2cm}\n\\\\\\emph{RQ3: Do different methods choose different feature subsets?}\nWe realize there is no optimal feature subset that works for every project and feature selection should be applied separately for each new project.\nWe find that not only different methods choose different feature subsets on the same projects, but also the same feature selection method chooses different feature subsets for different projects.\nInterestingly however, all selected feature subsets include a mix of change and source code metrics.\n\\vspace{0.2cm}\n\nIn summary, this paper makes the following contributions:\n\\begin{enumerate}\n\\item A detailed comparison between filter and wrapper feature selection methods in the context of bug prediction as a regression problem.\n\\item A detailed analysis on the impact of feature selection on five widely-used machine learning models in the literature.\n\\item A comparison between the selected features by different methods.\n\\end{enumerate}\n\nThe rest of this paper is organized as follows: In \\autoref{background}, we give a technical background about feature selection in machine learning.\nWe motivate our work in \\autoref{motivation}, and show how we are the first to study wrapper feature selection methods when predicting the number of bugs.\nIn \\autoref{empirical}, we explain the experimental setup, discuss the results of our empirical study, and elaborate on the threats to validity of the results.\nFinally, we discuss the related work in \\autoref{relatedWork} showing how our findings are similar or different from the state of the art, then conclude this paper in \\autoref{conclusions}.\n\n\n\n\\begin{figure}\n\\center{\\includegraphics[width=1.0\\linewidth]{.\/img\/biasvariance.pdf}}\n\\caption{The relationship between model complexity and model error \\cite{BiasVariance}.}\n\\label{fig:complexity}\n\\end{figure}\n\n\\section{Technical Background}\n\\label{background}\nTrained on bug data and software metrics, a bug predictor is a machine learning model that predicts defective software entities using software metrics.\nThe software metrics are called the independent variables or the features.\nThe prediction itself is called the response variable or the dependent variable.\nIf the response variable is the absence\/presence of bugs then bug prediction becomes a classification problem and the machine learning model is called a classifier.\nIf the response variable is the number of bugs in a software entity then bug prediction is a regression problem and the model is called a regressor.\n\nFeature selection is an essential part in any machine learning process.\nIt aims at removing irrelevant and correlated features to achieve better accuracy, build faster models with stable performance, and reduce the cost of collecting features for later models.\nModel error is known to be increased by both 
noise \\cite{Atla11a} and feature multicollinearity \\cite{Alle97c}.\nDifferent feature selection algorithms eliminate this problem in different ways.\nFor instance, correlation based filter selection chooses features with high correlation with the response variable and low correlation with each other.\n\nAlso when we build a prediction model, we often favour less complex models over more complex ones due to the known relationship between model complexity and model error, as shown in Figure \\autoref{fig:complexity}.\nFeature selection algorithms try to reduce model complexity down to the sweet spot where the total error is minimal.\nThis point is called the optimum model complexity.\nModel error is computed via the mean squared error (MSE) as: \n\\\\$MSE = \\frac{1}{N} \\sum_{i=1}^{N} (\\hat{Y}_i - Y_i)^2$ \\\\where $\\hat{Y}_i$ is the predicted value and $Y_i$ is the actual value.\nMSE can be decomposed into model bias and model variance as: \\\\$MSE = Bias^2 + Variance + Irreducible Error $ \\cite{Hast05a}\n\nBias is the difference between the average prediction of our model to the true unknown value we are trying to predict.\nVariance is the variability of a model prediction for a given data point.\nAs can be seen in Figure \\autoref{fig:complexity}, reducing model complexity increases the bias but decreases the variance.\nFeature selection sacrifices a little bit of bias in order to reduce variance and, consequently, the overall MSE.\n\nEvery feature selection method consists of two parts: a search strategy and a scoring function.\nThe search strategy guides the addition or removal of features to the subset at hand and the scoring function evaluates the performance of that subset.\nThis process is repeated until no further improvement is observed.\n\n\\begin{table}\n\\scriptsize\n\\caption{The CK Metrics Suite \\cite{Chid94a} and other object-oriented metrics included as the source code metrics in the bug prediction dataset \\cite{DAmb10c}}\n\\begin{center}\n\\begin{tabular}{ll} \n{Metric Name}\t\t& {Description}\\\\ \\hline\nCBO & Coupling Between Objects \\\\[0.05cm] \nDIT & Depth of Inheritance Tree \\\\[0.05cm] \nFanIn & Number of classes that reference the class \\\\[0.05cm] \nFanOut & Number of classes referenced by the class \\\\[0.05cm] \nLCOM & Lack of Cohesion in Methods \\\\[0.05cm] \nNOC & Number Of Children \\\\[0.05cm] \nNOA & Number Of Attributes in the class \\\\[0.05cm] \nNOIA & Number Of Inherited Attributes in the class \\\\[0.05cm] \nLOC & Number of lines of code \\\\[0.05cm] \nNOM & Number Of Methods \\\\[0.05cm] \nNOIM & Number of Inherited Methods \\\\[0.05cm] \nNOPRA & Number Of PRivate Atributes \\\\[0.05cm] \nNOPRM & Number Of PRivate Methods \\\\[0.05cm] \nNOPA & Number Of Public Atributes \\\\[0.05cm] \nNOPM & Number Of Public Methods \\\\[0.05cm] \nRFC & Response For Class \\\\[0.05cm] \nWMC & Weighted Method Count \\\\\\hline\n\n\\label{tbl:sourceMetrics}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\begin{table}\n\\scriptsize\n\\caption{The change metrics proposed by Moser \\emph{et al.}\\xspace \\cite{Mose08a} included in the bug prediction dataset \\cite{DAmb10c}}\n\\label{tbl:changeMetrics}\n\\begin{center}\n\\begin{tabularx}{0.48\\textwidth}{lX} \n{Metric Name}\t\t& {Description}\\\\ \\hline\nREVISIONS & Number of reversions \\\\[0.05cm] \nBUGFIXES & Number of bug fixes \\\\[0.05cm] \nREFACTORINGS & Number Of Refactorings \\\\[0.05cm] \nAUTHORS & Number of distinct authors that checked a file into the repository \\\\[0.05cm] \nLOC\\_ADDED & Sum 
over all revisions of the lines of code added to a file \\\\[0.05cm] \nMAX\\_LOC\\_ADDED & Maximum number of lines of code added for all revisions \\\\[0.05cm] \nAVE\\_LOC\\_ADDED & Average lines of code added per revision \\\\[0.05cm] \nLOC\\_DELETED & Sum over all revisions of the lines ofcode deleted from a file \\\\[0.05cm] \nMAX\\_LOC\\_DELETED & Maximum number of lines of code deleted for all revisions \\\\[0.05cm] \nAVE\\_LOC\\_DELETED & Average lines of code deleted per revision \\\\[0.05cm] \nCODECHURN & Sum of (added lines of code - deleted lines of code) over all revisions \\\\[0.05cm] \nMAX\\_CODECHURN & Maximum CODECHURN for all revisions \\\\[0.05cm] \nAVE\\_CODECHURN & Average CODECHURN for all revisions \\\\[0.05cm] \nAGE & Age of a file in weeks (counting backwards from a specific release) \\\\[0.05cm] \nWEIGHTED\\_AGE & Sum over age of a file in weeks times number of lines added during that week normalized by the total number of lines added to that file \\\\[0.05cm] \\hline\n\n\\end{tabularx}\n\\end{center}\n\\end{table}\n\n\n\\section{Motivation}\n\\label{motivation}\n\nIn this section, we shortly discuss the importance of predicting the number of bugs in software entities.\nThen, we highlight the impact of feature selection on bug prediction and particularly motivate the need for studying the wrapper methods.\n\n\\subsection{Regression vs Classification}\nMost of the previous research treats bug prediction as a classification problem where software entities are classified as either buggy or clean, and there have been several studies on the impact of feature selection on defect classification models.\nOn the other hand, bug prediction as a regression problem is not well-studied, and the effect of feature selection on predicting the number of bugs is not well-understood.\n\nSoftware bugs are not evenly distributed and tend to cluster \\cite{Ostr04a}, and some software entities commonly have larger numbers of bugs compared to others.\nPredicting the number of bugs in each entity provides valuable insights about the quality of these software entities~\\cite{Osma16c}, which helps in prioritizing software entities to increase the efficiency of related development tasks such as testing and code reviewing~\\cite{Khos03a}.\nThis is an important quality of a bug predictor especially for cost-aware bug prediction~\\cite{Mend09b}\\cite{Aris10a}\\cite{Kame10a}\\cite{Koba11a}\\cite{Hata12a}.\nIn fact, predicting the number of bugs in software entities and then ordering these entities based on bug density is the most cost-effective option \\cite{Osma17f}.\n\n\\subsection{Dimensionality Reduction}\n\\label{reduction}\n\nWhen the dimensionality of data increases, distances grow more and alike between the vectors and it becomes harder to detect patterns in data~\\cite{Bell60a}.\nFeature selection not only eliminates the confounding effects of noise and feature multicollinearity, but also reduces the dimensionality of the data to improve accuracy.\nHowever, feature selection does not seem to be considered as important as it should be in the field of bug prediction.\nFor instance, only 25 out of the 64 studied techniques in a recent research apply feature selection before training a machine learning model~\\cite{Malh15a}.\nOnly 2 out of the 25 are applied to bug prediction as a regression problem.\n\n\\subsection{Filters vs Wrappers}\n\\label{filtersWrappers}\nFeature selection methods are of two types: wrappers and filters \\cite{Koha97a}.\nWith wrappers, the scoring function is the 
accuracy of the prediction model itself.\nWrappers look for the feature subset that works best with a specific machine learning model.\nThey are called wrappers because the machine learning algorithm is wrapped into the selection procedure.\nWith filters (\\emph{e.g.},\\xspace CFS, InfoGain, PCA), the scoring function is independent of the machine learning model.\nThey are called filters because the attribute set is filtered before the training phase.\nGenerally, filters are faster than wrappers but less powerful because wrappers address the fact that different learning algorithms can achieve best performance with different feature subsets.\nIn this paper we aim at finding whether there is actually a difference between filters and wrappers in bug prediction, and then quantifying this difference.\n\nWrappers are known to be computationally expensive.\nThey become a bottleneck when the size of a dataset (features + data items) becomes large.\nHowever, this rarely happens in the bug prediction and bug prediction datasets tend to be relatively small.\nThis means that although wrappers are more resource intensive, they are easily applicable to bug prediction.\nNevertheless, our literature research yielded relatively few works that use wrappers for predicting number of bugs.\n\n\n\\section{Empirical Study}\n\\label{empirical}\n\nIn this section, we investigate the effect of feature selection on the accuracy of predicting the number of bugs in Java classes.\nSpecifically, we compare five widely-used machine learning models applied to five open source Java projects to answer the following research questions:\n\\\\\\emph{RQ1: How does feature selection impact the prediction accuracy?}\n\\\\\\emph{RQ2: Are wrapper feature selection methods better than filters?}\n\\\\\\emph{RQ3: Do different methods choose different feature subsets?}\n\n\\begin{table*}[h!]\n\\renewcommand{\\arraystretch}{1.0}\n\\footnotesize\n\\caption{Details about the systems in the studied dataset, as reported by D'Ambros \\emph{et al.}\\xspace~\\cite{DAmb10c}}\n\\begin{center}\n\\begin{tabular}{llrrrr} \n\t\t\t\t&\t\t&\t\t\t\t&\t\t\t&\t\t\t\t&\\% classes with more \\\\\nSystem \t\t\t&Release\t&KLOC\t\t\t&\\#Classes \t& \\% Buggy\t\t&than one bug\t\\\\ \\hline\n\nEclipse JDT Core\t&3.4\t\t&$\\approx 224 $\t&997\t\t\t& $\\approx 20\\%$\t& $\\approx 7\\%$\\\\ \n\t\t\t\nEclipse PDE UI\t\t&3.4.1\t&$\\approx 40 $\t\t&1,497\t\t& $\\approx 14\\%$\t & $\\approx 5\\%$ \\\\ \n\t\t\t\nEquinox\t\t\t&3.4\t\t&$\\approx 39 $\t\t&324\t\t\t& $\\approx 40\\%$ \t& $\\approx 15\\%$ \\\\ \n\t\t\t\nMylyn\t\t\t&3.41\t&$\\approx 156$\t&1,862\t\t& $\\approx 13\\%$\t& $\\approx 4\\%$ \\\\ \n\t\t\t\nLucene\t\t\t&2.4.0\t&$\\approx 146$\t&691\t\t\t& $\\approx 9\\%$ \t& $\\approx 3\\%$\\\\ \\hline\n\\label{tbl:dataset}\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\subsection{Experimental Setup}\n\\subsubsection*{Dataset}\nWe adopt the ``Bug Prediction Dataset\" provided by D'Ambros \\emph{et al.}\\xspace \\cite{DAmb10c} which serves as a benchmark for bug prediction studies.\nWe choose this dataset because it is the only dataset that contains both source code and change metrics at the class level, in total 32 metrics listed in~\\autoref{tbl:sourceMetrics} and \\autoref{tbl:changeMetrics}; and also provides the number of post-release bugs as the response variable for five large open source Java systems listed in \\autoref{tbl:dataset}.\nThe other dataset that has the number of bugs as a response variable comes from the PROMISE repository, but contains only 
21 source code metrics \\cite{Jure10a}.\n \n\\subsubsection*{Prediction Models}\nWe use Multi-Layer Perceptron (MLP), Random Forest (RF), Support Vector Machine (SVM), Linear Regression (LR), and an implementation of the k-nearest neighbour algorithm called IBK.\nEach model represents a different category of statistical and machine learning models that is widely used in the bug prediction research~\\cite{Malh15a}.\n\nWe use the correlation-based feature selection (CFS) method \\cite{Hall00a}, the best \\cite{Chal08b}\\cite{Ghot17a} and the most commonly-used filter method in the literature \\cite{Malh15a}.\nFor the wrapper feature selection method we use the corresponding wrapper applicable to each prediction model.\nIn other words, we use MLP wrapper for MLP, RF wrapper for RF, SVM wrapper for SVM, LR wrapper for LR, and IBK wrapper for IBK.\nEvery feature selection method also needs a search algorithm.\nWe use the \\emph{Best First} search algorithm which searches the space of feature subsets using a greedy hill-climbing procedure with a backtracking facility.\nWe use this search algorithm because it returns the results in a reasonable amount of time while being exhaustive to a certain degree.\n \nWe use the Weka data mining tool \\cite{Hall09} to build prediction models for each project in the dataset.\nFollowing an empirical method similar to that of Hall and Holmes \\cite{Hall03a}, we apply each prediction model to three feature sets: the full set, the subset chosen by CFS, and the subset chosen by the wrapper.\nThe prediction model is built and evaluated following the 10-fold cross validation procedure.\nThe wrapper feature selection is applied using a 5-fold cross validation on the training set of each fold, then the best feature set is used.\nThe CFS algorithm is applied to the whole training set of each fold.\nThen the whole process is repeated 30 times.\nWe evaluate the predictions by means of the root mean squared error (RMSE).\nIn total, we have 25 experiments.\nEach experiment corresponds to a specific project and a specific prediction model trained on the three feature sets.\n\nWe use the default hyperparameter (\\emph{i.e.},\\xspace configuration) values of Weka 3.8.0 for the used machine learning models.\nAlthough hyperparameters can be tuned \\cite{Tant16a}\\cite{Osma17a}, we do not perform this optimization because we want to isolate the effect of feature selection.\nBesides, Linear Regression does not have hyperparameters and the gained improvement of optimizing SVM and RF is negligible \\cite{Tant16a}\\cite{Osma17a}.\n\n\n\n\\subsection{Results}\n\\autoref{fig:boxplots} shows standard box plots for the different RMSE values obtained by the different feature sets per prediction model per project.\nEach box shows 50\\% of that specific population.\\footnote{By population we mean the RMSE values of a specific experiment with a specific feature set.\nEach population consists of $10 \\times 50 = 500$ data items (10-fold cross validation done 50 times)} We can see that the wrapper populations are almost always lower than the full set ones, have smaller boxes, and have fewer outliers.\nThis means that applying the wrapper gives better and more consistent predictions.\nOn the other hand, we cannot make any observations about applying CFS because the difference between the CFS populations and the full set populations are not apparent.\n\n\\begin{figure*}\n\\center{\\includegraphics[width=1.0\\linewidth]{.\/img\/boxplots.pdf}}\n\\caption{Boxplots of all the experiments in our 
empirical study.\nThe y-axis represents the root mean squared error (RMSE).\nFor each project\/model, we examine three feature sets: the full set, the subset chosen by the CFS filter, and the subset chosen by the wrapper corresponding to the model.}\n\\label{fig:boxplots}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\center{\n\\includegraphics[width=1.0\\linewidth]{.\/img\/effect_size.pdf}}\n\\caption{This figure shows the bar plots of the effect size of the Dunn post-hoc analysis, which is carried out at the 95\\% confidence interval.\nThe x-axis indicates the pairwise comparison and the y-axis indicates the effect size.\nThe bars are color-coded.\nIf the bar is red, this means that the difference is not statistically significant.\nGrey means that there is a statistical significant difference, but the effect is negligible.\nBlue, golden, and green indicate a small, medium, and large statistically significant effect, respectively.}\n\\label{fig:effectSize}\n\\end{figure*}\n\n\nWhile box plots are usually good to get an overview of the different populations and how they compare to each other, they do not provide any statistical evidence.\nTo get more concrete insights, we follow the two-stage statistical test: Kruskal-Wallis + Dunn post-hoc analysis, both at the 95\\% confidence interval.\nWe apply the Kruskal-Wallis test on the results to determine whether different feature subsets have different prediction accuracies (\\emph{i.e.},\\xspace different RMSE).\nOnly when this test indicates that the populations are different, can we quantify such differences with a post-hoc analysis.\nWe perform Dunn post-hoc pairwise comparisons and analyze the effect size between each two populations.\n\\autoref{fig:effectSize} shows on the y-axis the detailed effect size between the two compared RMSE populations on the x-axis.\nIn this plot, there are two possible scenarios:\n\\begin{enumerate}\n\\item The Kruskal-Wallis test indicates that there is no statistical difference between the populations.\nThen all the bars are red to show that there is no effect between any two populations.\n\\item The Kruskal-Wallis test indicates a statistically significant difference between the populations.\nThen the color of the bars encode the pairwise effect size.\nRed means no difference and the two populations are equivalent.\nGrey means that there is a significant difference but can be ignored due to the negligible effect size.\nBlue, golden, and green mean small, medium, and large effect size respectively.\n\\end{enumerate}\n\nTo see how feature selection methods impact the prediction accuracy (\\emph{RQ1}), we compare the RMSE values obtained by applying CFS and wrappers with those obtained by the full feature set.\nWe observe that the RMSE value obtained by the CFS feature subset is statistically lower than the full set in 8 experiments (32\\%),\\footnote{negative non-red effect size in \\autoref{fig:effectSize}} statistically higher in other 6 experiments (24\\%),\\footnote{positive non-red effect size in \\autoref{fig:effectSize}} and statistically equivalent in 11 experiments (44\\%).\\footnote{red effect size in \\autoref{fig:effectSize}} Although CFS can decrease the RMSE by 24\\% on average (MLP with Mylyn), it can increase it by up to 24\\% (SVM with Lucene).\nWe also notice that applying CFS is not consistent within experiments using the same model.\nIt does not always improve, or always degrade, or always retain the performance of any model throughout the experiments.\nWe conclude that CFS is unreliable and 
gives unstable results.\nFurthermore, even when CFS reduces the RMSE, the effect size is at most small.\n\nOn the other hand, the RMSE value of the wrapper feature subset is statistically lower than that of the full set in 19 experiments (76\\%) and statistically equivalent in the rest.\nApplying the wrapper feature selection method can decrease RMSE of a model by up to 33\\% (MLP with Eclipse JDT).\nWe also observe that the impact of the wrapper feature selection method on the accuracy is different from one model to another.\nIt has a non-negligible improvement on the prediction accuracy of IBK, LR, MLP, RF, and SVM in 80\\%, 60\\%, 100\\%, 20\\%, and 20\\% of the experiments, respectively.\nThis is due to the fact that different machine learning models are different in their robustness against noise and multicollinearity.\nMLP, IBK, and LR were improved significantly almost always in our experiments.\nOn the other hand, SVM and RF were not improved as often, because they are known to be resistant to noise, especially when the number of features is not too high.\nRF is an ensemble of decision trees created by using bootstrap samples of the training data and random feature selection in tree induction \\cite{Brei01a}.\nThis gives RF the ability to work well with high-dimensional data and sift the noise away.\nThe SVM algorithm is also designed to operate in a high-dimensional feature space and can automatically select relevant features \\cite{Hsu03a}.\nIn fact, this might be the reason behind the proven record of Random Forest and Support Vector Machine in bug prediction \\cite{Guo04a}\\cite{Elis08a}.\n\n\n\\begin{table}[]\n\\renewcommand{\\arraystretch}{1.2}\n\\normalsize\n\\caption{The level of agreement between different feature selection methods in each project}\n\\begin{center}\n\\begin{tabular}{lll} \nProject \t\t\t& $k$ \t&Agreement \\\\ \\hline\nEclipse JDT Core\t&0.18\t& Slight\\\\ \n\t\t\t\nEclipse PDE UI\t\t&0.17\t & Slight \\\\ \n\t\t\t\nEquinox\t\t\t&0.40\t & Fair \\\\ \n\t\t\t\nMylyn\t\t\t&0.08\t & Slight \\\\ \n\t\t\t\nLucene\t\t\t&0.18\t& Slight \\\\ \\hline\n\\label{tbl:kappaOfProjects}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\begin{table}[]\n\\renewcommand{\\arraystretch}{1.2}\n\\normalsize\n\\caption{The level of agreement between the feature subsets selected by each method over all projects}\n\\begin{center}\n\\begin{tabular}{lll} \nFeature Selection Method \t\t\t& $k$ \t&Agreement \\\\ \\hline\nCFS\t\t\t\t\t\t\t\t&0.23\t& Fair\\\\ \n\t\t\t\nIBK Wrapper\t\t\t\t\t\t&0.26\t & Fair \\\\ \n\t\t\t\nLR Wrapper\t\t\t\t\t\t&0.16\t & Slight \\\\ \n\t\t\t\nMLP Wrapper\t\t\t\t\t\t&0.04\t & Slight \\\\ \n\nRF Wrapper\t\t\t\t\t\t&0.04\t & Slight \\\\ \n\t\t\t\nSVM Wrapper\t\t\t\t\t\t&-0.01\t& Poor \\\\ \\hline\n\n\\label{tbl:kappaOfMethods}\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\nThe wrapper method is statistically better than CFS in 18 experiments, statistically equivalent in 6 experiments, and worse in one experiment, but with a negligible effect size.\nThese results along with the fact that CFS sometimes increases the RMSE, clearly show that the wrapper selection method is a better choice than CFS (\\emph{RQ2}).\n\n\\begin{figure*}[]\n \\begin{center}\n \\subfigure[]{%\n \\label{fig:selectedFeaturesGrid}\n \\includegraphics[width=0.9\\textwidth]{.\/img\/selectedFeaturesGrid2.pdf}}\n\n \\subfigure[]{%\n \\label{fig:selectedFeaturesCounts}\n \\includegraphics[width=0.9\\textwidth]{.\/img\/selectedFeaturesCounts2.pdf}}\n 
\\end{center}\n\\end{figure*}\n\\begin{figure*}[]\n \\begin{center}\n\n \\subfigure[]{%\n \\label{fig:selectedFeaturesPerProject}\n \\includegraphics[width=1\\textwidth]{.\/img\/selectedFeaturesPerProject.pdf}}\n\n \\end{center}\n \\caption{Subfigure (a) shows the features selected by each method using the whole data of each project.\nSubfigure (b) shows the number of times each feature is selected out of the 30 (1 CFS feature set + 5 wrapper sets per project).\nThe more times a feature is selected the more important it is for making accurate predictions.\nSubfigure (c) shows how different selection methods vary in the number of selected features.\nDetails about the features (metrics) are in \\autoref{tbl:sourceMetrics} and \\autoref{tbl:changeMetrics}}\n \\label{fig:selectedFeatures}\n\\end{figure*}\n\n\n\\autoref{fig:selectedFeatures} shows the details about the features selected by each method using the whole data of each project in the dataset.\nTo answer the third research question (\\emph{RQ3}), we use the Fleiss' kappa statistical measure \\cite{Flei71a} to evaluate the level of agreement between the different feature selection methods for each project and the level of agreement of each feature selection method over the different projects. The Fleiss' kappa value, called $k$, is interpreted as follows:\n\\\\$k\\leq0 \\implies$ poor agreement\n\\\\$0.01 1.$ As a model problem, one may consider an analogous question where $\\rea^n$ is replaced by a vector space over a finite-field. Our main result is that the corresponding conjecture holds for all $d$ ($|\\cdot|$ denotes cardinality).\n\n\\begin{theorem} \\label{firstlinethm}\nSuppose $d \\geq 1$ is an integer, $F$ is a finite field, $0 \\leq \\beta \\leq 1,$ and that ${\\bf L}$ is a collection of lines in $F^n$ with $|{\\bf L}| \\geq |F|^{2(d-1) + \\beta}.$ Then\n\\begin{equation} \\label{dimrangeff} \n|\\bigcup_{L \\in {\\bf L}}L| \\gtrsim |F|^{d + \\beta}\n\\end{equation}\nwhere the implicit constant may depend on $d$, but is independent\\footnote{The constant is also independent of $\\beta$ and $n$, but this is only of secondary interest.} of $F$.\n\\end{theorem}\n\nReusing the examples above, one sees that \\eqref{dimrangeff} is sharp, up to the loss in the implicit constant, and that there is nothing to be gained by taking $1 < \\beta \\leq 2.$\n\nThe main tool we use in the proof of \\eqref{dimrangeff} is an iterated version of Wolff's hairbrush argument \\cite{wolff95ibk}. For comparison, we state the finite-field version of his result\\footnote{Wolff's main interest in this method was likely its use towards a partial resolution of the Kakeya conjecture (up to a negligible constant, any direction separated collection of lines satisfies the Wolff axiom). To that end, it has been superceded by Dvir's theorem \\cite{dvir09osk} (see also \\cite{ellenberg09ksm}), whose proof makes stronger use of the direction-separation hypotheses and does not seem to be applicable to the present question.} (see \\cite{wolff99rwc},\\cite{mockenhaupt04rkp}), starting with the following definition. 
A set of lines ${\\bf L}$ in $F^n$ satisfies the \\emph{Wolff axiom} if for every two-plane $R \\subset F^n$ \n\\[\n|\\{L \\in {\\bf L} : L \\subset R\\}| < |F|.\n\\]\n\n\\begin{theorem}[Wolff]\nSuppose that $\\alpha \\geq 1$, $F$ is a finite field, and ${\\bf L}$ is a collection of lines in $F^n$ with $|{\\bf L}| \\geq |F|^{\\alpha}.$ If ${\\bf L}$ satisfies the Wolff axiom then\n\\begin{equation} \\label{wolffexponent}\n|\\bigcup_{L \\in {\\bf L}}L| \\gtrsim |F|^{\\frac{\\alpha + 3}{2}}\n\\end{equation}\nwhere the implicit constant is independent of $F$.\n\\end{theorem}\n\nAn immediate consequence of Theorem \\ref{firstlinethm} is that, for odd integers $\\alpha$, Wolff's theorem holds even for collections of lines that do not satisfy the Wolff axiom. \n\nThe proof of Theorem \\ref{firstlinethm} also shows that the Wolff axiom can be relaxed for general values of $\\alpha.$ We say that a set of lines ${\\bf L}$ in $F^n$ satisfies the \\emph{$d$-plane Wolff axiom} if for every $d$-plane $R \\subset F^n$ \n\\begin{equation} \\label{dpwa}\n|\\{L \\in {\\bf L} : L \\subset R\\}| < |F|^{2d-3}.\n\\end{equation}\nIf $R$ is a $d$-plane then there are approximately $|F|^{3(d-2)}$ 2-planes $S$ contained in $R$ and for each line $L \\subset R$ there are approximately $|F|^{d-2}$ 2-planes $S$ with $L \\subset S \\subset R.$ Thus, the $d$-plane Wolff axiom asserts that for every $d$-plane $R$ the standard Wolff axiom holds ``on average'' for two-planes $S \\subset R.$ In particular, the d-plane Wolff axiom is weaker than the standard Wolff axiom when $d > 2$ (assuming one is willing to adjust the axioms by a constant factor, which would make no impact on the validity of the stated theorems). \n\n\\begin{theorem} \\label{wolffrelaxed}\nSuppose that $d > 2$ is an integer, $2(d-1) - 1 < \\alpha < 2(d-1) + 1$, $F$ is a finite field, and ${\\bf L}$ is a collection of lines in $F^n$ with $|{\\bf L}| \\geq |F|^{\\alpha}.$ If ${\\bf L}$ satisfies the $d$-plane Wolff axiom \\eqref{dpwa} then\n\\begin{equation} \\label{wolffexponent2}\n|\\bigcup_{L \\in {\\bf L}}L| \\gtrsim |F|^{\\frac{\\alpha + 3}{2}}\n\\end{equation}\nwhere the implicit constant is independent of $F$.\n\\end{theorem}\n\nBounds of the form \\eqref{wolffexponent} do not seem to be sharp; at least, they can be slightly strengthened in the case when $F = \\BBZ_p$, $\\alpha = 2$, and $n=3$, see \\cite{bourgain04spe}. \n\nSince \\eqref{wolffexponent2} improves on \\eqref{dimrangeff} when $\\beta < 1$, one can use the $d$-plane Wolff axiom to extract structural information about quasi-extremizers (cf. \\cite{christ06qer}) of \\eqref{dimrangeff}.\n\n\\begin{theorem} \\label{qethm}\nSuppose $d \\geq 1$ is an integer, and that $0 \\leq \\beta < 1$. Then, for every $C$ there exist $M$ and $c > 0$ such that if $F$ is a finite field with $|F| \\geq M$ and ${\\bf L}$ is a collection of lines in $F^n$ with $|L| \\geq |F|^{2(d-1) + \\beta}$ satisfying\n\\begin{equation} \\label{qebound}\n|{\\bf P}| \\leq C |F|^{d + \\beta}\n\\end{equation}\nwhere ${\\bf P} := \\bigcup_{L \\in {\\bf L}}L,$\nthen there are $d$-planes $R_1, \\ldots, R_N$ with $N \\geq c |F|^\\beta$ such that \n\\[\n|{\\bf P} \\cap \\bigcup_{j} R_j| \\geq c |{\\bf P}|\n\\]\nand for each $j$\n\\begin{equation} \\label{denseindplane}\n|{\\bf P} \\cap R_j| \\geq c |R_j|.\n\\end{equation}\n\\end{theorem}\n\nOne can also prove a version of the statement above for $-1 < \\beta < 0,$ but we omit the details. 
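For orientation we record the elementary computation behind the claim that \\eqref{wolffexponent2} improves on \\eqref{dimrangeff} when $ \\beta < 1 $: writing $ |{\\bf L}| = |F|^{\\alpha} $ with $ \\alpha = 2(d-1)+\\beta $, the exponent in \\eqref{wolffexponent2} is
\\[
\\frac{\\alpha + 3}{2} = d + \\frac{\\beta+1}{2} \\geq d + \\beta ,
\\]
with strict inequality precisely when $ \\beta < 1 $.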
\n\nBy adding two additional layers of recursion, the method of Theorem \\ref{firstlinethm} can be adapted to treat unions of $k$-planes. \n\n\\begin{theorem} \\label{upthm1}\nSuppose $d \\geq k > 0$ are integers, that $0 \\leq \\beta \\leq 1$ and that ${\\bf L}$ is a collection of $k$-planes in $F^n$ with \n\\begin{equation} \\label{upthm1hyp}\n|{\\bf L}| \\geq |F|^{(k+1)(d-k)+\\beta}.\n\\end{equation}\nThen \n\\begin{equation} \\label{unionplanesbound}\n|\\bigcup_{L \\in {\\bf L}}L| \\gtrsim |F|^{d + \\beta}.\n\\end{equation}\n\\end{theorem}\n\nOur proof requires simultaneous treatment of the following more general result.\n\n\\begin{theorem} \\label{upthm2}\nSuppose $d \\geq k > k' \\geq 0$ are integers, that $0 \\leq \\beta \\leq k' + 1$ and that ${\\bf L}$ is a collection of $k$-planes in $F^n$ with \n\\[\n|{\\bf L}| \\geq |F|^{(k+1)(d-k)+\\beta}.\n\\]\nLetting ${\\bf P}_L = \\{k'\\text{-planes\\ }P : P \\subset L\\}$ we have\n\\begin{equation} \n|\\bigcup_{L \\in {\\bf L}}{\\bf P}_L| \\gtrsim |F|^{(k' + 1)(d - k') + \\beta}.\n\\end{equation}\n\\end{theorem}\n\nTheorems \\ref{upthm1} and \\ref{upthm2} are sharp in the same sense as Theorem \\ref{firstlinethm}. For a $k$-plane analog of Wolff's theorem see \\cite{bueti06ibk}. It may be possible to modify the proof of \\eqref{unionplanesbound} to obtain a $k$-plane analog of Theorem \\ref{wolffrelaxed}, but we do not pursue the details here.\n\nThe outline of this article is as follows: Section \\ref{prsection} contains some technical machinery, Section \\ref{linesection} contains the proofs of Theorems \\ref{firstlinethm}, \\ref{wolffrelaxed}, and \\ref{qethm}, and Section \\ref{planesection} contains the proofs of Theorems \\ref{upthm1} and \\ref{upthm2}.\n\n\n\\section{Preliminaries} \\label{prsection}\n\nWe start by roughly describing the approach of \\cite{wolff95ibk}. If a union of lines is small, then there must exist a ``hairbrush'' of many lines intersecting one common line. The ambient space can then be foliated into two-dimensional planes containing the common line, and a classical bound can be applied to estimate the union of lines contained in each two-plane. \n\nIn the present situation, we instead consider a hairbrush of many lines or $k$-planes intersecting a common $m$-dimensional plane. The following lemma is used to determine the appropriate choice of $m$.\n\n\\begin{lemma} \\label{pslemma}\nLet ${\\bf L}$ be a collection of $k$-planes in $F^n$ and suppose $d$ is a nonnegative integer with $k \\leq d \\leq n$. There is an $m$ with $k \\leq m \\leq d$, a collection of $m$-planes $R_1, \\ldots, R_N$, and collections of $k$-planes ${\\bf L}_{R_1}, \\ldots, {\\bf L}_{R_N}$ such that\n\\begin{enumerate}[(a)]\n\\item the ${\\bf L}_{R_j}$ are pairwise disjoint subsets of ${\\bf L}$ with $L \\subset R_j$ for $L \\in {\\bf L}_{R_j}$;\n\\item if $m > k$ then $|{\\bf L}_{R_j}| \\geq |F|^{(k+1)(m - 1 - k) + k}$;\n\\item if $m=k$ then $|{\\bf L}_{R_j}| = 1$;\n\\item \\label{epcondition} letting ${\\bf L}^m = \\bigcup_{j}{\\bf L}_{R_j}$, we have $|{\\bf L}^m| \\geq 2^{-(d-m+1)}|{\\bf L}|$;\n\\item \\label{genwolffcondition} if $m < m' \\leq d$ then for every $m'$-plane $S$, $|\\{L \\in {\\bf L}^m : L \\subset S\\}| < |F|^{(k+1)(m' - 1 - k) + k} .$\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nSet ${\\bf L}^{*,d+1} := {\\bf L}.$ For $k \\leq m \\leq d$, suppose that ${\\bf L}^{*,m+1} \\subset {\\bf L}$ has been chosen so that $|{\\bf L}^{*,m+1}| \\geq 2^{-(d-m)} |{\\bf L}|$. 
Starting at $j = 1$, suppose $m$-planes $R^m_{j'}$ and collections of $k$-planes ${\\bf L}_{R^m_{j'}}$ have been selected for all integers $0 < j' < j$.\n\nIf there is an $m$-plane $R$ so that \n\\[\n|\\{L \\in {\\bf L}^{*,m+1} \\setminus \\bigcup_{j' < j} {\\bf L}_{R^m_{j'}} : L \\subset R\\}| \\geq |F|^{(k+1)(m-1-k) + k}\n\\]\nthen let $R^m_j$ be such an $m$-plane, let\n\\[\n{\\bf L}_{R^m_j} = \\{L \\in {\\bf L}^{*,m+1} \\setminus \\bigcup_{j' < j} {\\bf L}_{R^m_{j'}} : L \\subset R^m_j\\}\n\\]\nand continue the process with $j+1.$ If there is no such plane then terminate the process and set \n\\[\n{\\bf L}^{*,m} = {\\bf L}^{*,m+1} \\setminus \\bigcup_{j' < j} {\\bf L}_{R^m_{j'}}.\n\\]\nIf $|{\\bf L}^{*,m}| < |{\\bf L}^{*,m+1}|\/2$ then property (\\ref{epcondition}) is satisfied and we terminate the process. Otherwise, continue with $m-1.$\n\nIf the process reaches the stage $m=k,$ then let $R^{k}_1, \\ldots, R^k_N$ be some enumeration of ${\\bf L}^{*,k+1}$ and ${\\bf L}_{R^{k}_j} = \\{R^k_j\\}$ and we are finished.\n\\end{proof}\n\nNext, we describe in detail the foliation of the ambient space.\n\n\\begin{lemma} \\label{folprop}\nSuppose that $S$ is an $m$-plane in $F^n$, that $0 \\leq q \\leq k-1,$ and that $m + k-q \\leq n.$ Then we can find $(m+k-q)$-planes $T_1, \\ldots, T_N$ with $S \\subset T_i$ for all $i$ such that for all $(k-1)$-planes $P$ and $k$-planes $L$ satisfying\n\\begin{enumerate}[(a)]\n\\item $P \\subset L$\n\\item $L \\cap S$ is a $q$-plane\n\\item $P \\cap S$ is a $(q-1)$-plane if $q > 0$ and $P \\cap S = \\emptyset$ if $q = 0$\n\\end{enumerate}\nwe have $L \\subset T_i$ for some $i$ and $P \\not\\subset T_{i'}$ for $i' \\neq i.$\n\\end{lemma}\n\n\\begin{proof}\nTo find the $T_i$, write $S=x + \\spa(e_1, \\ldots, e_m)$ and \n\\[\nF^n = \\spa(e_1, \\ldots, e_m, f_1, \\ldots f_{n-m}).\n\\]\nThen write $T_i = S + V_i$ where, as $i$ varies, $V_i$ ranges over all $(k-q)$-dimensional subspaces of $\\spa(f_1, \\ldots, f_{n-m}).$\n\nFix some $P,L$ satisfying the hypotheses. One can check that there is an $i$ such that $L \\subset T_i.$ For any $i' \\neq i$ we have that $T_i \\cap T_{i'}$ contains $S$ and is, at most, an $(m + k-q - 1)$-plane. \n\nFirst consider the case $q > 0.$ Choose $y \\in P \\cap S$ and write \n\\begin{align*}\nP \\cap S &= y + \\spa(g_1, \\ldots, g_{q-1}),\\\\\nP &= y + \\spa(g_1, \\ldots, g_{q-1}, h_1, \\ldots, h_{k-q}),\\\\\n\\intertext{and}\nS &= y + \\spa(g_1, \\ldots, g_{q-1}, h'_{1}, \\ldots, h'_{m + 1 -q}).\n\\end{align*}\nClearly, \n\\[\nW := \\{g_1, \\ldots, g_{q-1}, h_1, \\ldots, h_{k-q}, h'_{1}, \\ldots, h'_{m + 1 -q } \\}\n\\]\nis linearly independent and, since $S \\subset T_i \\cap T_{i'}$, $P \\subset T_i \\cap T_{i'}$ would imply $y + \\spa(W) \\subset T_i \\cap T_{i'}$ contradicting the dimension estimate on the latter set. \n\nFor the case $q=0$ write \n\\begin{align*}\nP &= z + \\spa(h_1, \\ldots, h_{k-1})\\\\\n\\intertext{and}\nS &= y + \\spa(h'_1, \\ldots, h'_m).\n\\end{align*}\nAgain using the dimension estimate on $T_i \\cap T_{i'}$, we see that if $P \\subset T_{i} \\cap T_{i'}$ then, since $P \\cap S = \\emptyset,$ we have $v \\in \\spa(h_1', \\ldots, h'_m)$ for some $0 \\neq v \\in \\spa(h_1, \\ldots, h_{k-1}).$ But, since $L \\cap S \\neq \\emptyset$, this implies $\\dim(L \\cap S) > 0$, contradicting the assumption that $\\dim(L \\cap S) = 0.$\n\\end{proof}\n\nTo estimate the union of lines or $k$-planes contained in each leaf of the foliation, we will appeal to recursion. 
However, at the root we still use the classical method:\n \n\\begin{lemma} \\label{classmethod}\nSuppose that ${\\bf L}$ is a collection of $m$-planes, ${\\bf P}$ is a collection of $(k-1)$-planes such that for every $L \\in {\\bf L}$\n\\[\n|\\{P \\in {\\bf P} : P \\subset L\\}| \\geq M,\n\\]\nand \n\\begin{equation} \\label{mustpruneeq}\n|{\\bf L}| |F|^{k(m-k)} \\leq M.\n\\end{equation}\nThen\n\\[\n|{\\bf P}| \\gtrsim |{\\bf L}| M.\n\\]\n\\end{lemma}\n\n\\begin{proof}\nFor each $L \\in {\\bf L}$, let ${\\bf P}_L$ be a subset of $\\{P \\in {\\bf P} : P \\subset L\\}$ with $M \\leq {\\bf P}_L \\leq 2M.$ Set\n\\[\n{U} = \\{(P,L,L') : (L,L') \\in {\\bf L}^2, P \\in {\\bf P}_L \\cap {\\bf P}_{L'} \\}.\n\\]\nAn application of Cauchy-Schwarz gives \n\\[\n|{U}| \\geq \\frac{M^2|{\\bf L}|^2}{|{\\bf P}|}.\n\\]\nAny two distinct $m$-planes intersect in, at most, an $(m-1)$-plane. Since, by \\eqref{plcount} below, an $(m-1)$-plane contains $\\lesssim |F|^{k(m-k)}$ $(k-1)$-planes, we have \n\\begin{align*}\n|{U}| &\\leq C |{\\bf L}|^2 |F|^{k(m-k)} + |\\{(P,L) : L \\in {\\bf L}, P \\in {\\bf P}_L\\}| \\\\\n&\\leq C|{\\bf L}|^2|F|^{k(m-k)} + |{\\bf L}| 2 M \\\\\n&\\lesssim |{\\bf L}|M.\n\\end{align*}\n\\end{proof}\n\nWe finish the section with three standard estimates for collections of planes. \n\n\\begin{lemma} \\label{grascount}\nFor integers $0 \\leq k \\leq d$ we have\n\\begin{equation} \\label{sscount}\n|G(d,k)| \\approx |F|^{k(d-k)}\n\\end{equation}\nand\n\\begin{equation} \\label{plcount}\n|G'(d,k)| \\approx |F|^{(k+1)(d-k)}\n\\end{equation}\nwhere $G(d,k)$ is the set of $k$-dimensional subspaces of $F^d$ and $G'(d,k)$ is the set of $k$-planes in $F^d.$ \n\\end{lemma}\n\n\\begin{proof}\nA generic choice of $k$ vectors in $F^d$ is linearly independent, and there are approximately $|F|^{kd}$ such choices. By the same logic, each $k$-plane has approximately $F^{k^2}$ choices of basis, and hence we have \\eqref{sscount}. \n\nGiven a $k$-dimensional subspace $P$ with basis $e_1, \\ldots, e_k$, choose \\\\ $f_1, \\ldots, f_{d-k}$ so that $F^d = \\spa(e_1, \\ldots, e_k, f_1, \\ldots, f_{d-k})$. Then there is a one-to-one correspondence between linear combinations of $f_1, \\ldots, f_{d-k}$ and distinct translates of $P$, giving \\eqref{plcount}.\n\\end{proof}\n\n\n\\begin{lemma} \\label{sccprop}\nSuppose that $S$ is an $l$-plane in $F^m.$ Then for $l < l' \\leq m$\n\\begin{equation} \\label{scontainedcount}\n|\\{P \\subset F^m : P \\text{\\ is a\\ } l'\\text{-plane\\ and\\ } S \\subset P\\}| \\lesssim |F|^{(l' - l)(m - l')}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nWrite $S = x + \\spa(e_1, \\ldots, e_l)$ and \n\\[\nF^m = \\spa(e_1, \\ldots, e_l, f_1, \\ldots, f_{m-l}).\n\\] \nThen there is a one-to-one correspondence between $l'$-planes $P \\supset S$ and $(l'-l)$-dimensional subspaces of $\\spa(f_1, \\ldots, f_{m-l})$, so \\eqref{scontainedcount} follows from \\eqref{sscount}.\n\\end{proof}\n\n\\begin{lemma} \\label{eiprop}\nSuppose $S$ is an $l$-plane in $F^k.$ Then\n\\[\n|\\{P \\subset F^k : P \\text{\\ is a\\ } (k-1)\\text{-plane\\ and\\ } P \\cap S = \\emptyset\\}| \\lesssim |F|^{k-l}.\n\\]\n\\end{lemma}\n\n\\begin{proof}\nWrite $S = x + \\spa(e_1, \\ldots, e_l)$, and fix $P = y + \\spa(f_1, \\ldots, f_{k-1}).$ If $P \\cap S = \\emptyset$, we must have $e_j \\in \\spa(f_1, \\ldots, f_{k-1})$ for $j = 1, \\ldots, l.$ Thus, $P$ is a translate of a $(k-1)$-plane containing $S$. 
Since, by Lemma \\ref{sccprop}, there are $\\lesssim |F|^{k-l-1}$ $(k-1)$-planes containing $S$, we have at most $|F|^{k-l}$ possible planes $P$. \n\\end{proof}\n\n\n\\section{Unions of lines} \\label{linesection}\nTheorems \\ref{firstlinethm} and \\ref{wolffrelaxed} follow immediately from:\n\\begin{proposition} \\label{linetheorem}\nSuppose $d \\geq 1$ is an integer, that $0 < \\gamma,\\lambda \\leq 1$, that $\\max(1 - d,-1) \\leq \\beta \\leq 1,$ that ${\\bf L}$ is a collection of lines in $F^n$ with \n\\[\n|{\\bf L}| \\geq \\gamma |F|^{2(d-1)+\\beta},\n\\]\nand that ${\\bf P}$ is a collection of points in $F^n$ satisfying \n\\[\n|\\{P \\in {\\bf P} : P \\in L\\}| \\geq \\lambda |F|\n\\]\nfor every $L \\in {\\bf L}.$ Then\n\\begin{equation} \\label{nondpwaresult}\n|{\\bf P}| \\gtrsim |F|^{d +\\max(0,\\beta)}.\n\\end{equation}\nwhere the implicit constant may depend on $d,\\gamma,\\lambda.$ Furthermore, if $d \\geq 2$ and ${\\bf L}$ satisfies the $d$-plane Wolff axiom \\eqref{dpwa} then we have\n\\begin{equation} \\label{dpwaresult}\n|{\\bf P}| \\gtrsim |F|^{d + \\frac{\\beta + 1}{2}}.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}[Proof of Theorem \\ref{qethm} assuming Proposition \\ref{linetheorem}]\nSuppose that ${\\bf L}$ satisfies \\eqref{qebound} and that $|F| \\geq M$ where $M$ is large and to be determined later.\n\n For $d$-planes $R$ let ${\\bf L}_R = \\{L \\in {\\bf L} : L \\subset R\\}$. Choose $d$-planes $R_1, \\ldots, R_N$ so that for each $j$, $|{\\bf L}_{R_j}| \\geq |F|^{2d-3}$ and such that for $R \\not\\in \\{R_j\\}_{j=1}^N$ we have $|{\\bf L}_{R}| < |F|^{2d-3}. $\n\nSetting ${\\bf L}' = \\bigcup_{j}{\\bf L}_{R_j}$, ${\\bf L}'' = {\\bf L} \\setminus {\\bf L}'$ and ${\\bf P}'' = \\bigcup_{L \\in {\\bf L}''}L$, we must have $|{\\bf L}''| < \\frac{1}{2}|{\\bf L}'|$ or \nelse we would have\n\\[\n|{\\bf P}| \\geq |{\\bf P}''| \\geq c' |F|^{d + \\frac{\\beta + 1}{2}} > C |F|^{d+\\beta}\n\\]\nwhere $c'$ is the implicit constant from \\eqref{dpwaresult} and $M$ is chosen large enough to overwhelm $\\frac{C}{c'}$ (In the second inequality above we have used the fact that ${\\bf L}''$ satisfies \\eqref{dpwa} to obtain \\eqref{dpwaresult} from Proposition \\ref{linetheorem}.) \n\nThus, $|{\\bf L}'| \\geq \\frac{1}{2}|{\\bf L}|$ and, by \\eqref{plcount} with $k=1$, $N \\geq c |F|^{\\beta}.$ Letting ${\\bf P}' = \\bigcup_{L \\in {\\bf L}'}L$ we have\n\\[\n|{\\bf P}'| \\geq c'' |F|^{d + \\beta} \\geq c C|F|^{d + \\beta} \\geq c |{\\bf P}|\n\\]\nwhere $c''$ is the implicit constant from \\eqref{nondpwaresult}, $c$ is chosen small enough to underwhelm $\\frac{c''}{C},$ and the last inequality follows from \\eqref{qebound}.\n\nA final application of $\\eqref{nondpwaresult}$ with $d' = d-1$ and $\\beta' = 1$ then gives \\eqref{denseindplane} since $|{\\bf L}_{R_j}| \\geq |F|^{2(d-1-1)+1}.$\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{linetheorem}]\nWe use induction on $d$. 
When $d=1$ the result follows from an application of Lemma \\ref{classmethod} (with $m=1$, $k=1$, and ${\\bf L}$ replaced by a subset of itself with cardinality $\\approx \\min(\\gamma,\\lambda) |F|^\\beta$), so we will prove it for $d > 1$, working under the assumption that it has already been proven for $1 \\leq d' < d.$\n\nAfter possibly deleting lines, we may assume\n\\begin{equation} \\label{refinedkk1}\n|{\\bf L}| < 2\\gamma |F|^{2(d-1)+\\beta}.\n\\end{equation}\nApplying Lemma \\ref{pslemma} to ${\\bf L}$ we obtain $m$-planes $R_1, \\ldots, R_N.$ Note that if ${\\bf L}$ satisfies the $d$-plane Wolff axiom \\eqref{dpwa} then we must have $m < d.$ \n\n\\ \n\n\\noindent{\\bf Case 1, $(m=d)$:}\\\\ \nLet ${\\bf P}_{R_j} = \\{P \\in {\\bf P} : P \\in R_j\\}.$ Applying the case $d' = d-1$ of the proposition to the ${\\bf L}_{R_j}$, we deduce that $|{\\bf P}_{R_j}| \\gtrsim |F|^{d}$ for each $j$. Since, by \\eqref{plcount}, $|{\\bf L}_{R_j}| \\lesssim |F|^{2(d-1)}$, we have $N \\gtrsim |F|^{\\max(0,\\beta)}.$ Then using Lemma \\ref{classmethod} (possibly applying it to a subset of $\\{R_j\\}_{j=1}^N$ in order to satisfy \\eqref{mustpruneeq}) to estimate $|\\bigcup_{j}{\\bf P}_{R_j}|$ shows that \n\\[\n|{\\bf P}| \\gtrsim |F|^{d + \\max(0,\\beta)}\n\\]\nas desired.\n \n\\ \n\n\\noindent{\\bf Case 2, $(m < d)$:}\\\\\nWe construct sets of lines and points using a standard ``popularity'' argument. Fix some large $C$ to be determined later and let \n\\[\n{\\bf P}^{\\sharp} = \\{P \\in {\\bf P} : |\\{L \\in {\\bf L} : P \\in L\\}| \\geq C |F|^{d-1 - \\frac{1-\\beta}{2}}\\}\n\\]\nand\n\\[\n{\\bf L}^{\\sharp} = \\{L \\in {\\bf L} : |\\{P \\in {\\bf P}^{\\sharp} : P \\in L\\}| \\geq \\frac{1}{4} \\lambda |F|\\}.\n\\]\nLetting ${\\bf L}^m$ be as in Lemma \\ref{pslemma}, we then have either \n\\begin{align} \\label{firstpossibilityk1}\n|{\\bf P}| &\\geq \\frac{1}{2}\\lambda |F| |{\\bf L}^m|\/(C|F|^{(d-1) - \\frac{1-\\beta}{2}})\n\\end{align}\n(in which case we are finished since the right hand side above is $\\gtrsim |F|^{d + \\frac{\\beta+1}{2}}$) or\n\\begin{equation} \\label{secondpossibilityk1}\n|{\\bf L}^{\\sharp}| \\geq \\frac{1}{8} |{\\bf L}^m|.\n\\end{equation}\n\nIndeed, suppose that \\eqref{firstpossibilityk1} does not hold. 
Set\n\\[\nI = \\{(P,L) : P \\in {\\bf P}_L, L \\in {\\bf L}^{m}\\}\n\\]\nwhere, for each $L$, ${\\bf P}_L$ is a subset of ${\\bf P} \\cap L$ with $\\lambda |F| \\leq |{\\bf P}_L| < 2 \\lambda |F|.$ \nThen letting \n\\[\nI' = \\{(P,L) : P \\in {\\bf P}_L \\setminus {\\bf P}^{\\sharp} , L \\in {\\bf L}^{m}\\}\n\\]\nwe have\n\\[\n|I'| < C |F|^{d-1 - \\frac{1-\\beta}{2}} |{\\bf P}| < \\frac{1}{2} |I|\n\\]\nand so\n\\[\n|\\{(P,L) : P \\in {\\bf P}_L \\cap {\\bf P}^{\\sharp}, L \\in {\\bf L}^{m}\\}| \\geq \\frac{1}{2} \\lambda |F| |{\\bf L}^{m}|\n\\]\ngiving \n\\[\n|\\{(P,L) : P \\in {\\bf P}_L \\cap {\\bf P}^{\\sharp}, L \\in {\\bf L}^{\\sharp}\\}| \\geq \\frac{1}{4} \\lambda |F||{\\bf L}^{m}|\n\\]\nthus leading (by the upper bound on $|{\\bf P}_L|$) to \\eqref{secondpossibilityk1} as claimed.\n\nLet ${\\bf L}^{\\sharp}_{R_j} = {\\bf L}_{R_j} \\cap {\\bf L}^{\\sharp}$, ${\\bf P}^{\\sharp}_{R_j} = R_j \\cap {\\bf P}^{\\sharp}$, \n\\[\n{\\bf L}'_{R_j} = \\{L \\in {\\bf L}^m : |L \\cap R_j| = 1\\},\n\\]\nand\n\\[\n{\\bf P}'_{R_j} = {\\bf P} \\setminus R_j.\n\\]\nFix $j$ so that $|{\\bf L}^{\\sharp}_{R_j}| \\gtrsim |{\\bf L}_{R_j}|,$ and recall $|{\\bf L}_{R_j}| \\geq |F|^{2m-3}$ by Lemma \\ref{pslemma}.\nApplying the previously known case $d' = m-1$ of the proposition (or using the trivial estimate if $m=1$) we have\n\\begin{equation} \\label{lequalszeroPk1}\n|{\\bf P}^{\\sharp}_{R_j}| \\gtrsim |F|^{m}.\n\\end{equation}\n\nFor each point $P \\in {\\bf P}^{\\sharp}_{R_j}$ there are $\\geq C |F|^{d-1 - \\frac{1-\\beta}{2}}$ lines from ${\\bf L}^{m}$ intersecting $P$. Since, by Lemma \\ref{sccprop}, $\\lesssim |F|^{m-1}$ of these lines are contained in $R_j$, we have\n\\[\n|\\{L \\in {\\bf L}'_{R_j} : P \\in L\\}| \\geq \\frac{1}{2} |F|^{d - 1 - \\frac{1-\\beta}{2}}\n\\]\nprovided that $C$ is chosen sufficiently large. \nThus, \n\\begin{align*}\n|{\\bf L}'_{R_j}| &\\gtrsim |{\\bf P}^{\\sharp}_{R_j}| |F|^{d - 1 - \\frac{1 - \\beta}{2}} \\\\\n&\\gtrsim |F|^{m + d-1 - \\frac{1-\\beta}{2}}.\n\\end{align*}\nNote\\footnote{We may assume throughout that $|F|$ is sufficiently large relative to certain parameters (for instance $\\lambda$) since the implicit constants may be chosen so that the conclusion holds trivially for small $|F|$.} that for each $L \\in {\\bf L}'_{R_j}$, $|L \\cap {\\bf P}'_{R_j}| \\geq \\lambda |F| - 1 \\gtrsim |F|.$\n\n\nApplying Lemma \\ref{folprop}, we write $F^n$ as the union of $(m + 1)$-planes $T_i$ containing $R_j$. 
Let\n\\begin{align*}\n{\\bf L}_i &= \\{L \\in {\\bf L}'_{R_j} : L \\subset T_i\\} \\\\ \n{\\bf P}_i &= \\{P \\in {\\bf P}'_{R_j} : P \\in T_i\\}.\n\\end{align*}\nThen \n\\begin{align*}\n|{\\bf P}| &\\geq \\sum_i |{\\bf P}_i| \\\\ \n&\\gtrsim \\sum_i |{\\bf L}_i|\/|F|^{m-2} \\\\ \n&\\geq |{\\bf L}'_{R_j}|\/|F|^{m - 2}\\\\ \n&\\gtrsim |F|^{d +\\frac{\\beta+1}{2}}\n\\end{align*}\nwhere, for the second inequality, we used the fact (which follows from Lemma \\ref{pslemma}) that $|{\\bf L}_i| < |F|^{2(m -1) + 1} $ to see that $|{\\bf L}_i| = |F|^{2(d' - 1) + \\beta'}$ for some $d' \\leq m$ and so we can estimate $|{\\bf P}_i|$ using the previously known case $d'$ of \\eqref{nondpwaresult}.\n\\end{proof}\n\n\\section{Unions of Planes} \\label{planesection}\nTheorem \\ref{upthm2} is obtained by induction from the hyperplane case:\n\\begin{proposition} \\label{maintheoremcd1}\nSuppose $d,k > 0$ are integers, that $0 < \\gamma,\\lambda \\leq 1$, that $d \\geq k$, that $0 \\leq \\beta \\leq k$, that ${\\bf L}$ is a collection of $k$-planes in $F^n$ with \n\\[\n|{\\bf L}| \\geq \\gamma |F|^{(k+1)(d-k)+\\beta},\n\\]\nand that ${\\bf P}$ is a collection of $(k-1)$-planes in $F^n$ satisfying \n\\[\n|\\{P \\in {\\bf P} : P \\subset L\\}| \\geq \\lambda |F|^{k}\n\\]\n for every $L \\in {\\bf L}.$ Then\n\\[\n|{\\bf P}| \\gtrsim |F|^{k(d - k + 1)+\\beta}\n\\]\nwhere the implicit constant may depend on $d,\\gamma,\\lambda, k.$\n\\end{proposition}\n\n\\begin{proof}[Proof of Theorem \\ref{upthm2} assuming Proposition \\ref{maintheoremcd1}]\nTheorem \\ref{upthm2} follows directly from Proposition \\ref{maintheoremcd1} when $k-k' = 1$. Fix $k_0 \\geq 1$ and assume that the theorem holds for all $k-k' = k_0,$ and fix some $k,k'$ with $k-k' = k_0 + 1.$ \n\nFor $L \\in {\\bf L}$ satisfying \\eqref{upthm1hyp} let \n\\[\n{\\bf P}'_L = \\{(k'+1)\\text{-planes\\ }P' : P' \\subset L\\}\n\\] \nand for any $(k'+1)$-plane $P'$ let \n\\[\n{\\bf P}''_{P'} = \\{k'\\text{-planes\\ }P : P \\subset P'\\}.\n\\]\nApplying the previously known case of the theorem, we have\n\\[\n|\\bigcup_{L \\in {\\bf L}}{\\bf P}'_L| \\gtrsim |F|^{(k' + 2)(d - (k' + 1)) + \\beta}\n\\]\nand thus a second application of Proposition \\ref{maintheoremcd1} gives\n\\[\n|\\bigcup_{L \\in {\\bf L}}{\\bf P}_L| = |\\bigcup_{L} \\bigcup_{P' \\in {\\bf P}'_L} {\\bf P}''_{P'}| \\gtrsim |F|^{(k' + 1)(d - k') + \\beta}.\n\\]\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{maintheoremcd1}]\nWe use induction on $d$. When $d=k$ the result follows from an application of Lemma \\ref{classmethod} (with $m=k$). So we will prove it for $d > k$, working under the assumption that it has already been proven for $k \\leq d' < d.$\n\nAfter possibly deleting planes, we may assume\n\\begin{equation} \\label{refinedk}\n|{\\bf L}| < 2\\gamma |F|^{(k+1)(d-k)+\\beta}.\n\\end{equation}\nApplying Lemma \\ref{pslemma} to ${\\bf L}$ we obtain $m$-planes $R_1, \\ldots, R_N.$\n\n\\ \n\n\\noindent{\\bf Case 1, $(m=d)$:}\\\\ \nLet ${\\bf P}_{R_j} = \\{P \\in {\\bf P} : P \\subset R_j\\}.$ Applying the case $d' = d-1$ of the theorem to the ${\\bf L}_{R_j}$, we deduce that $|{\\bf P}_{R_j}| \\gtrsim |F|^{k(d-k+1)}$ for each $j$. 
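To make the arithmetic in this step explicit: by Lemma \\ref{pslemma} we have\n\\[\n|{\\bf L}_{R_j}| \\geq |F|^{(k+1)(d-1-k)+k},\n\\]\nwhich (since $\\gamma \\leq 1$) is the hypothesis of the proposition for $d' = d-1$ and $\\beta' = k$; moreover every $P \\in {\\bf P}$ with $P \\subset L$ for some $L \\in {\\bf L}_{R_j}$ lies in ${\\bf P}_{R_j}$, so the conclusion applies and gives\n\\[\n|{\\bf P}_{R_j}| \\gtrsim |F|^{k(d' - k + 1) + \\beta'} = |F|^{k(d-k)+k} = |F|^{k(d-k+1)}.\n\\]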
Since, by \\eqref{plcount}, $|{\\bf L}_{R_j}| \\lesssim |F|^{(k+1)(d-k)}$, we have $N \\gtrsim |F|^{\\beta}.$ Using Lemma \\ref{classmethod} to estimate $|\\bigcup_{j}{\\bf P}_{R_j}|,$ we conclude\n\\[\n|{\\bf P}| \\gtrsim |F|^{k(d-k+1)+\\beta}\n\\]\nas desired.\n \n\\ \n\n\\noindent{\\bf Case 2, $(m < d)$:}\\\\\nWe construct sets of $k$-planes and $(k-1)$-planes using a standard ``iterated-popularity'' argument. Fix some large $C$ to be determined later. Let ${\\bf L}^{\\sharp,0} = {\\bf L}^m$, ${\\bf P}^{\\sharp,0} = {\\bf P}$ and for $1 \\leq q \\leq k$ let\n\\[\n{\\bf P}^{\\sharp,q} = \\{P \\in {\\bf P}^{\\sharp,q-1} : |L \\in {\\bf L}^{\\sharp, q-1} : P \\subset L| \\geq C|F|^{d-k}\\}\n\\]\nand\n\\[\n{\\bf L}^{\\sharp,q} = \\{L \\in {\\bf L}^{\\sharp,q-1} : |P \\in {\\bf P}^{\\sharp,q} : P \\subset L| \\geq 2^{-2q} \\lambda |F|^k\\}.\n\\]\nThen either \n\\begin{align} \\label{firstpossibility}\n|{\\bf P}| &\\geq 2^{-5k}\\lambda |F|^k |{\\bf L}^m|\/(C|F|^{d-k})\n\\end{align}\n(in which case we are finished since the right hand side above is $\\gtrsim |F|^{k(d-k+1) + \\beta}$) or for each $q \\leq k$\n\\begin{equation} \\label{secondpossibility}\n|{\\bf L}^{\\sharp,q}| \\geq 2^{-3q} |{\\bf L}^m|.\n\\end{equation}\n\nIndeed, suppose that \\eqref{firstpossibility} does not hold and that \\eqref{secondpossibility} holds for $0 \\leq q \\leq q_0 < k$. Set \n\\[\nI = \\{(P,L) : P \\in {\\bf P}_L, L \\in {\\bf L}^{\\sharp,q_0}\\}\n\\]\nwhere, for each $L$, ${\\bf P}_L$ is a subset of ${\\bf P}^{\\sharp,q_0}$ with $P \\subset L$ for every $P \\in {\\bf P}_L$ and $2^{-2q_0} \\lambda |F|^k \\leq |{\\bf P}_L| < 2^{-(2q_0-1)} \\lambda |F|^k.$ Note\n\\[\n|I| \\geq 2^{-2q_0} \\lambda |F|^k |{\\bf L}^{\\sharp,q_0}| \\geq 2^{-5q_0} \\lambda |F|^k |{\\bf L}^m|.\n\\]\nThen letting \n\\[\nI' = \\{(P,L) : P \\in {\\bf P}_L \\setminus {\\bf P}^{\\sharp,q_0 + 1} , L \\in {\\bf L}^{\\sharp,q_0}\\}\n\\]\nwe have\n\\[\n|I'| < C|F|^{d-k} |{\\bf P}| \\leq 2^{-5(k - q_0)} |I| \\leq \\frac{1}{2} |I|\n\\]\nand so\n\\[\n|\\{(P,L) : P \\in {\\bf P}_L \\cap {\\bf P}^{\\sharp,q_0 + 1}, L \\in {\\bf L}^{\\sharp,q_0}\\}| \\geq \\frac{1}{2} 2^{-2q_0} \\lambda |F|^k |{\\bf L}^{\\sharp,q_0}|.\n\\]\nThis gives\n\\[\n|\\{(P,L) : P \\in {\\bf P}_L \\cap {\\bf P}^{\\sharp,q_0 + 1}, L \\in {\\bf L}^{\\sharp,q_0+1}\\}| \\geq \\frac{1}{4} 2^{-2q_0} \\lambda |F|^k |{\\bf L}^{\\sharp,q_0}|\n\\]\nthus leading (by the upper bound on $|{\\bf P}_L|$) to \n\\[\n|{\\bf L}^{\\sharp,q_0 + 1}| \\geq \\frac{1}{8} |{\\bf L}^{\\sharp,q_0}|\n\\]\nas claimed.\n\nFor each $1 < q \\leq k$ let \n\\begin{align*}\n{\\bf L}^{\\sharp,q}_{R_j} &= \\{L \\in {\\bf L}^{\\sharp,q} : L \\cap R_j \\text{\\ is a }q\\text{-plane}\\} \\\\\n{\\bf P}^{\\sharp,q}_{R_j} &= \\{P \\in {\\bf P}^{\\sharp,q} : P \\cap R_j \\text{\\ is a }(q-1)\\text{-plane}\\}.\n\\end{align*}\nDefine ${\\bf L}^{\\sharp,0}_{R_j}$ as above and let ${\\bf P}^{\\sharp,0}_{R_j} = \\{P \\in {\\bf P}^{\\sharp,0} : P \\cap R_j = \\emptyset\\}.$\n\nLetting ${\\bf L}_{R_j}$ be as in Lemma 2.1, fix $j$ so that $|{\\bf L}^{\\sharp,k}_{R_j}| \\gtrsim |{\\bf L}_{R_j}|.$\nApplying the previously known case $d' = m-1$ of the theorem (or using the trivial estimate if $m=k$) we have\n\\begin{equation} \\label{lequalszeroP}\n|{\\bf P}^{\\sharp,k}_{R_j}| \\gtrsim |F|^{k(m-k+1)}.\n\\end{equation}\n\nSuppose for some $1 \\leq q \\leq k$ that \n\\begin{equation} \\label{equationA}\n|{\\bf P}^{\\sharp,q}_{R_j}| \\gtrsim |F|^{\\omega}.\n\\end{equation}\nOur immediate goal is to see that under certain conditions, 
\\eqref{equationA} implies \\eqref{equationB} and \\eqref{equationstar} below. For each $P \\in {\\bf P}^{\\sharp,q}_{R_j}$ there are $\\geq C |F|^{d-k}$ $k$-planes from ${\\bf L}^{\\sharp,q-1}$ containing $P$. Since, by Lemma \\ref{sccprop}, $\\lesssim |F|^{m-q}$ of these $k$-planes intersect $R_j$ in $q$-planes (and there is no possibility that for a $k$-plane $L \\supset P$ we have $\\dim(L \\cap R_j) > q$, since $\\dim(L) = \\dim(P) + 1$), we have\n\\[\n|\\{L \\in {\\bf L}^{\\sharp,q-1}_{R_j} : P \\subset L\\}| \\geq \\frac{1}{2} C |F|^{d - k}\n\\]\nprovided that $C$ is chosen sufficiently large and \n\\begin{equation} \\label{firstiterationrequirement}\nm + k-q \\leq d . \n\\end{equation}\nThus, using the fact (which follows from Lemma \\ref{sccprop}) that for each $(q-1)$-plane $S \\subset L$ there are at most $|F|^{k-q}$ $(k-1)$-planes $P$ with $S \\subset P \\subset L$, we have\n\\begin{equation} \\label{equationB}\n|{\\bf L}^{\\sharp,q-1}_{R_j}| \\gtrsim C |F|^{\\omega + (d-k) - (k-q)}\n\\end{equation}\nassuming \\eqref{firstiterationrequirement}. \n\nIt follows from the definition of ${\\bf L}^{\\sharp,q-1}$ that for each $L \\in {\\bf L}^{\\sharp,q-1}_{R_j}$\n\\[\n|\\{P \\in {\\bf P}^{\\sharp,q-1}: P \\subset L\\}| \\gtrsim |F|^k.\n\\]\nThus, using Lemma \\ref{sccprop} to obtain\n\\[\n|\\{P \\subset L : (L \\cap R_j) \\subset P\\}| \\lesssim |F|^{k-q} \\ll |F|^k\n\\]\nand (for $q > 1$) Lemma \\ref{eiprop} to obtain\n\\begin{equation} \\label{eipropapp}\n|\\{P \\subset L : (L \\cap R_j) \\cap P = \\emptyset\\}| \\lesssim |F|^{k-q+1} \\ll |F|^k\n\\end{equation}\nit follows that\n\\begin{equation} \\label{rieqn}\n|\\{P \\in {\\bf P}^{\\sharp,q-1}_{R_j}: P \\subset L\\}| \\gtrsim |F|^k.\n\\end{equation}\nEstimate \\eqref{rieqn} also holds when $q=1$, since the special definition of ${\\bf P}^{\\sharp,0}_{R_j}$ means that we do not need to use \\eqref{eipropapp}.\n\nApplying Lemma \\ref{folprop}\ngives $(m + k -q + 1)$-planes $T_i$ containing $R_j$. 
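(Here the lemma is applied with $S = R_j$ and with its parameter $q$ replaced by $q-1$, consistent with the definitions of ${\\bf L}^{\\sharp,q-1}_{R_j}$ and ${\\bf P}^{\\sharp,q-1}_{R_j}$ above, so that the leaves have dimension $m + k - (q-1) = m + k - q + 1$.) 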
Let\n\\begin{align*}\n{\\bf L}_i &= \\{L \\in {\\bf L}^{\\sharp,q-1}_{R_j} : L \\subset T_i\\} \\\\ \n{\\bf P}_i &= \\bigcup_{L \\in {\\bf L}_i}\\{P \\in {\\bf P}^{\\sharp,q-1}_{R_j}: P \\subset L\\}.\n\\end{align*}\nThen assuming \n\\begin{equation} \\label{seconditerationrequirement}\nm + k -q + 1\\leq d\n\\end{equation}\nwe have\n\\begin{align}\n\\nonumber |{\\bf P}^{\\sharp,q-1}_{R_j}| &\\geq \\sum_i |{\\bf P}_i| \\\\ \n\\nonumber&\\gtrsim \\sum_i |{\\bf L}_i|\/|F|^{m-q-k} \\\\ \n\\nonumber&\\geq |{\\bf L}^{\\sharp,q-1}_{R_j}|\/|F|^{m-q - k}\\\\ \n\\label{equationstar}&\\gtrsim C |F|^{\\omega + d - (m+k-q) + q}\n\\end{align}\nwhere, for the second inequality, we used the fact (which follows from Lemma \\ref{pslemma}) that $|{\\bf L}_i| < |F|^{(k+1)(m -q) + k} $ to see that for some $d' \\leq m+k-q$ we can estimate $|{\\bf P}_i|$ using the previously known case $d'$ of the theorem.\n\nStarting with \\eqref{lequalszeroP} and iterating the fact that \\eqref{equationA} implies \\eqref{equationB} and \\eqref{equationstar}, we obtain for $0 \\leq q \\leq k$\n\\[\n|{\\bf P}^{\\sharp,q}_{R_j}| \\gtrsim C^{k-q} |F|^{k + q(m-k) + (k-q)d - (k-q)(k-q-1)}\n\\]\nif $m + k-q \\leq d$ and\n\\[\n|{\\bf L}^{\\sharp,q}_{R_j}| \\gtrsim C^{k-q} |F|^{k + (q+1)(m-k) + (k-q-1)d - (k-q-1)(k-q-2) + (d - k) - (k-q-1)} \n\\]\nif $m + k-q - 1 \\leq d.$\n\nSo if $m+k \\leq d$ we have\n\\[\n|{\\bf P}| \\geq |{\\bf P}^{\\sharp,0}_{R_j}| \\gtrsim |F|^{k(d - k + 1) + k}\n\\]\nas desired, and otherwise we have\n\\begin{equation} \\label{toomanykplanes}\n|{\\bf L}| \\geq |{\\bf L}^{\\sharp,k-(d + 1 - m)}_{R_j}| \\gtrsim C^{d+1-m} |F|^{(k+1)(d-k) + k}.\n\\end{equation}\nChoosing $C$ sufficiently large depending on the relevant implicit constants (none of which depended on $C$), we see that \\eqref{toomanykplanes} contradicts the assumption \\eqref{refinedk} and so we must have \\eqref{firstpossibility}.\n\\end{proof}\n\n\n\n\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\noindent\nExplosive nuclear burning in astrophysical environments produces\nunstable nuclei which again can be targets for subsequent reactions.\nMost of these nuclei are not accessible in terrestrial laboratories or not fully\nexplored by experiments, yet.\n\nApproximately half of all stable nuclei observed in nature in the heavy\nelement region $A>60$ were produced in the so-called r-process (i.e.,\nrapid neutron capture process), which is believed to occur\nin type II supernova explosions (see e.g., \\cite{cowan,ohurev}).\nAn environment with a high neutron density is the prerequisite for such an\nr-process, in which heavier elements are built up from seed elements by\nconsecutive neutron captures and $\\beta$-decays. Because of the abundant\nneutrons, a multitude of neutron captures ($\\simeq 15-35$)\nmay occur until the $\\beta$-decay\nhalf-life becomes shorter than the half-life against neutron capture. Thus,\nthe r-process path along which reactions take place, is pushed off the\nregion of stability towards neutron-rich unstable nuclei. The location of\nthe path has consequences for the resulting nuclear abundances, calculated in\nastrophysical models~\\cite{friedel,chen96}.\n\nFor most of the required neutron capture cross sections the statistical\nmodel (compound nucleus (CN) mechanism, Hauser-Feshbach approach)\ncan be applied. 
This\nmodel employs a statistical average over resonances, for which one has to\nknow level densities but not necessarily exact excitation energies and level\nspin assignments. However, the criterion for the applicability of that model\nis a sufficiently high level density. Especially for some light nuclei it\nhas been known for years that the statistical model cannot be applied and that\nthe direct capture (DC) mechanism dominates the cross sections. Nevertheless,\nit has only been realized recently that also for intermediate and heavy \nnuclei the\ndirect mechanism can become important near shell closures and for neutron-rich\nisotopes when the level density becomes too low for the CN\nmechanism. When approaching the drip-line, neutron separation energies\ndecrease and the nuclei become less deformed, both leading to a smaller\nlevel density at the relevant projectile energy. This relevant energy is\ndetermined by the peak\n$E=kT$ of the Maxwell-Boltzmann velocity distribution of the neutron gas.\nIf a segment of the r-process path at a given element lies close enough\nto the drip-line, the statistical model will not be applicable anymore and\nthe DC reactions will dominate~\\cite{mat83,gor97}.\n\nThe relation between DC and CN mechanisms has already been studied\nfor neutron capture by light\nand intermediate target nuclei~\\cite{ohurev,bal,gru,mei,meis,bee,kra}.\nInvestigations of the dependence of the level density on charge and mass number\nand a discussion of the applicability of the statistical model have been\ngiven elsewhere~\\cite{tfkl1}. In this paper we want to investigate\ndirect neutron capture on neutron-rich Sn and Pb isotopes with the emphasis\non discussing the difficulties, the level of reliability as well as\nthe predictive power of\ntheoretical calculations.\n\nThe main problem for the DC predictions is that neutron separation\nenergies and level properties (excitation energies, spins, parities)\nhave to be\nknown accurately, contrary to a statistical calculation in which it is\nsufficient to know the level density. As\nin the foreseeable future one can not expect\nany experimental\ninformation for the majority of nuclei close to the drip-line,\none has to turn to theory for providing the input for the DC\ncalculations. At the moment, there are several microscopic and\nmacroscopic-microscopic descriptions competing in the quest for predicting\nnuclear properties far off stability. For the first time, in this work we\nwant to investigate the difference in the level structure between several\nmodels and its impact on predicted neutron capture cross sections. The\ncompared models are a Hartree-Fock-Bogoliubov (HFB) model with the Skyrme SkP\nforce~\\cite{doba1,doba}, a relativistic mean field theory (RMFT) with the \nparameter set NLSH~\\cite{sharma1,sharma2}, and the macroscopic-microscopic\nfinite-range droplet model FRDM (1992) which was also used in\ncalculations of nuclear ground-state masses and \ndeformations~\\cite{moeller,moell2} and in calculations of quantities of\nastrophysical interest~\\cite{moellkra}.\n\nIn Section~\\ref{secDC} we very briefly introduce the method of the DC\ncalculation and Section~\\ref{secMic} gives an overview of the utilized\nmicroscopic models. For $^{208}$Pb, the DC results can directly be compared to\nexperimental values. This is described in Section~\\ref{secExp}.\nIn the following Sections~\\ref{secPb} and ~\\ref{secSn} we present our results\nfor the heavy Pb and Sn isotopes. 
Possible astrophysical signatures and\nremaining uncertainties are discussed in Section~\\ref{SecDis}.\nThe paper is concluded by the summary section~\\ref{summary}.\n\n\n\n\\section{Direct Capture and Folding Procedure}\n\\label{secDC}\n\\noindent\nThe theoretical cross section $\\sigma^{\\mathrm{th}}$ is derived from the\nDC cross section $\\sigma^{\\mathrm{DC}}$ given by~\\cite{kra,kim}\n\\begin{equation}\n\\sigma^{\\mathrm{th}}=\\sum_{i}C_i^2S_i\\sigma^{\\mathrm{DC}}_i \\quad .\n\\end{equation}\nThe sum extends over all possible final states (ground state and\nexcited states) in the residual nucleus. The isospin Clebsch-Gordan\ncoefficients and spectroscopic factors are denoted by $C_i$ and $S_i$,\nrespectively. The DC cross sections $\\sigma^{\\mathrm{DC}}_i$ are\nessentially determined by the overlap of the scattering wave function in\nthe entrance channel, the bound-state wave function in the exit channel,\nand the multipole transition operator. For the computation of the DC\ncross section we used the direct capture code TEDCA~\\cite{TEDCA}, which\nincludes E1, M1 and E2 transitions.\n\nFor determining the nucleon-nucleus potential the folding procedure was\nemployed, a method already successfully applied in the description of\nmany systems. In this approach the nuclear target\ndensity $\\rho_{\\mathrm{T}}$ is folded\nwith an energy and density dependent nucleon-nucleon interaction\n$v_{\\mathrm{eff}}$~\\cite{kob}:\n\\begin{equation}\nV(R)=\\lambda V_{\\mathrm{F}}(R)=\\lambda \\int \\rho_{\\mathrm{T}}\n(\\bbox{r})v_{\\mathrm{eff}}(E,\\rho_{\\mathrm{T}},\n\\vert\\bbox{R} - \\bbox{r} \\vert )d\\bbox{r} \\quad ,\n\\end{equation}\nwith $\\bbox{R}$ being the separation of the centers of mass of the two\ncolliding nuclei. The normalization factor $\\lambda$ accounts for\neffects of antisymmetrization and is close to unity. The nuclear density\n$\\rho_{\\mathrm{T}}$ can be derived from experimental charge\ndistributions or from theory. The potential\nobtained in this way ensures the correct behavior of the wave functions\nin the nuclear exterior. At the low energies considered in astrophysical\nevents the imaginary parts of the optical potentials are small.\n\nIn connection with the results presented below it is useful to recapitulate\nthe sensitivity of the DC calculations to various elements of the\ndescription. In ascending importance,\nin the present context the DC is sensitive to the optical potential and\ndensity distribution, respectively, the reaction $Q$-value, and the spin and\nparity of a level.\n\nFor the accuracy attempted here, there is almost no\ndifference in the results obtained by employing the optical potentials\nderived from the density distributions of the different models while leaving\nall other properties unchanged.\n\nA stronger dependence is seen when examining changes in the $Q$-value. 
An \nincrease\nin the $Q$-value will give a non-linear increase in the resulting cross section.\nAs the $Q$-value is computed as the difference in the binding energies of\ntarget and residual nucleus (i.e., the neutron separation energy) minus the\nexcitation energy of the level into which the neutron is captured\n\\begin{equation}\nQ_i=(B_{\\mathrm{T}}-B_{\\mathrm{R}})-E_i=S_{\\mathrm{n}}-E_i\\quad,\n\\end{equation}\nthe cross section will be sensitive to the masses (separation energies) derived\nin the different microscopic models as well as the level structure (excitation\nenergies) given in these models.\n\nThe by far strongest sensitivity is that to spins and parities of the\ninvolved initial and final states. In\norder to comply with the electromagnetic selection rules, a state has to\nhave the proper parity to contribute to the cross section significantly.\nThe dominant contribution to the DC cross section will stem from an E1\ntransition. In this case, parity has to change.\nConsequently, the capture of an incoming neutron $p$-wave will be important \nfor the Pb isotopes, whereas\n$s$-wave capture is dominating in the Sn cases.\nFurthermore, significant contributions only arise from low spin states\nlike 1\/2 and 3\/2 states, whereas the capture to levels with higher spins is \nstrongly suppressed.\nIn this respect, it will prove to be important that the different\nmicroscopic models make different predictions on which states are\nneutron-bound and which are not, since DC can only populate bound states.\n\n\\section{The Microscopic Input}\n\\label{secMic}\n\\noindent\nThe energy levels, masses, and nuclear density distributions\nneeded as input for the DC calculation were\ntaken from three different approaches. The first one was the\nRMFT which has turned out to be a successful tool\nfor the description of many nuclear properties~\\cite{gam}. The RMFT\ndescribes the nucleus as a system of Dirac nucleons interacting via\nvarious meson fields. There are six parameters which are usually\nobtained by fits to finite nuclear properties. For our calculations we\nhave used the parameter set NLSH~\\cite{sharma1,sharma2}.\n\nThe second method was FRDM (1992), which is a macroscopic-microscopic\nmodel based on the finite-range droplet macroscopic model and a\nfolded-Yukawa single-particle potential~\\cite{moeller}.\nFor pairing, the\nLipkin-Nogami pairing model~\\cite{yuk} is employed. This model proved to\nbe very successful in reproducing ground state spins along magic\nnumbers~\\cite{moell1} and\nhas been used in QRPA calculations of $\\beta$-decay half\nlives~\\cite{moell1,moellkra} and\nfor nuclear mass determinations~\\cite{moell2}.\n\nFinally, we also utilized the self-consistent mean field HFB\nmodel~\\cite{doba1,doba} in which the nuclear states are calculated by a one-step\nvariational procedure minimizing the total energy with respect\nto the occupation factors and the single-particle wave functions simultaneously.\n\nTo be able to compare the predictions from all of the models the nuclei were\nconsidered to be spherically symmetric. 
The limitations of such a\nrestriction\nare discussed in Section \\ref{SecDis}.\n\n\\newcommand{$^{208}$Pb(n,$\\gamma$)$^{209}$Pb}{$^{208}$Pb(n,$\\gamma$)$^{209}$Pb}\n\\section{Comparison with Experiments for the {\\protect$^{208}$Pb(n,$\\gamma$)$^{209}$Pb} reaction}\n\\label{secExp}\n\n\\noindent\nRecently, it became possible to extract the non-resonant part of the\nexperimental capture cross section for the\n$^{208}$Pb(n,$\\gamma$)$^{209}$Pb reaction~\\cite{corvi}. In that work, high\nresolution neutron capture measurements were carried out in order to\ndetermine twelve resonances in the range 1--400 keV. From these values the\nresonant Maxwellian-averaged cross section $<\\sigma>^{\\mathrm{R}}_{30\n\\mathrm{keV}}$=0.221(27)\\,mb was calculated.\nMeasurements of the total cross section using neutron\nactivation~\\cite{macklin,ratzel} are also available at 30 keV,\nyielding the value\n$<\\sigma>^{\\mathrm{t}}_{30\\mathrm{keV}}$=0.36(3)\\,mb. By a simple\nsubtraction of the\nresonant part from the total cross section the value of\n$<\\sigma>^{\\mathrm{NR}}_{30\\mathrm{keV}}$=0.14(4)\\,mb can be deduced\nfor the\nnon-resonant capture cross section.\n\nUsing the experimentally known density distributions~\\cite{devries},\nmasses~\\cite{audi} and energy levels~\\cite{nucldatasheets}, we\ncalculated the non-resonant contribution in the DC model. The strength\nparameter $\\lambda$ of the folding potential in the neutron channel was\nfitted to experimental scattering data at low energies~\\cite{scdata}.\nThe value of $\\lambda$ for the bound state is fixed by the requirement\nof correct reproduction of the binding energies.\nThe spectroscopic factors for the relevant low lying states of $^{209}$Pb\nare close to unity as can be inferred from\ndifferent $^{208}$Pb(d,p)$^{209}$Pb reaction data~\\cite{nucldatasheets}.\nFor the Maxwellian-averaged non--resonant DC cross section we obtained\n$<\\sigma>^{\\mathrm{DC}}_{30\\mathrm{keV}}=0.135$ mb, which is in excellent\nagreement with experiment. The by far highest contributions to the DC\ncross section come from the E1 $p$-wave capture to the low spin states\n$J^{\\pi}=1\/2^+, 3\/2^+, 5\/2^+$. Capture to the other states is negligible.\n\nIn order to test the different microscopic approaches we also calculated\nnon-resonant DC on $^{208}$Pb by consistently\ntaking the input (energy levels, masses\nand nuclear\ndensities) from the models described above.\nAgain, the strength parameter $\\lambda$ of the folding potential in the\nentrance channel was adjusted to the elastic scattering data for each\nof the models. The calculations for the neutron capture cross\nsections yield 0.0289 mb, 0.0508 mb, and 0.0135 mb\nfor RMFT, FRDM, and HFB, respectively.\nHence, each of the models gives a smaller value for\nthe Maxwellian-averaged 30 keV capture cross section than the\ncalculation using experimental input data. The differences are due to the\nneutron separation energies and level schemes of the relevant states in\n$^{209}$Pb (see\nFig.~\\ref{209}) in the microscopic models, leading to different $Q$-values\nfor capture to the\nexcited states ($J^{\\pi}=1\/2^+,3\/2^+,5\/2^+$).\nIt should be noted that in Fig.~\\ref{209} only those theoretical levels\nare shown which contribute to the cross section, i.e. only particle states.\nCapture into hole states is strongly suppressed by the fact that a re-ordering\nprocess would be required in the final nucleus (see e.g.~\\cite{tomenam} for\na similar case). 
This would be reflected in\nextremely small spectroscopic\nfactors. Therefore, the DC to such states is negligible.\n\n\\section{Results for Neutron-rich Pb Isotopes}\n\\label{secPb}\n\\noindent\nWe also investigated the model dependence of neutron capture on the\nneutron-rich even-even isotopes $^{210-238}$Pb. For these isotopes\nexperimental data are only available near the region of stability. For\nmore neutron-rich nuclei one has to rely solely on input parameters from\nmicroscopic models. In this and the following section we compare cross\nsections calculated with the nuclear properties predicted by different\nnuclear-structure models. Therefore, we consider nuclear cross sections\ninstead of Maxwellian-averaged ones as in the previous section.\n\nHaving obtained the relevant spins and calculated the $Q$-values from\nthe masses as\ndiscussed above, we still had to determine the scattering potentials\nwith their respective strength parameters\n(see Eq.~2). As a first\nstep, the folding potentials were calculated, using the density\ndistributions taken from the three different nuclear-structure models\n(HFB, RMFT, FRDM).\nIn the potentials for each of the isotopes a factor $\\lambda$ was chosen\ngiving the\nsame volume integral as for the fitted $^{208}$Pb+n potential, which was\nobtained as described in the previous section. This is justified because\nit is known that the\nvolume integrals only change very slowly when adding neutrons to a\nnucleus~\\cite{satchler}.\nFor the bound\nstate potentials $\\lambda$ is fixed by the requirement of correct\nreproduction of the binding energies. The spectroscopic factors were\nassumed to be unity for all transitions considered.\n\nThe results of our calculations are summarized in Fig.~\\ref{vgl}. For\ncomparison, the levels from all of the models\nfor $^{219}$Pb, $^{229}$Pb, and $^{239}$Pb are shown in\nFigs.~\\ref{level1}--\\ref{level3}.\nThe most\nstriking feature in Fig.~\\ref{vgl} is the sudden drop over several\norders of magnitude in\nthe cross sections calculated with the RMFT levels in the mass range\n$A=212-220$. This is due to the\nlack of low spin levels which are cut off by the decreasing neutron\nseparation energy. Only after the\n1i$_{11\/2}$ orbital\n(which forms the state at lowest energy in the RMFT) has been filled\ncompletely at\n$^{222}$Pb the\ncross section is increasing because low spin states become\navailable again. A similar gap is seen for $A=230-232$, and it is\nexpected that those gaps will repeatedly appear when approaching the\ndrip-line.\nSince in some cases there are unbound low spin states\nclose to the threshold a small shift in the level\nenergies could already close such a gap. However, note that the level spacing\nin the\nRMFT has the tendency to increase towards neutron rich\nnuclei~\\cite{sharma3},\ncontrary to the FRDM and the HFB prediction.\n\nThe values resulting from the FRDM exhibit a smoother and\nalmost constant\nbehavior in the considered mass range.\nOnly a slight dip is visible for $^{220}$Pb(n,$\\gamma$) since\nthe previously accessible 1\/2$^+$ and 3\/2$^+$ states have become\nunbound in $^{221}$Pb. The 2g$_{9\/2}$ orbital is at lower energy than the\n11\/2$^+$ level in this model.\nBeyond $^{223}$Pb it has been filled and\nat least one of the low spin states can be populated again. The known\nground state spins for the lighter isotopes are also reproduced\ncorrectly. 
For higher mass numbers the cross sections are similar to the\nones obtained in the HFB model.\n\nFor mass numbers below $A=232$, the HFB capture cross sections\nare always larger than those obtained in the other models.\nAlthough the neutron separation energies are also decreasing, the $Q$-values\nfor the capture to the low spin states \nbecome even larger, because the states are moving towards lower\nexcitation energies.\nIn general, the HFB cross sections of the investigated capture reactions\nexhibit a very smooth behavior with increasing neutron number.\n\n\\section{Results for Sn Isotopes}\n\\label{secSn}\n\\noindent\nProceeding in the same manner as for the Pb isotopes (Sec.\\ \\ref{secPb}),\nwe extended our investigation to the Sn nuclei.\nHere, the situation is different in\ntwo ways: Firstly, the drip-line lies at relatively much lower\nneutron numbers and the r-process path is not so\nfar off stability, and secondly, there are more experimental data available\nalso for the unstable nuclei close to or in the r-process path, which makes\na test of theoretical models possible.\n\nAgain, we took the nuclear properties and density distributions from the\nabove described models. The strengths of the scattering potentials were\nadjusted to reproduce the same value of the volume integral of 425 MeVfm$^3$\nas determined from the experimental elastic scattering data on the stable\nSn isotopes~\\cite{bal}.\nWe calculated the capture cross sections from\nthe stable isotope $^{124}$Sn out to the r-process path which\nis predicted at a neutron separation energy of about 2 MeV~\\cite{friedel}. \nAs the\nmodels make different predictions about masses and separation energies,\nthe r-process path is located at different mass numbers: $A\\simeq 135$ for\nRMFT and FRDM and $A \\simeq 145$ in the case of HFB.\nContrary to the Pb isotopes for which the $p$-wave capture is the main \ncontribution\nallowed by the electromagnetic selection rules, the Sn cross sections\nare dominated by the $s$-wave captures, due to the negative parities of the \nfinal states.\n\nThe level schemes of the $^{125}$Sn, $^{133}$Sn, and $^{141}$Sn\nnuclei are shown in\nFigs.~\\ref{sn1}--\\ref{sn3}, and the resulting cross sections\nfor all considered nuclei and models\nare combined in Fig.~\\ref{snfig}.\nSimilarly as in the Pb case,\nthe dependence of the cross sections on the mass number can be understood\nby considering the excitation energies of the low-spin states relative to the\nneutron separation energy predicted in various models\n(Figs.~\\ref{spinhfb}--\\ref{spinfrdm}). The 3\/2$^-$ state is bound in the\nFRDM already at low mass number, whereas it becomes bound only at $A=131$\nand $A=133$ in HFB and RMFT, respectively. Therefore, the FRDM cross sections\nare larger than the ones from HFB and RMFT for $A<133$. The drop in the\nFRDM cross sections beyond the $N=82$ shell is due to the fact that the\n1\/2$^-$ and 3\/2$^-$ states slowly become unbound (see Fig.~\\ref{spinfrdm}).\nIn the HFB model the two low-spin states move down in energy faster than\nthe neutron separation energy, thus providing an increasing $Q$-value and\nslightly increasing cross sections (Fig.~\\ref{spinhfb}).\nA similar trend can be found in the\nlevels from RMFT, although with a less pronounced increase of the $Q$-value\n(Fig.~\\ref{spinrmf}).\n\nThere are no data available concerning the pure DC contribution\nto the cross sections for the neutron-rich Sn isotopes. However, there is\nexperimental information regarding masses and level schemes. 
This can be\ncompared to theory (see Fig.~\\ref{sn2}).\nFor the experimentally\nknown isotope $^{133}$Sn we calculated DC by taking the experimentally known\nmasses and levels~\\cite{hoff} as input for the DC-calculation,\nthus arriving at a pseudo-experimental value for\nthe cross section which can be compared to the purely theoretical predictions.\nThe resulting value is marked by a cross in Fig.~\\ref{snfig}.\nNeutron capture on $^{132}$Sn is particularly\ninteresting because $^{133}$Sn is predicted to be already very close to\nthe r-process path by the two models RMFT and FRDM.\nAs it turns out, however, the resulting cross sections show the closest\nagreement among the investigated nuclei for this case.\nAll of the considered models predict the same ground state spin, a bound\n3\/2$^-$ state and a (barely) unbound 1\/2$^-$ state (cf.,\nFigs.~\\ref{spinhfb}--\\ref{spinfrdm}, and Fig.~\\ref{sn2};\nnote that the mass ranges in the plots are different). However, the resulting\n$Q$-value is largest in the RMFT, yielding the highest cross section. The cross\nsections from the HFB and FRDM levels are smaller by about a factor of 2\nbecause of the less strongly bound 3\/2$^-$ state. The additional 5\/2$^-$\nstate found in HFB gives only a small contribution to the total cross section\nand cannot compensate for the comparatively low $Q$-value of the capture to\nthe 3\/2$^-$ level. Nevertheless, compared to the large discrepancies\nregarding other nuclei, there is good agreement in the resulting cross\nsections. Therefore, this\nnucleus may be a bad choice to select between the different models, but it\nis reassuring in the astrophysics context that the cross sections agree so\nwell.\n\n\\section{Discussion}\n\\label{SecDis}\n\\noindent\nIn systematic r-process studies~\\cite{friedel} it was found that\nthe r-process path is touching nuclei with neutron separation energies\naround 2.5--1.7\\,MeV in the Sn region and\n$S_{\\mathrm{n}}\\simeq 1.5-0.9$\\,MeV in the\nPb region~\\cite{friedel}.\nIn our calculations for Pb (including $^{239}$Pb+n) we cover the\nastrophysically relevant mass region, with the possible exception of the\nHFB model. The neutron separation energies in the HFB model decrease\nmuch slower with increasing mass number than in the other models\n(cf., Fig.\\ \\ref{level3}), thus\nnot only leading to a drip-line at higher mass but also pushing the\nr-process path further out. However, the most extreme path location\nmight still be further out by not more than two or three isotopes from\n$^{240}$Pb, and therefore it is possible to extrapolate the trend seen in the\nHFB calculation at lower mass numbers.\nIt should be kept in mind, however,\nthat the location of the r-process path is determined by the ratio\nbetween neutron capture half-life and $\\beta$-decay half-life.\n\nIn the following we briefly discuss the possible astrophysical\nconsequences of the effects\nfound in the cross section behavior given by the different models.\nComplete r-process network calculations, which take into account all\npossible reaction links and do not postulate an a-priori $\\beta$-flow\nequilibrium, require a large number of astrophysical and nuclear-physics\ninput parameters (for a detailed discussion, see e.g.~\\cite{cowan}). 
In such\na non-equilibrium scenario, the location of the r-process path as well as\nthe time-scale of the r-matter flow is mainly determined by the neutron\ndensity as astrophysical quantity, and by the nuclear-physics parameters:\nthe neutron separation energy $S_{\\mathrm{n}}$ and the capture cross sections\n$\\sigma_{\\mathrm{n}}$. With this, details of the r-process are depending\non the specific nuclear models used. In the following discussion\nwe will consider as a first estimate\nonly the r-process paths found in detailed studies making use of FRDM\nmasses~\\cite{friedel} and vary the capture cross sections according to our\nfindings for the different microscopic inputs.\n\nIn the mass region beyond the $A\\simeq195$ r-abundance peak, neutron\ndensities of $n_{\\mathrm{n}}\\simeq10^{25}-10^{27}$\\,cm$^{-3}$ are required\nto produce sizeable amounts of $Z\\simeq80-84$, $A\\simeq230-250$ r-process\nisotopes very far from $\\beta$-stability. After successive $\\beta^-$- and\n$\\alpha$-decays they will form the long-lived r-chronometers $^{232}$Th and\n$^{235,238}$U, and the major part of the r-abundances of $^{206-208}$Pb and\n$^{209}$Bi (see, e.g.~\\cite{pfeiff97}). When regarding the\n$\\sigma_{\\mathrm{n}}$ cross sections for Pb from FRDM and HFB\n(see Fig.~\\ref{vgl}), very similar results are expected for the $^{230-238}$Pb\nprogenitor isotopes. Thus, also similar initial r-abundances for\n$^{232}$Th and $^{235,238}$U will result. However, when using the RMFT\ncross sections, a considerable hindrance of the nuclear flow around\n$A\\simeq130$ may occur which consequently would change the Th\/U abundance\nratios. These neutron capture cross sections which are 5 or more orders of\nmagnitude smaller than the ones given by FRDM and HFB levels would\nincrease the life-time of a nucleus against neutron-capture by the same\norder of magnitude and thus even prevent the flow to heavier elements\nwithin the time-scales given by the astrophysical environment.\n\nIn the case of the Sn isotopes, the situation is quite different from the\nPb region. The range of astrophysically realistic $n_{\\mathrm{n}}$-conditions\nfor producing the $A\\simeq130$ r-abundances is lower, with\n$n_{\\mathrm{n}}\\simeq10^{22}-5\\times10^{24}$\\,cm$^{-3}$. Hence, the r-process\npath is much closer to $\\beta$-stability, involving the progenitor isotopes\n$^{134,136,138}$Sn only a few neutrons beyond the doubly magic nucleus\n$^{132}_{50}$Sn$_{82}$. For these isotopes the Hauser-Feshbach (HF)\ncross sections used so far~\\cite{cowan} are of the order of 10$^{-4}$ to\n$5\\times10^{-5}$\\,barn. According to a recent investigation~\\cite{tfkl1},\nthe statistical model cannot be applied in that region and will overestimate\nthe capture cross sections. However, even if we use the experimental levels to\ncalculate a Breit-Wigner resonant cross section for\n$^{132}$Sn(n,$\\gamma$)$^{133}$Sn, we find it to be a factor of about 6 lower\nthan the HF cross sections.\nOur present calculations would add another DC\ncontribution of about the same magnitude as given by\nHF (see Fig.~\\ref{snfig}), which has so\nfar not been taken into account. 
As a consequence of the larger total cross\nsection, the r-matter flow to heavier elements would be facilitated, thus\navoiding the formation of a pronounced $A\\simeq134-138$ ``satellite peak'' in\nthe r-abundance curve sometimes observed in steady-flow calculations\n(see, e.g.\\ Fig.\\ 2 in \\cite{chen96}, or Fig.\\ 5 in \\cite{friedel}).\nSuch a signature is only indicated in the heavy-mass wing of the\n$A\\simeq130$ $N_{r,\\odot}$-peak. It is interesting to note in this context\nthat the HFB model, which exhibits the weakest $N=82$ shell closure and\nwith this also the weakest ``bottle-neck'' for the r-matter transit in this\nregion (for a detailed discussion, see e.g.~\\cite{klk97}), yields the\nhighest DC cross sections for the $A\\geq134$ Sn isotopes.\n\nSince we assumed spherical nuclei in order to be able to compare the\ndifferent microscopic models, deformation effects were not taken into\naccount which lead to level splitting and thus can increase the number\nof accessible levels. When considering deformation our results could be\nmodified in two ways: Firstly, the number of bound low-spin levels could\nbe increased, leading to larger DC cross sections; secondly,\ndue to a possibly larger number of levels at and above the neutron\nseparation energy, the compound reaction mechanism could be further\nenhanced and clearly dominate the resulting cross sections. However,\nas can be seen from level density~\\cite{ohurev,tfkl1} and\ndeformation (e.g., \\cite{moell2}) studies, deformation of Pb isotopes\nsets in at a mass number of about\n$A\\simeq 220$ and decreases already for masses beyond $A\\simeq 230$.\nCloser to the drip-line, the nuclei show low level densities again,\nnot only due to low neutron separation energies but also because of\nsphericity. Lead isotopes in the r-process path (especially for components\nwith low $S_{\\mathrm{n}}$) will therefore already\nhave reduced deformation and the DC -- being sensitive to the\nlevel structure -- will give an important contribution to the total\ncapture cross sections. Concerning Sn, a theoretical study of the ratio of\nDC over CN contributions for Sn isotopes~\\cite{bal} shows that CN dominates\nup to a mass number $A \\simeq 130$. Moreover, deformation is predicted to set\nin only at $A\\simeq140$ for Sn~\\cite{moellkra}.\nThis is supported by level density\nconsiderations~\\cite{tfkl1}, showing that the level density is too low\nin this region to apply the statistical model. Therefore, depending on the\nmodel, the r-process path lies at the border of or already well inside the region\nwhere the DC is non-negligible and dominating.\n\nAnother source of uncertainty is the assumption of pure single-particle\nstates, i.e., setting the spectroscopic factors to unity. This has been\nshown to be a good approximation for Pb isotopes close to stability and\nit is expected to hold for neutron-rich Pb isotopes. However, a range of\n0.01--1.0 for the spectroscopic factors could be realistic. This will play\nonly a minor role in the present comparison of different microscopic\nmodels, as the differences in the models may be only slightly enhanced\nwhen considering different theoretical spectroscopic factors. 
Nevertheless,\nit will be important in quantitative calculations of abundances, invoking\ncomplicated reaction networks.\n\n\\section{Summary}\n\\label{summary}\n\\noindent\nWe have shown that theoretical capture cross sections can depend\nsensitively on the microscopic models utilized to determine the\nnecessary input parameters.\nBecause of low level densities, the\ncompound nucleus model will not be applicable in those cases.\nDrops over several orders of magnitude in the cross sections -- as found\nwith the RMFT for Pb -- would change the position of the r-process path\nand possibly influence\nthe formation of heavy chronometer elements,\nwhereas the enhanced capture rates on Sn\ncould have direct effects in the final r-process abundance distribution.\nDeformation effects and the compound nucleus reaction mechanism\nmay still be of importance for the Pb isotopes and further investigations\nare needed. Nevertheless, the DC will be of major importance in\nthe Sn region. This region is also interesting for future experimental\ninvestigations of $S_{\\mathrm{n}}$, neutron single-particle levels and\n(d,p)-reactions studying spectroscopic factors.\nThere is also a need for improved microscopic nuclear-structure models\nwhich can also be compared in an astrophysical context following the\nsuccessful tradition of the interplay between nuclear physics and\nastrophysics.\n\n\n\n\\acknowledgements\n\\noindent\nThis work was supported in part by the Austrian Science Foundation\n(project S7307--AST) and by the Polish Committee for\nScientific Research.\nTR acknowledges support by an APART fellowship from\nthe Austrian Academy of Sciences.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\n\\noindent The strategies to search for a dark matter (DM) component in the Universe are nowadays extremely varied, targeting \nmany possible gravitational and non-gravitational properties such as the DM mass or standard model (SM) \ncouplings~\\cite{Bertone:2004pz}. In astrophysical, cosmological, and laboratory settings, this broadband approach has yet to \nconclusively reveal any non-gravitational signatures. However, via both indirect and direct searches, the very wide DM \nmodel space has been significantly restricted. The focus of this article concerns the reach of the generic \nclass of experiments aiming to directly detect DM through a possible DM-nucleon coupling~\\cite{Goodman:1984dc}, \nknown as direct detection facilities. Currently, world-leading examples of this setup include \ne.g.~LUX-ZEPLIN (LZ)~\\cite{LZ:2022ufs}, PandaX-4T~\\cite{PandaX-4T:2021bab}, and Xenon-1T~\\cite{XENON:2018voc},\nwhich set the strongest limits in the DM mass \n$m_\\chi$ vs.~spin-independent nuclear coupling $\\sigma_{\\mathrm{SI}}$ parameter space.\n\nThe sensitivity of a given direct detection experiment is controlled by a number of factors. Firstly, the event rate $\\Gamma_N$\nscales with the number of DM particles that have a sufficiently large kinetic energy. Specifically, the DM energy \nmust be large enough \nto induce a nuclear recoil that can trigger a signal above the detector threshold. Secondly, the rate \nalso scales linearly with the DM-nucleon cross section $\\mathrm{d}\\sigma_{\\chi N} \/ \\mathrm{d} T_N$, \nat least in the above examples, where $T_N$ is the nuclear recoil energy. 
Thirdly, as in any count-based experiment, \nthis signal rate should be compared to some \nbackground event rate to derive a statistically significant detection threshold. Notably, in direct detection facilities, the background \nrates are typically extremely low as necessitated by the small expected signal rates, although there are some important \nexceptions, such as a dedicated CRESST surface run~\\cite{CRESST:2017ues}. \n\nThe standard target for these experiments is the DM in the Galactic halo, which has characteristic velocities of the order\n$v_\\chi \\sim 10^{-3}c$ and in any case cannot exceed the Galactic escape velocity \n$v_\\mathrm{esc} \\sim 540 \\, \\mathrm{km\/s}$~\\cite{Evans:2000gr,Evans:2005tn}. For a given DM mass \n$m_\\chi$, there is hence unavoidably a maximum DM kinetic energy available to excite nuclear recoil signals of the order \n$T_N \\sim m_\\chi^2 v_\\mathrm{esc}^2 \/ m_N$. For some DM mass $m_\\chi^\\mathrm{min}$\nthis must fall below the detectable threshold, and the experimental sensitivity drops to zero. For experiments such as \nXenon, PandaX and LZ, it is well-known that this cut-off\nlies around the GeV-scale, corresponding to a detectable threshold in the keV range. As such, even though these detectors \nhave impressive reach in interaction cross-section -- currently down to the level of \n$\\sigma_\\mathrm{SI} \\sim 10^{-47}\\,\\mathrm{cm}^2$~\\cite{XENON:2018voc,PandaX-4T:2021bab,LZ:2022ufs}, and even \napproaching the neutrino floor with ongoing searches~\\cite{Strigari:2009bq,OHare:2021utq} -- there is ample motivation \n(and hence, in fact, both experimental and theoretical activity)\nfor methods to probe the sub-GeV mass range~\\cite{Knapen:2017xzo,Essig:2022dfa}. \nThis describes the first ``window\" in which DM can hide -- it could just be that DM has \na small mass out of the reach of direct detection experiments. \nThere is yet another window at {\\it large} values of the cross-section \n$\\sigma_{\\mathrm{SI}}$, however, which will be a key focus of this article. This arises due to the fact that if DM interacts \\textit{too} \nstrongly, then it can actually be the case that DM particles are unable to reach the detectors due to the attenuation of the flux \nin the atmosphere or the rock overburden~\\cite{Starkman:1990nj,Zaharijas:2004jv, Mack:2007xj}. \nThis typically becomes the main prohibitive factor for cross \nsections at the level of $\\sigma_{\\mathrm{SI}} \\gtrsim 10^{-28}\\,\\mathrm{cm}^2$~\\cite{Emken:2018run}.\n\nThere have been a number of promising experimental proposals to probe these two open windows. Attempts \nto extend the sensitivity to DM-nucleus interactions into the sub-GeV realm include \nsearches for Migdal electrons~\\cite{Ibe:2017yqa,XENON:2019zpr} or bremsstrahlung photons~\\cite{Kouvaris:2016afs}, \naccompanied by an intense low-threshold direct detection program in the development of novel detector \nconcepts (for a recent review, see Ref.~\\cite{Essig:2022dfa}).\nCross sections sufficiently large for DM to scatter inside the Earth before reaching underground detectors,\non the other hand,\ncan be probed by surface runs of conventional direct detection experiments (like the one performed by the CRESST \ncollaboration~\\cite{CRESST:2017ues}), or by targeting the expected diurnal modulation in the\nsignal in this case~\\cite{Collar:1992qc,Collar:1993ss}. 
As far as this work is \nconcerned, however, we will be interested in the role played by the irreducible astrophysical flux of\nhighly boosted DM that originates from cosmic ray collisions with DM particles in the Galactic halo (CRDM). \nThis was pointed out only relatively recently~\\cite{Bringmann:2018cvk,Cappiello:2018hsu}, and subverts the issue\nof a loss in sensitivity by noting that a sub-dominant component of DM with velocities well above those in the Galactic halo can \nproduce a detectable signal even if it is very light, i.e.~for DM masses (well) below $1 \\, \\mathrm{GeV}$. \nThe sub-dominant nature of the flux \nnaturally introduces a trade-off with the interaction rates that can be probed, quantitatively resulting in limits\nat the level of $\\sigma_\\mathrm{SI} \\sim 10^{-31}\\,\\mathrm{cm}^2$~\\cite{Bringmann:2018cvk}.\nInterestingly, CRDM does not only probe previously open parameter space at small DM masses but also results\nin bounds extending into the relevant regime of the second open window described above. \nAfter this initial work pointed out the advantages of \nconsidering such a boosting mechanism, a large number of further analyses have addressed various aspects of the \nproduction~\\cite{Alvey:2019zaa,DeRocco:2019jti,Dent:2019krz,Wang:2019jtk,Zhang:2020nis,Plestid:2020kdm,Su:2020zny,Cho:2020mnc,Guo:2020oum,Xia:2020apm,Dent:2020syp,Dent:2020qev,Emken:2021lgc,Das:2021lcr,Bell:2021xff,An:2021qdl,Feng:2021hyz,Wang:2021jic,Granelli:2022ysi,Xia:2022tid,Bardhan:2022ywd}, \nattenuation~\\cite{Bondarenko:2019vrb,McKeen:2022poo}, and \ndetection~\\cite{Ema:2018bih, Cappiello:2019qsw,Berger:2019ttc,Kim:2020ipj,Guo:2020drq,DeRoeck:2020ntj,Ge:2020yuf,Cao:2020bwd,Jho:2020sku,Lei:2020mii,Harnik:2020ugb,Ema:2020ulo,Bramante:2021dyx,Emken:2021vmf,PandaX-II:2021kai} \nof astrophysically boosted DM. \nFor a recent comprehensive \\mbox{(re-)analysis} of all of these aspects see, e.g.~Xia~{\\it et al.}~\\cite{Xia:2021vbz},\nwho stressed in particular that form-factor suppressed attenuation in the overburden seemingly allows us to exclude\ncross sections much larger than $\\sigma_\\mathrm{SI} \\sim 10^{-28}\\,\\mathrm{cm}^2$.\n\nThis article builds on this literature in three important ways: firstly, we point out that when DM acquires such large energies, \ninelastic scattering in the rock overburden above detectors such as Xenon-1T will at some point become the dominant\nattenuation mechanism. \nAs such, to avoid being over-optimistic in terms of how much parameter space is excluded, we show how \nto include this physical effect in a self-consistent manner and\nderive the resulting bounds. Secondly, we broaden the applicability of these limits to models that are more realistic for DM \nwith sub-GeV masses, moving beyond simplified contact interactions to\ninteractions mediated by vector or scalar mediators, or DM that has some internal structure. Finally, we argue that with \nthese improvements, and when taking into account fully complementary constraints from cosmology,\nthere is generically no remaining open parameter space left unconstrained for nuclear cross sections exceeding \n$10^{-30}\\,\\mathrm{cm}^2$, \nfor DM masses in the entire MeV to GeV range. 
We demonstrate that possible loopholes to this statement -- still allowing \nan open window at larger cross sections -- require a combination\nof {\\it (i)} questioning the principal ability of CRESST to probe DM masses down to the published limit of \n$m_\\chi=140$\\,MeV~\\cite{CRESST:2017ues} and \n{\\it (ii)} choosing a rather narrow range of mediator masses $m_\\phi\\sim 30$\\,MeV (or finite DM extent $r_\\chi\\sim10$\\,fm).\nFor our numerical analysis throughout the article, we use the package {\\sf DarkSUSY}~\\cite{Bringmann:2018lay}. The improved CRDM \ntreatment reported in this work, including also updated cosmic ray fluxes and a more sophisticated use of form factors in the \nattenuation part, will be included in the next public release of the code.\n\n\\noindent The rest of the article is organized as follows: we start in section~\\ref{sec:crdm} by briefly reviewing the production \nof CRDM and the attenuation of the subsequent flux on its way to the detector, establishing our notation and\nsetting up the basic formalism that our analysis relies on. In the next two sections, we discuss in more detail how to model nuclear \nform factors (section~\\ref{sec:form_factors}) and the impact of inelastic scattering (section~\\ref{sec:inel}) on the attenuation \nof the flux. In section~\\ref{sec:m2}, we consider a number of \ngeneric options for the $Q^2$- and $s$-dependence of the scattering amplitude that are more realistic than assuming a constant \ncross-section. We complement this in section~\\ref{sec:hexaquark} with the analysis of a specific example, namely a\nbaryonic DM candidate that has been argued to evade traditional direct detection bounds despite its relatively\nstrong interactions with nuclei. We conclude and summarise our results in section~\\ref{sec:conclusions}.\n\n\n\n\n\\section{Cosmic-ray upscattering of dark matter}\n\\label{sec:crdm}\n\nWe describe here, in turn, how initially non-relativistic DM particles in the Galactic halo are up-scattered by cosmic rays (CRs),\nhow the flux of these relativistic CRDM particles is attenuated before reaching detectors at Earth, and\nhow to compute the resulting elastic scattering rate in direct detection experiments.\n\n\\medskip\n\\noindent\\textbf{Production:} The basic mechanism that we consider is the elastic scattering of CR nuclei $N$, \nwith a flux of ${{d\\Phi_N}}\/{dT_N}$, \non non-relativistic DM particles $\\chi$ in the Galactic halo. For a DM mass $m_\\chi$ and \ndensity profile $\\rho_\\chi(\\mathbf{r})$, this induces a relativistic CRDM flux incident on Earth \nof~\\cite{Bringmann:2018cvk,Bondarenko:2019vrb} \n\\begin{eqnarray}\n\\frac{d\\Phi_{\\chi}}{dT_\\chi}&=&\\int\\frac{d\\Omega}{4\\pi} \\int_{\\rm l.o.s.} \\!\\!\\!\\!\\!\\!d\\ell \\, \\frac{\\rho_\\chi}{m_\\chi} \n\\sum_N\n\\int_{T_N^\\mathrm{min}}^\\infty d T_N\\, \\frac{d \\sigma_{\\chi N} }{dT_\\chi} \\frac{{d\\Phi_N}}{dT_N}\\\\\n&\\equiv& \nD_\\mathrm{eff} \\frac{\\rho_\\chi^\\mathrm{local}}{m_\\chi} \n\\sum_N\n\\int_{T_N^\\mathrm{min}}^\\infty d T_N\\, \\frac{d \\sigma_{\\chi N} }{dT_\\chi} \\frac{{d\\Phi^\\mathrm{LIS}_N}}{dT_N}\n\\label{eq:chiCR}\n\\,.\n\\end{eqnarray}\nHere $\\mathbf{r}$ denotes the Galactic position, and \n${d \\sigma_{\\chi N} }\/{dT_\\chi}$ is the differential elastic scattering cross section\nfor accelerating a DM particle to a kinetic recoil energy $T_\\chi$. 
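For orientation, standard elastic two-body kinematics implies that a CR nucleus with kinetic energy $T_N$ can transfer at most\n\\begin{equation}\nT_\\chi^\\mathrm{max}=\\frac{2m_\\chi\\left(T_N^2+2m_N T_N\\right)}{\\left(m_N+m_\\chi\\right)^2+2m_\\chi T_N}\n\\end{equation}\nto a DM particle initially at rest; this is the same expression as in Eq.~(\\ref{eq:tmax}) below, with the roles of projectile and target interchanged. As a rough illustration, a CR proton with $T_N=1$\\,GeV can thus accelerate a DM particle of mass $m_\\chi=10$\\,MeV to kinetic energies of up to $T_\\chi^\\mathrm{max}\\simeq60$\\,MeV, i.e.~to relativistic velocities.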
\nFor DM particles initially at rest, this requires a minimal CR energy $T_N^\\mathrm{min}$ of\n\\begin{equation}\n\\label{eq:Tmin}\nT_N^\\mathrm{min}=\n\\left\\{\n\\begin{array}{ll}\n\\left( \\frac{T_\\chi}{2} - m_N\\right) \\left[\n1-\\sqrt{1+\\frac{2 T_\\chi}{m_\\chi}\\frac{\\left(m_N + m_\\chi\\right)^2}{\\left(2m_N - {T_\\chi}\\right)^{2}}}\n\\right] & \\quad\\mathrm{for~}T_\\chi<2m_N\\\\\n\\sqrt{\\frac{m_N}{m_\\chi}} \\left(m_N + m_\\chi\\right) & \\quad\\mathrm{for~}T_\\chi=2m_N\\\\\n\\left( \\frac{T_\\chi}{2} - m_N\\right) \\left[\n1+\\sqrt{1+\\frac{2 T_\\chi}{m_\\chi}\\frac{\\left(m_N + m_\\chi\\right)^2}{\\left(2m_N - {T_\\chi}\\right)^{2}}}\n\\right] & \\quad\\mathrm{for~}T_\\chi>2m_N\n\\end{array}\n\\right. \\,.\n\\end{equation}\nFurthermore, in the second line of Eq.~(\\ref{eq:chiCR}), we have introduced an effective distance $D_{\\rm eff}$ that allows us to \nexpress the CRDM flux in the solar system in terms of the relatively well measured {\\it local} interstellar CR flux, \n${{d\\Phi_N}^{\\rm LIS}}\/{dT_N}$, and the {\\it local} DM density, for which we adopt \n$\\rho^\\mathrm{local}_\\chi=0.3\\,\\mathrm{GeV}\/\\mathrm{cm}^3$~\\cite{Read:2014qva} (noting that our final limits are \nindependent of this choice). The advantage of this parameterisation is that uncertainties deriving from the integration \nover the volume relevant for CRDM production, $\\int d\\Omega \\int \\!d\\ell$, are captured in a single \nphenomenological parameter $D_\\mathrm{eff}$. Indeed, despite the complicated underlying physics, this parameter is \nsurprisingly well constrained,\nwith uncertainties dominated by the vertical extent of the confinement zone of Galactic CRs. \nIn what follows, we will use a fiducial value of $D_{\\rm eff}=10$\\,kpc.\\footnote{%\nWhen assuming an Einasto profile~\\cite{Einasto:2009zd} for the DM density, and a cylindrical CR diffusion model \ntuned with {\\sf GalProp}~\\cite{Strong:1998pw} to describe the observed flux of light CR nuclei, \na more detailed analysis reveals that $D_{\\rm eff}$ varies between $\\sim9$\\,kpc and \n$\\sim11$\\,kpc for DM recoil energies above 1\\,MeV~\\cite{Xia:2021vbz}. \n}\nWe note that our final limits only depend logarithmically on this quantity, for large interaction rates,\nor scale as $D_{\\rm eff}^{-1\/2}$ when attenuation in the soil or atmosphere is inefficient, respectively.\n\nWhen computing the CRDM flux in Eq.~(\\ref{eq:chiCR}), we take into account the four most abundant\nCR species, $N=\\{p,{\\rm He},{\\rm C}, {\\rm O}\\}$, for which high-quality determinations of the local \ninterstellar fluxes exist~\\cite{Boschini:2018baj}. The fluxes of heavier nuclei are subject to significant \nuncertainties for the energies of interest to us, see e.g.~the discussion in Ref.~\\cite{Boschini:2020jty}, not least due to \napparent discrepancies between AMS-02 data~\\cite{AMS:2018tbl,AMS:2018cen,AMS:2020cai} and \nearlier measurements. We also note that the CRDM flux contribution from these heavier elements is \nstrongly form-factor suppressed at large $T_\\chi$, see section \\ref{sec:form_factors}, and hence \nanyway not relevant for constraining DM with masses $m_\\chi\\gtrsim0.1$\\,GeV.\n\n\\medskip\n\\noindent\\textbf{Attenuation:} On its way to the detector, the CRDM flux given by Eq.~(\\ref{eq:chiCR}) is attenuated due to \nscattering of the CRDM particles with nuclei in the atmosphere and soil (overburden) above the experimental\nlocation. 
This effect can be well modelled by the energy loss equation\n\\begin{equation}\n\\label{eq:eloss}\n\\frac{dT_\\chi^z}{dz}=-\\sum_N n_N\\int_0^{\\omega_\\chi^\\mathrm{max}}\\!\\!\\!d\\omega_\\chi\\,\\frac{d \\sigma_{\\chi N}}{d\\omega_\\chi} \\omega_\\chi\\,,\n\\end{equation}\nwhich can be used to relate the average kinetic energy at depth $z$, $T_\\chi^z$, to an initial energy \n$T_\\chi$ at the top of the atmosphere.\nHere, the sum runs over the nuclei $N$ in the overburden,\ni.e.~no longer over the CR species, and $\\omega_\\chi$ is the {\\it energy loss} of a DM particle\nin a single collision. \nFor elastic scattering, $\\omega_\\chi$ is equal to the nuclear recoil energy $T_N$.\nIn that case, the maximal energy loss of a DM particle with initial kinetic energy $T_\\chi^z$\nis given by\n\\begin{equation}\n\\label{eq:tmax}\n\\omega_\\chi^\\mathrm{max}=T_N^\\mathrm{max}=\\frac{2m_N}{s}\\left[\\left(T_\\chi^z\\right)^2+2m_\\chi T_\\chi^z\\right],\n\\end{equation} \nwhere \n\\begin{equation}\n\\label{eq:sdef}\ns=(m_N+m_\\chi)^2+2m_N T_\\chi^z\n\\end{equation}\nis the (squared) CMS energy of the process. For inelastic scattering on the other hand, which we will discuss in more detail \nin section \\ref{sec:inel}, the energy loss can in principle be as high as $\\omega_\\chi^\\mathrm{max}=T_\\chi^z$.\nFor the purpose of this work we will mostly be interested in the Xenon-1T\ndetector, located at a depth of $z=1.4\\, \\text{km}$ in the Gran Sasso laboratory. In this case the \nlimestone overburden has a density of 2.71 g\/cm$^3$~\\cite{Miramonti:2005xq},\nmostly consisting of an admixture of CaCO$_3$ and MgCO$_3$, and attenuation in the\natmosphere can be neglected; in terms of weight percentages\nthe dominant elements are O (47.91\\%), Ca (30.29\\%), C (11.88\\%), Mg (5.58\\%), Si (1.27\\%),\nAl (1.03\\%) and K (1.03\\%)~\\cite{Wulandari:2003cr}. We note that Eq.~(\\ref{eq:eloss}) only provides an approximate \ndescription of the stopping \neffect of the overburden, which is nonetheless sufficiently accurate for our purposes. For a detailed comparison of this \napproach with Monte Carlo simulations of individual particle trajectories, see \nRefs.~\\cite{Emken:2017qmp,Emken:2018run,Mahdawi:2018euy,Emken:2019hgy,Xia:2021vbz}\n\n\\medskip\n\\noindent\\textbf{Detection:} The elastic scattering rate of relativistic CRDM particles arriving at underground detectors \nlike the Xenon-1T experiment is given by\n\\begin{equation}\n\\label{eq:gammarate}\n {\\frac{d\\Gamma_N}{d T_{N}}=\n \\int_{T_\\chi^{\\rm min}}^\\infty \\!\\!dT_\\chi\\ \n \\frac{d \\sigma_{\\chi N}}{dT_N} \\frac{d\\Phi_\\chi}{dT_\\chi}} \\,.\n\\end{equation}\nNote that the above integral is over the energy of the DM particles \\emph{before} entering the atmosphere. \nOn the other hand, the elastic scattering cross section ${d \\sigma_{\\chi N}}\/{dT_N} $ must still be evaluated at the actual \nDM energy, $T_\\chi^z$, at the detector location, which requires numerically solving Eq.~(\\ref{eq:eloss}) \nfor $T_\\chi^z(T_\\chi)$. The lower bound on the integral then represents the minimal {\\it initial} CRDM energy \nthat is needed to induce a nuclear recoil of energy $T_N$ {\\it at depth $z$}, i.e.\n$T_\\chi^{\\rm min}=T_\\chi(T_\\chi^{z, \\mathrm{min}})$. 
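In practice, both the solution of Eq.~(\\ref{eq:eloss}) and its inversion are obtained numerically. A minimal sketch of this step (in Python, assuming {\\tt scipy} is available and that the full stopping power $\\sum_N n_N\\int\\!d\\omega_\\chi\\,\\omega_\\chi\\,{d\\sigma_{\\chi N}}\/{d\\omega_\\chi}$ is supplied as a user-defined function; the function names are purely illustrative and do not correspond to the actual {\\sf DarkSUSY} implementation) could read:\n\\begin{verbatim}\nfrom scipy.integrate import solve_ivp\nfrom scipy.optimize import brentq\n\ndef T_at_depth(T_surface, depth, stopping_power):\n    # integrate dT/dz = -stopping_power(T) from the surface (z=0) down to z=depth\n    rhs = lambda z, T: [-stopping_power(max(T[0], 0.0))]\n    sol = solve_ivp(rhs, (0.0, depth), [T_surface], rtol=1e-6)\n    return max(sol.y[0, -1], 0.0)\n\ndef T_surface_min(T_z_min, depth, stopping_power, T_hi=10.0):\n    # invert T_chi^z(T_chi): smallest surface energy that still yields T_z_min at depth z;\n    # T_hi must be chosen above the largest CRDM energy of interest\n    f = lambda T0: T_at_depth(T0, depth, stopping_power) - T_z_min\n    return brentq(f, T_z_min, T_hi)\n\\end{verbatim}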
This can be obtained by inverting the solution of Eq.~(\\ref{eq:eloss}),\nwhere $T_\\chi^{z, \\mathrm{min}}$ is given by the right-hand side of Eq.~(\\ref{eq:Tmin}) under the replacement\n$(T_\\chi,m_\\chi,m_N)\\to(T_N,m_N,m_\\chi)$.\nIn general, the elastic nuclear scattering cross section \n${d \\sigma_{\\chi N}}\/{dT_N} $ \nis a function of both $s$ and the (spatial) momentum transfer, \n\\begin{equation}\n\\label{eq:q2}\nQ^2=2m_N T_N\\,.\n\\end{equation}\nIf the dependence on $s$ can be neglected or the (dominant) dependence on $Q^2$ factorizes -- as in the case of\nstandard form factors -- then the rate in the detector given in Eq.~(\\ref{eq:gammarate}) will have an {\\it identical}\n$Q^2$-dependence as compared to the corresponding rate expected from the standard population of \nnon-relativistic halo DM. As pointed out in Ref.~\\cite{Bringmann:2018cvk}, this salient feature makes it possible to \ndirectly re-interpret published limits on the \nlatter (conventionally expressed as limits on the scattering cross section with protons) into limits on the \nformer. Otherwise, for an accurate determination of the expected count rate in \na given analysis window, one would in principle have to also model the detector response in the \nevaluation of Eq.~(\\ref{eq:gammarate}) and then infer limits based on the full detector likelihood \n(e.g.~with a tool like {\\sf DDCalc}~\\cite{GAMBITDarkMatterWorkgroup:2017fax,GAMBIT:2018eea}).\n\n\\section{Nuclear form factors}\n\\label{sec:form_factors}\n\nThe target nuclei used in direct detection experiments \nare typically larger than the de Broglie wavelength of DM with standard Galactic velocities, \nat least for heavy nuclei, implying that the incoming DM particles only `see' part of the nucleus. \nSince the elastic scattering process is fundamentally induced by a coupling between DM and the \nconstituents of these nuclei, this means that it should be suppressed by a \nnuclear form factor, $G^2(Q^2)$, compared to the naive expectation that the nuclear cross section \nis merely a coherent sum of the cross sections of all the constituents (for recent pedagogic \naccounts of conventional direct DM searches, see e.g.~Refs.~\\cite{DelNobile:2021icc, Cooley:2021rws}).\\footnote{%\nWe focus here on spin-independent elastic scattering. For {\\it spin-dependent} scattering, \nthe sum would not be coherent and hence generally result in much smaller cross sections.\nThis prevents standard DM from being stopped in the overburden before reaching the experimental\nlocation -- unless the scattering cross section {\\it per nucleon} is so large that it becomes incompatible with other\nastrophysical constraints.\nA detailed treatment of attenuation in the Earth's crust is, hence, less relevant in this case. \n}\nFor CRDM, this effect is amplified, given the smaller de Broglie wavelengths\nassociated to the faster moving upscattered DM particles. \n\nThese nuclear form factors are essentially Fourier transforms of the number density of nucleons inside\nthe nucleus, usually approximated by the experimentally easier accessible charge density. A common\nparameterization is the one suggested by Helm~\\cite{Helm:1956zz}, which is based on modelling \nthe nucleus as a hard sphere with a Gaussian smearing (in configuration space). For heavy nuclei we follow instead a slightly\nmore accurate approach and implement model-independent form factors~\\cite{Duda:2006uk}, based\non elastic electron scattering data. 
Concretely, we implement their Fourier-Bessel (FB) expansion approach,\nwith parameters taken from Ref.~\\cite{DeVries:1987atn}. For nuclei where the FB parameters\nare not available, notably Mg and K, we use model-independent Sum of Gaussians (SOG) form factors instead. \n\nFor $Q^2\\gg (0.1\\,\\mathrm{GeV})^2$ one starts\nto resolve the inner structure of the nucleons themselves, which we discuss in more detail in section \\ref{sec:inel}. \nLet us however briefly mention \nthat in the case of He, this effect is already largely captured by the above description in that we take the \nSOG form factors from Ref.~\\cite{DeVries:1987atn} (thus improving on the simple dipole prescription used, e.g.,\nin Ref.~\\cite{Bringmann:2018cvk}). For the proton, we adopt the usual dipole {\\it nucleon} form factor,\nnoting that the {\\it nuclear} form factor would formally equal unity,\n\\begin{equation}\n\\label{eq:Gp}\nG_p^2(Q^2)=\\left(1+ Q^2\/\\Lambda_p^2 \\right)^{-4}\\,,\n\\end{equation}\nwith $\\Lambda_p=0.843$\\,GeV. This provides a very good fit to experimental data up to momentum\ntransfers of at least $Q^2\\sim1$\\,GeV$^2$, with an agreement of better than 10\\% for \n$Q^2\\leq10$\\,GeV$^2$~\\cite{Perdrisat:2006hj,Punjabi:2015bba}. \nWe note that our final results are highly insensitive to such large momenta.\n\n\nIn the rest of the section, we will briefly describe the impact of nuclear form factors on \nthe CRDM flux and the attenuation of this flux on its way to the detector.\nIn both cases the effect is sizeable, motivating the need for a precise modelling of $G^2(Q^2)$.\n\n\n\\subsection{Impact on production}\n\\label{sec:FF_prod}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.99\\textwidth]{figures_draft\/figure_one_wl}\n\\caption{{\\it Left panel.} Expected CRDM fluxes for DM masses $m_\\chi=0.001, 0.01, 0.1,1,10$\\,GeV, \nfrom top to bottom, assuming a constant spin-independent scattering cross section of \n$\\sigma_{\\rm SI}^{p,n}=10^{-30}\\,\\mathrm{cm}^2$ (solid lines). The effect of inelastic scattering is neglected.\nDashed lines show the CRDM fluxes that would result when not taking into account \nthe effect of form factors. \n{\\it Right panel.} Black lines indicate the individual contributions to the CRDM flux from scattering on CR $p$, He, C and O,\nfor the example of $m_\\chi=100$\\,MeV. Other lines (highlighted only for the $m_\\chi=100$\\,MeV case) show the\ntotal flux, as in the left panel.\n}\n\\label{fig:flux_ff}\n\\end{center}\n\\end{figure}\n\n\nThe solid lines in Fig.~\\ref{fig:flux_ff} show the expected CRDM flux before attenuation, cf.~Eq.~(\\ref{eq:chiCR}),\nfor a range of DM masses. For the purpose of this figure, we have assumed a constant elastic \nscattering cross section $\\sigma_{\\rm SI}^p=\\sigma_{\\rm SI}^n$ on nucleons, i.e.~a nuclear cross section\ngiven by\n\\begin{equation}\n\\frac{d \\sigma_{\\chi N}}{dT_\\chi} = \\mathcal{C}^2\n\\times \\frac{\\sigma_{\\rm SI}^p}{T_\\chi^{\\rm max}} \\times G^2(2T_\\chi m_\\chi)\\,.\n\\label{eq:siconst}\n\\end{equation}\nHere, \n\\begin{equation}\n\\label{eq:c_coh}\n\\mathcal{C}^2= A^2\\frac{\\mu_{\\chi N}^2}{\\mu_{\\chi p}^2}\n\\end{equation}\ndescribes the usual coherent enhancement, in this case proportional to the square of the atomic number $A$ \nof nucleus $N$. 
In the rest of the expression, $\\mu_{\\chi N}$ ($\\mu_{\\chi p}$) is the reduced mass of the \nDM\/nucleus (DM\/nucleon) system and\nthe maximal DM energy $T_\\chi^{\\rm max}$ that can result from a CR nucleus with energy $T_N$ \nis given by the right-hand side of Eq.~(\\ref{eq:tmax}) after replacing $T_\\chi^z \\to T_N$ and $m_\\chi\\leftrightarrow m_N$.\n\nIn the left panel of the figure, we show that neglecting nuclear form factors (dashed lines) would lead to a \nsignificant overestimate of the CRDM flux at high energies. For $m_\\chi\\gtrsim0.1$\\,GeV, the form factor\nsuppression even becomes the dominant effect to determine the overall normalization of the flux,\nwhile for lower DM masses, the peak of the distribution is entirely determined by the fact that the \nCR flux itself peaks at GeV energies. This suppression in the flux leads to a rapid deterioration \nof CRDM limits.\nModelling form factors correctly is thus particularly important for the highest DM masses that can be\nprobed by cosmic-ray upscattering, i.e.~for $m_\\chi \\sim 1 - 10 \\, \\mathrm{GeV}$.\n\nIn the right panel of Fig.~\\ref{fig:flux_ff}, the contributions from the individual CR nuclei to the \nCRDM flux are shown. At low energies the dominant contribution is always from Helium, closely followed by the one from protons. \nThe high-energy part of the CRDM flux, on the other hand, \nis almost exclusively due to CR protons because the contribution from heavier CR nuclei is \nheavily form-factor suppressed. In addition, for $m_\\chi\\gtrsim1$\\,GeV, the \npeak amplitude of the CRDM flux -- which typically has the most constraining\npower in direct detection experiments -- is almost exclusively determined by CR $p$ and He nuclei\n (see also Fig.~\\ref{fig:attenuation_ff} below to better gauge the relevant range of energies {\\it after} attenuation\n in the overburden).\nFor lower DM masses, on the other hand, including further high-$Z$ CR species than those taken into account \nhere could in principle increase the relevant part of the CRDM flux by up to $\\sim50$\\,\\%~\\cite{Xia:2021vbz}. \nIn what follows, we conservatively neglect these contributions, in view of both the larger uncertainties in the underlying\nCR fluxes and the fact that we are mainly interested in DM masses around the GeV scale.\n\n\n\\subsection{Impact on attenuation}\n\\label{sec:FF_att}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{figures_draft\/figure_two_wl.pdf}\n\\caption{%\nMinimal kinetic energy $T_\\chi$ that a DM particle must have at the surface of the Earth ($z=0$) in order \nto trigger a signal in the Xenon-1T experiment, as a function\nof a (constant) spin-independent scattering cross section $\\sigma_{\\rm SI}^{p,n}$.\n Different colors correspond to different DM masses, \n as in Fig.~\\ref{fig:flux_ff}.\n Dash-dotted lines show the kinetic energies that would be necessary when computing the attenuation in the \nzero momentum transfer limit. Dashed lines illustrate the effect of adding\nthe expected form factor suppression, cf.~section \\ref{sec:form_factors}, while solid \nlines show the result of our full treatment, including also inelastic scattering events \n(discussed in section \\ref{sec:inel}).\n}\n\\label{fig:attenuation_ff}\n\\end{center}\n\\end{figure}\n\n\nWe now turn our attention to assessing the effect that the form factor suppression has on the attenuation of DM\nparticles on their way to the detector in a direct detection experiment. 
For concreteness we will again focus\non the case of Xenon-1T, where Xe nuclei recoiling with an energy of at least\n$T_{\\rm Xe}=4.9$\\,keV trigger a detectable signal~\\cite{XENON:2018voc}. In \nFig.~\\ref{fig:attenuation_ff}, we show the minimal initial DM energy that is required to kinematically allow\nfor this, after penetrating through the Gran Sasso rock. In practice this is done by numerically solving Eq.~(\\ref{eq:eloss}) with\n{\\sf DarkSUSY}. Dash-dotted lines indicate the result when conservatively\nassuming that the stopping power in the overburden is as efficient as in the zero-momentum transfer\nlimit (as in Ref.~\\cite{Bringmann:2018cvk}), while dashed lines show the effect of adding the additional\nform factor suppression for high $Q^2$ (as in Refs.~\\cite{Bell:2021xff,Xia:2021vbz}). \nSolid lines, finally, demonstrate the effect of also adding the attenuation power of inelastic scattering events,\nas described in detail below in Section \\ref{sec:inel}.\n\nFor small cross sections, attenuation is inefficient and, as expected, the three approaches give the \nsame answer. In this limit, the difference in the required DM energy is entirely due to the well-known\nkinematic effect, cf.~Eq.~(\\ref{eq:Tmin}), that lighter particles require a higher energy to induce a \ngiven recoil of much heavier particles\n(up to a minimum energy of $T_\\chi\\geq\\sqrt{m_{\\rm Xe}T_{\\rm Xe}\/2} =17.3$\\,MeV in the limiting case where $m_\\chi\\to0$).\nCorrespondingly, this also means that the CRDM fluxes cannot actually be probed by Xenon-1T \nfor the entire range of $T_\\chi$ shown in Fig.~\\ref{fig:flux_ff};\nunless $m_\\chi\\lesssim10$\\,MeV, however, the lowest detectable energy is always smaller\n than the energy at which the CRDM flux peaks.\n \nFor large cross sections, on the other hand, Fig.~\\ref{fig:attenuation_ff} shows a \npronounced difference between the three \napproaches: while in the case of a constant cross section (dash-dotted lines) the energy loss equation\nresults in an exponential attenuation, adding form factors (dashed lines) implies that the required initial \nDM energy only rises as the square root of the scattering cross section in the $Q^2=0$ limit.\nIn fact, we note that this is exactly the behaviour one would expect from Eq.~(\\ref{eq:eloss}) for a \ncross section that falls off very rapidly at large momentum transfers. \nComparing again to Fig.~\\ref{fig:flux_ff},\nthis correspondingly enlarged range of kinetic energies that becomes kinematically accessible to Xenon-1T will \ninevitably lead to significantly larger rates in the detector -- which, indeed, is exactly the conclusion reached in\nRefs.~\\cite{Bell:2021xff,Xia:2021vbz}. However, such a strong \nsuppression of the physical stopping power of the Gran Sasso rock for a relativistic particle is highly \nunphysical. As we discuss in the next section, this is simply because the DM particles will start to scatter off the constituent \nnucleons themselves, albeit not \ncoherently across the whole nucleus. Adding this effect (solid lines), \nresults again in exponential attenuation in the overburden -- though only at significantly larger cross sections \nthan what would be expected when adopting a constant cross section for simplicity. \n\n\n\\section{Inelastic Scattering}\n\\label{sec:inel}\n\nOur discussion so far has largely neglected the impact of inelastic scattering events of relativistic DM particles incident on nuclei \nat rest, or {\\it vice versa}. 
Physically, the inclusion of inelastic scattering processes is non-negotiable and should be considered in \na full treatment. This is because, whilst the form factor suppression described above is the relevant feature in the transition from \ncoherently scattering off the whole nucleus to only parts of it, once the DM or nucleus transfers a sufficiently large amount \nof energy $\\omega$, the scattering will probe individual nucleon-, or even quark-level processes. The result is an additional \ncontribution to the total scattering cross-section that can easily dominate \nin the large energy transfer regime. \nAs far as CRDM limits are concerned, the most important effect \nof including inelastic scattering is to modify the attenuation of the flux through the Earth or atmosphere.\nNot including it, therefore, will lead to an overly optimistic estimate as to the amount of parameter space that is ruled out via this \nmechanism.\\footnote{In order to keep our results conservative, we neglect the effect of inelastic scattering on CRDM {\\it \nproduction} in our analysis. We leave the study of this additional contribution to the flux to future work, noting that we \nexpect it to mostly improve limits for larger DM masses (where the form factor suppression nominally leads \nto a significant reduction of the CRDM flux, see Fig.~\\ref{fig:flux_ff}).}\nLet us note that inelastic scattering of {\\it non-relativistic} DM, resulting in the excitation of low-lying states in the target nuclei, \nwas previously both studied \ntheoretically~\\cite{Baudis:2013bba, McCabe:2015eia, Kouvaris:2016afs, Hoferichter:2018acd}\nand searched for experimentally~\\cite{XMASS-I:2014lnb,XENON:2017kwv,Lehnert:2019tuw,XENON:2020fgj}. \nHere we concentrate on different types of inelastic processes that are only accessible to nuclei scattering off \nhigh-energy DM particles.\n\nThe rest of this section is organised as follows: firstly we give a qualitative description of the most important \ninelastic scattering processes, such as the excitation of hadronic resonances or quasi-elastic scattering \noff individual nucleons. Secondly, we explain how we obtain a quantitative estimate of these complicated nuclear interactions \nby making a direct analogy to the case of neutrino-nucleus scattering. In this regard, we make use of the public code \n\\texttt{GiBUU}~\\cite{Buss:2011mx,gibuuweb}.\nFinally, we will explain how to build this into the formalism described in section~\\ref{sec:crdm} in terms of the DM energy loss, see \nEq.~\\eqref{eq:eloss}.\n\n\\subsection{Scattering processes and associated energy scales}\nThere are a number of relevant contributions to scattering cross-sections on nuclei that are associated with certain \ncharacteristic energies or nuclear length scales. In the highly non-relativistic limit, as described above,\ncoherently enhanced elastic scattering dominates. At somewhat higher energies, more specifically momentum transfers\ncorresponding to (inverse) length scales smaller than the size of the nucleus,\nthe elastic scattering becomes form factor suppressed -- a description which physically assumes a smooth distribution of \nscattering centres throughout the nucleus. The main characteristic of elastic scattering in both of these regimes is that \nthe energy loss of the incident DM particle is uniquely related to the momentum transfer by $\\omega=Q^2\/(2m_N)$.\n\nThis relation no longer holds for inelastic scattering processes, which are expected to become relevant at even higher energies. 
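As a rough orientation for the scales involved: at a momentum transfer of $Q\\simeq0.3$\\,GeV, where the elastic form factor of a light nucleus is already strongly suppressed, elastic scattering on $^{16}$O (with $m_N\\simeq14.9$\\,GeV) only allows an energy loss of\n\\begin{equation}\n\\omega=\\frac{Q^2}{2m_N}\\simeq3\\,\\mathrm{MeV}\\,,\n\\end{equation}\nwhile scattering off an individual nucleon of mass $\\simeq0.94$\\,GeV at the same momentum transfer corresponds to $\\omega\\simeq48$\\,MeV. Nucleon-level processes are therefore far more efficient at degrading the energy of an incident DM particle once they become kinematically accessible.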
\nFor our purposes, these inelastic processes can be broadly split up into three scattering regimes, depending\non the energy that is transferred (see also Fig.~\\ref{fig:mchi_1GeV} below, as well as a review~\\cite{Formaggio:2012cpf} \nfor the discussion of the analogous situation in the case of neutrino-nucleus scattering):\n \n\\begin{itemize}\n \\item \\textbf{Quasi-Elastic Scattering} \\textbf{(}$\\mathbf{\\omega \\gtrsim 10^{-2}}$\\,\\textbf{GeV):} At suitably large \n\tenergy transfers, the form factor suppression cannot be totally physical. This is because \n\tthe incident DM particles will directly probe the constituent nucleons, which are \n\tinherently not smoothly distributed. \\emph{Quasi-elastic scattering} (QE) dominates for \n\t$10^{-2}\\,\\mathrm{GeV} \\lesssim \\omega \\lesssim 1 \\, \\mathrm{GeV}$, and describes this situation, \n\ti.e.~where the dominant scattering is directly off {\\it individual} protons (and neutrons) inside the nucleus, \n\t$\\chi\\, p (n) \\rightarrow \\chi\\, p (n)$.\n\n\t\\item \\textbf{Excitation of Hadronic Resonances} \\textbf{(}$\\mathbf{\\omega \\gtrsim 0.2}$\\,\\textbf{GeV):} At higher energies \n\tstill, DM-nucleon scattering can excite hadronic resonances such as \n\t$\\chi \\, p \\rightarrow \\chi \\, (\\Delta \\rightarrow p \\pi^0)$ etc., leading to a wide variety of hadronic final states. Often, the contribution due to the lowest lying \n\t$\\Delta$ resonances (DR) is distinguished from contributions from higher resonances (HR) since the former can \n\tbe well resolved and starts playing a role at considerably smaller transferred energies. \n\tIn a complicated nucleus such as ${^{16}}\\mathrm{O}$, both the QE and resonance contributions to the scattering cross-section must be resolved numerically, \n\ttaking into account effects such as the nuclear potential and spin statistics. \n\n\t\\item \\textbf{Deep Inelastic Scattering} \\textbf{(}$\\mathbf{\\omega \\gtrsim 1}$ \\textbf{GeV):} Most DM couplings to \n\tnuclei and nucleons result from more fundamental couplings to quarks or gluons. \n\tAs such, once the energy transfer is large enough to probe the inner structure of the nucleons \n\t($\\omega \\gtrsim 1 \\, \\mathrm{GeV}$), \\emph{deep inelastic scattering} (DIS) of DM with partons inside the nucleons\n\tcan occur. Again, this should be \n\tresolved numerically to give an accurate estimate of the impact at the level of the scattering cross-section.\n\\end{itemize}\n\n\n\\subsection{Computation of the inelastic cross-section for neutrinos}\n\\label{sec_gibuu}\n\nDue to the complicated nuclear structure of the relevant atomic targets in the Earth, \nor in the composition of cosmic rays, \nit is typically not possible to analytically compute all the contributions to DM-nucleus scattering \ndescribed above. Instead, to estimate their impact on our \nconclusions and limits, we will make a direct connection with the physics of neutrino-nucleus scattering for which numerical codes \n-- such as \\texttt{GiBUU}~\\cite{Buss:2011mx} -- are capable of generating the relevant differential cross-sections.\n\nIn more detail, we draw the analogy between neutral current neutrino-nucleon scattering via processes such as \n$\\nu \\, p \\rightarrow \\nu \\, p$ and DM-nucleon scattering. 
Numerically modelling the neutral current quasi-elastic scattering, \nresonances and deep inelastic scattering as a function of the energy transferred to the nucleus, $\\omega$, allows us to \nunderstand the relative importance of these processes as a function of the incoming neutrino energy\n(or DM kinetic energy $T_\\chi$). Of course, since \nthese codes are tuned for neutrino physics, simply outputting the differential cross-sections such as \n$\\mathrm{d}\\sigma_{\\nu N} \/ \\mathrm{d}\\omega$ is not sufficient. To map the results onto DM, see \nsection \\ref{sec:map_dm} below for further details, we should re-scale the results so \nas to respect both the relative interaction strengths and model dependences such as e.g. the mediator mass. In general, we \nexpect this approach to provide a good estimate of the DM-nucleus cross section (at least) for contact interactions and \nscattering processes dominated by mediators in the $t$-channel.\n\n\nAt the level of implementation, we choose the settings in the \\texttt{GiBUU} code described in Tab.~\\ref{tab:gibuu}. \nSince we are interested in quantifying the effect of inelastic scattering on the attenuation of the CRDM flux as it passes \nthrough the Earth, \nwe mostly focus on the total inelastic scattering cross section, i.e.~the sum over all the processes described in the \nprevious section. We numerically calculate this for the most abundant nuclei in the Gran Sasso rock, \n$N = \\{\\mathrm{O}, \\mathrm{Ca}, \\mathrm{C}, \\mathrm{Mg}, \\mathrm{Si}, \\mathrm{Al}, \\mathrm{K}\\}$.\nFundamentally, inelastic cross-sections are expressed in terms of double-differential cross-sections \nlike $\\mathrm{d}^2 \\sigma_{\\nu N} \/ \\mathrm{d}Q^2 \\mathrm{d}\\omega$, since for inelastic scattering $Q^2$ and $\\omega$ are \nindependent variables. \nFor integrating the energy loss equation, Eq.~\\eqref{eq:eloss}, however, it suffices to compute\n\\begin{equation}\n\\frac{\\mathrm{d} \\sigma_{\\nu N}}{ \\mathrm{d}\\omega} \\equiv \\int_{Q^2} \\frac{\\mathrm{d}^2 \\sigma_{\\nu N}}{ \\mathrm{d}Q^2 \\,\\mathrm{d}\\omega} \\,\\mathrm{d}Q^2\\,.\n\\end{equation}\nOn the other hand, the full information about the $Q^2$-dependence of \n$\\mathrm{d}^2 \\sigma_{\\nu N} \/ \\mathrm{d}Q^2 \\mathrm{d}\\omega$ provided by \\texttt{GiBUU} still remains a highly\nuseful input to our analysis. This is because the double-differential cross \nsections of the individual inelastic processes turn out to sharply peak at values of $Q^2$ that have simple relations to $\\omega$.\nFor example, the peak position for the QE contribution corresponds to the `elastic' relation~\\eqref{eq:q2} for nucleons. 
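Schematically, and merely to illustrate the post-processing of such tabulated output (the binning and array names below are placeholders rather than the settings of our actual runs), the reduction of the double-differential output to ${\\mathrm{d}\\sigma_{\\nu N}}\/{\\mathrm{d}\\omega}$, together with the extraction of the peak position in $Q^2$, amounts to:\n\\begin{verbatim}\nimport numpy as np\n\n# placeholder (Q2, omega) grid and tabulated d2sigma/dQ2/domega from a GiBUU run\nQ2_grid    = np.linspace(0.0, 2.0, 201)    # GeV^2\nomega_grid = np.linspace(0.0, 2.0, 101)    # GeV\nd2sigma    = np.zeros((Q2_grid.size, omega_grid.size))  # fill from GiBUU output\n\n# dsigma/domega, obtained by integrating over Q2 for each omega\ndsigma_domega = np.trapz(d2sigma, Q2_grid, axis=0)\n\n# Q2 value at which the double-differential cross section peaks, for each omega\nQ2_peak = Q2_grid[np.argmax(d2sigma, axis=0)]\n\\end{verbatim}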
\nAs described below, this information will be used \nfor setting realistic reference values of $Q^2$ to capture the model-dependence of the DM cross sections.\n\n\n\\begin{table}[]\\label{tab:gibuu}\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{@{}llllll@{}}\n\t\\toprule\n\t\\multicolumn{2}{l}{\\hspace{-0.2cm}\\textbf{\\&neutrino\\_induced}} & \\multicolumn{2}{l}{\\textbf{\\&input}} & \\multicolumn{2}{l}{\\textbf{\\&nl\\_dSigmadElepton}} \\\\\n\t\\texttt{process\\_ID} & 3 & \\texttt{eventtype} & 5 & \\texttt{enu} & $T_\\chi$ \\\\\n\t\\texttt{flavor\\_ID} & 2 & \\texttt{numEnsembles} & 100 & \\texttt{elepton} & $0.005 T_\\chi$ \\\\\n\t\\texttt{nuXsectionMode} & 2 & \\texttt{numTimeSteps} & 0 & \\texttt{delta\\_elepton} & $\\Delta E_\\ell$ \\\\\n\t\\texttt{nuExp} & 0 & \\texttt{num\\_Energies} & 50 & \\multicolumn{2}{l}{\\textbf{\\&target}} \\\\\n\t\\texttt{includeQE} & T\/F & \\texttt{num\\_runs\\_sameEnergy} & 1 & \\texttt{Target\\_A} & $A$ \\\\\n\t\\texttt{includeDELTA} & T\/F & \\texttt{delta\\_T} & 0.2 & \\texttt{Target\\_Z} & $Z$ \\\\\n\t\\texttt{includeRES} & T\/F & \\texttt{localEnsemble} & T & \\multicolumn{2}{l}{\\textbf{\\&initDensity}} \\\\\n\t\\texttt{path\\_To\\_Input} & \\texttt{\/path\/to\/buuinput} & \\texttt{include1pi} & F & \\texttt{densitySwitch} & 2 \\\\\n\t\\texttt{includeDIS} & T\/F & \\multicolumn{2}{l}{\\textbf{\\&neutrinoAnalysis}} & \\multicolumn{2}{l}{\\textbf{\\&initPauli}} \\\\\n\t\\texttt{2p2hQE} & F & \\texttt{XSection\\_analysis} & T & \\texttt{pauliSwitch} & 2 \\\\\n\t\\texttt{include2p2hDelta} & F & \\texttt{detailed\\_diff\\_output} & F & \t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n \\texttt{include2pi} & F & & & \t\t\t\t\t\t\t\t\t\t\t\t\t\\\\ \\bottomrule\n\t\\end{tabular}\n\t}\n\t\\caption{Settings choices for running \\texttt{GiBUU} to study neutral current neutrino scattering. \n\tWe also enforced a logarithmic binning in the outgoing lepton energy, by changing the variable assignment \n\tof \\texttt{dElepton} from $E_\\ell \\rightarrow E_\\ell + \\Delta E_\\ell$ to $E_\\ell \\rightarrow (1 + \\Delta E_\\ell) E_\\ell$.\n\t}\n\t\\end{table}\n\n\n\\subsection{Mapping to the dark matter case}\n\\label{sec:map_dm}\n\nHaving described the technical details of how we obtain the neutrino-nucleus inelastic cross-sections using \\texttt{GiBUU}, we \nnow turn our attention to the mapping of these quantities onto DM models. 
\nThis is a necessary step for two broad reasons: \\emph{(a)} the interaction strength governing the DM-nucleus interactions is \ntypically very different from the neutrino-nucleus SM value, and \\emph{(b)} the way the interaction proceeds via e.g.~a contact \ninteraction or mediator exchange can lead to substantially different kinematics and non-trivial $Q^2$- or $s$-dependences.\n\nThe total scattering cross-section $\\mathrm{d}\\sigma_{\\chi N}\/\\mathrm{d}\\omega$ consists of the coherent elastic scattering contribution that we compute analytically for each of the models considered in this work, and the inelastic scattering cross section that we want to estimate based on the \\texttt{GiBUU} output:\n\\begin{align}\\nonumber\n\\frac{\\mathrm{d}\\sigma_{\\chi N} }{ \\mathrm{d}\\omega} &= \\left.\\frac{\\mathrm{d}\\sigma_{\\chi N} }{ \\mathrm{d}\\omega}\\right|_{\\mathrm{el}}+\\left.\\frac{\\mathrm{d}\\sigma_{\\chi N} }{ \\mathrm{d}\\omega}\\right|_{\\mathrm{inel}}\\\\ \\label{eq:rescaling}\n&\\equiv \n\t\\left.\\frac{\\mathrm{d}\\sigma_{\\chi N} }{ \\mathrm{d}\\omega}\\right|_{\\mathrm{el}, Q^2=2\\omega m_N} + \\sum_{i} \\left.\\frac{\\mathrm{d}\\sigma_{\\mathrm{SI}} }{ \\mathrm{d}\\omega}\\right|_{\\mathrm{el}, Q^2=Q_{i,\\mathrm{ref}}^2} \n\\times I_{\\chi,i}(T_\\chi,\\omega)\\,.\n\\end{align}\nHere $\\left.\\mathrm{d}\\sigma_{\\mathrm{SI}} \/ \\mathrm{d}\\omega\\right|_{\\mathrm{el}}$ is the differential\nDM-nucleon elastic cross section, excluding nucleon form factors such as the one given in \nEq.~(\\ref{eq:Gp}). \nThe sum runs over the various individual processes, $i\\in$(QE, DR, HR, DIS),\nwhich all have characteristic reference values of $Q^2=Q^2_{i, \\mathrm{ref}}(\\omega)$ where\nthe respective inelastic cross section peaks. In the second step above, we thus choose to rescale the inelastic scattering \nevents to the elastic scattering off a point-like nucleon. \nThis rescaling is motivated by the fact that for inelastic contributions like QE, the underlying process is much better \ndescribed by scattering on individual nucleons than on the entire nucleus.\nThe factor\n\\begin{equation}\nI_{\\chi,i}(T_\\chi,\\omega) \\equiv \\frac{\\mathrm{d}\\sigma^i_{\\chi N} \/\\mathrm{d}\\omega \\big|_{\\mathrm{inel}}}\n\t{\\mathrm{d}{\\sigma}_{\\mathrm{SI}} \/\\mathrm{d}\\omega \\big|_{\\mathrm{el},Q^2=Q^2_{i, \\mathrm{ref}}}}\n\\end{equation}\nthus quantifies the ratio of the inelastic scattering process on a nucleus to the elastic scattering on an individual\nnucleon.\n\nWe now make the simplifying assumption that this ratio is to a certain degree model-independent,\nbased on the expectation that DM should probe the inner structure of nucleons in a similar way as neutrinos do\nwhen only neutral current interactions are involved. 
Physically, indeed, this closely resembles the situation both for \ncontact interactions and $t$-channel mediators.\nThe model dependence thus dominantly comes from the structure of the term \n$\\left.\\mathrm{d}\\sigma_{\\mathrm{SI}} \/ \\mathrm{d}\\omega\\right|_{\\mathrm{el}}$, and we approximate\n\\begin{equation}\n\\label{eq:I2}\nI_{\\chi,i}(T_\\chi,\\omega) \\approx I_{\\nu,i}(E_\\nu,\\omega)\\equiv\\frac{\\left.\\mathrm{d}\\sigma^i_{\\nu N} \/\\mathrm{d}\\omega \\right|_{\\mathrm{inel}}}\n\t{\\mathrm{d}{\\sigma}^i_{\\nu,\\mathrm{SI}} \/\\mathrm{d}\\omega \\big|_{\\mathrm{el}}}\\,.\n\\end{equation}\nHere, the inelastic neutrino-nucleus cross section \n$\\left.\\mathrm{d}\\sigma_{\\nu N}^i\/\\mathrm{d}\\omega\\right|_{\\mathrm{inel}}(E_\\nu,\\omega)$ \ncan be obtained using the \\texttt{GiBUU} code, as described in section \\ref{sec_gibuu}, and we evaluate it \nat the incoming DM kinetic energy, $E_\\nu = T_\\chi$.\n On the other hand, a possible estimate for the denominator -- the elastic neutral current neutrino-nucleon cross \nsection without the form factor -- is the average of the proton and neutron cross sections in the $\\omega \\rightarrow 0$ \nlimit~\\cite{Formaggio:2012cpf}:\n\\begin{equation}\n\\label{eq:nuel_simp}\n\\left.\\frac{\\mathrm{d}\\sigma^i_{\\nu,\\mathrm{SI}}}{\\mathrm{d}\\omega} \\right|_{\\mathrm{el}} \n= \\frac{1}{2} \\sum_{j=n,p} \\frac{m_j G_F^2}{4\\pi}\\left[(g_A\\tau_3^j - \\Delta_S)^2 + (\\tau_3^j-2(1+\\tau_3^j)\\sin^2\\theta_W)^2\\right].\n\\end{equation}\nHere $\\tau_3^p=1$ and $\\tau_3^n=-1$, $\\theta_W$ is the weak mixing angle and $G_F$ is the Fermi constant.\nThe strange quark and axial-vector contributions are encoded in the parameters \n$\\Delta_S\\approx -0.15$ (see, e.g., Ref.~\\cite{Alberico:1997vh} for a discussion) and \n$g_A= 1.267$~\\cite{ParticleDataGroup:2008zun}, respectively. Numerically the square bracket evaluates\nto a factor of $\\sim\\!2.24\\,(2.01)$ for neutrons (protons). \nLet us stress, however, that this formula is valid only for energies \nrelevant for inelastic scattering, $0.1\\,\\mathrm{GeV}\\lesssim E_\\nu\\lesssim10$\\,GeV. \nAt much smaller energies, only the valence quarks contribute to the scattering, and we would instead have\n\\begin{equation}\n\\label{eq:nuel_simpNR}\n\\left.\\frac{\\mathrm{d}\\sigma^i_{\\nu,\\mathrm{SI}}}{\\mathrm{d}\\omega} \\right|_{\\mathrm{el}} \n= \\frac{m_n G_F^2}{4\\pi}\n\\end{equation}\nfor neutrons, while the scattering on protons is strongly suppressed by a factor of \n$Q_W^2=(1-4\\sin^2\\theta_W)^2\\approx0.012$.\n\n \nIt is worth noting that in principle, we could improve the assumption made in Eq.~(\\ref{eq:I2})\nfor the quasi-elastic process, because there is a \nwell-controlled understanding of the analytic QE cross-section via the Llewellyn-Smith formalism (see section~V \nof~Ref.~\\cite{Formaggio:2012cpf}). For clarity, we choose to take a consistent prescription across all inelastic processes, \nand we have checked that including the full QE cross-section would only introduce an \nadditional $\\mathcal{O}(1)$ factor in the DM QE cross-section.\nFor the numerical implementation in {\\sf DarkSUSY}, we pre-tabulate $I_{\\nu,i}$ from $T_\\chi=0.01$\\,GeV up to energies of $T_\\chi=10$\\,GeV,\nwith $200$ ($101$) equally log-spaced bins in $T_\\chi$ ($\\omega$)\nand a normalization as given by Eq.~(\\ref{eq:nuel_simp}), and then interpolate between \nthese values.\\footnote{%\nFor significantly higher energies, \\texttt{GiBUU} is no longer numerically stable. 
Furthermore, \nthe underlying equations that describe the interaction processes begin to fall outside their ranges of validity\nas the $Z$ boson mass starts to get resolved.\nAt higher energies, where anyway only the DIS contribution is non-negligible, a reasonable estimate \ncan still be obtained by a simple extrapolation \n$I_{\\nu,i}(T_\\chi,\\omega)\\to I_{\\nu,i}(T_\\chi^{\\rm ref},\\omega^{\\rm ref})$, \nwith $\\omega^{\\rm ref} =\\omega\\,(T_\\chi^{\\rm ref}\/T_\\chi)^{0.25}$, beyond some reference energy\n$T_\\chi^{\\rm ref}\\approx10$\\,GeV\\@. By running \\texttt{GiBUU} up to $E_\\nu\\sim30$\\,GeV, we checked that\nthis prescription traces the peak location (in $\\omega$) of the DIS contribution very well, \nindependently of the exact choice of $T_\\chi^{\\rm ref}$. We also confirmed that \nthe peak value of $I$ becomes roughly constant for such large energies. \nOn the other hand, higher-order inelastic processes are expected to become increasingly important \nat very large energies, not covered in \\texttt{GiBUU}.\nWe therefore only add the above extrapolation as an {\\it option} in {\\sf DarkSUSY}, and instead completely cut the incoming CRDM\nflux at $10$\\,GeV in the default implementation. As a result, our bounds on the interaction strength\nmay be overly conservative for small DM masses $m_\\chi\\lesssim0.1$\\,GeV.\n}\n\nWe also must choose the reference values for the transferred momentum $Q^2_{i, \\mathrm{ref}}$, which allows us to account \nfor e.g.~mediators that may be much lighter than the electroweak scale. \nImportantly, each process (quasi-elastic, \n$\\Delta$-resonance,...) is expected to have a different characteristic $Q^2$-$\\omega$ dependence that takes into account the \nrelevant binding energies and kinematic scaling. For example, in the case of elastic scattering, the \nrelation $Q^2 = 2 m_N \\omega$ holds, whilst for quasi-elastic processes, the relevant scattering component is a nucleon such \nthat the cross-section is peaked around $Q^2 \\sim 2 \\,{\\overline{ m}}\\,\\omega$, where ${\\overline m} \\equiv (m_n + m_p)\/2$. \nThe resonance of a particle with mass $m_\\mathrm{res}$ can be accounted for by noting that part of the transferred\nkinetic energy is used to excite the resonance, such that the cross-section peaks around \n$Q^2 \\sim 2\\, {\\overline m}\\, (\\omega - (m_\\mathrm{res} - \\overline{m}))$. \nWe have confirmed these expectations numerically by comparing directly to the \ndoubly-differential cross-section extracted from \\texttt{GiBUU}.\nFrom this numerical comparison we further extract that $Q^2 \\sim 0.6\\,\\overline{m}\\,(\\omega \\!-\\! 
\\omega_{\\rm DIS})$,\nwith $\\omega_{\\rm DIS}=1.0$\\,GeV, constitutes a very\ngood fit to the peak location of the DIS cross-section.\nIn summary, we take the following reference values across the four inelastic processes:\n\\begin{align}\n\tQ^2_{\\mathrm{QE}, \\mathrm{ref}} = 2\\, {\\overline m} \\omega \\, \\, , \\,\\,\\,\\, &Q^2_{\\Delta, \\mathrm{ref}} = 2\\, {\\overline m}\\, (\\omega - \\Delta m_\\Delta) \\nonumber \\\\\n\tQ^2_{\\mathrm{res}, \\mathrm{ref}} = 2\\, {\\overline m}\\, (\\omega - \\Delta m_{\\mathrm{res}}) \\, \\, , \\,\\,\\,\\, &Q^2_{\\mathrm{DIS}, \\mathrm{ref}} = 0.6\\, {\\overline m}\\, (\\omega - \\omega_{\\rm DIS})\\,.\n\\end{align}\nHere, $\\Delta m_\\Delta = 0.29 \\, \\mathrm{GeV}$ is the mass difference between the $\\Delta$ baryon and an average nucleon, \nand $\\Delta m_\\mathrm{res} = 0.40 \\, \\mathrm{GeV}$ is an estimate for the corresponding average mass difference of the higher \nresonances (we checked that our final limits are insensitive to the exact value taken here).\n\n\n\\begin{figure}[t]\n\t\\begin{center}\n\t\\includegraphics[width=0.9\\textwidth]{figures_draft\/figure_three_wl}\n\t\\caption{Comparison between the elastic (green, lower energies) and inelastic (blue, higher energies) contributions to the \n\tDM-nucleus differential cross section $\\mathrm{d}\\sigma_{\\chi N}\/\\mathrm{d}\\omega$, where $\\omega$ is the \n\tDM energy loss. This figure shows these contributions \tfor a constant isospin-conserving DM-nucleus cross section, with \n\t$m_\\chi = 1\\, \\mathrm{GeV}$ and $N = {^{16}}\\mathrm{O}$. The small colorbar on \n\tthe inset of the plots, along with the stated numerical ratio, indicates the balance between elastic and inelastic\n\tscattering in terms of the contribution to the integrated cross section \n\t$\\sigma_{\\chi N}^\\mathrm{tot}$.\n\t}\n\t\\label{fig:mchi_1GeV}\n\t\\end{center}\n\\end{figure}\n\n\nTo illustrate this procedure concretely, we consider the simple case of a contact interaction where, cf.~Eq.~(\\ref{eq:siconst}), \n$\\left.\\mathrm{d}\\sigma_{{\\rm SI}}\/\\mathrm{d}\\omega\\right|_{\\mathrm{el.}} = \\sigma_{\\mathrm{SI}} \/ \\omega^\\mathrm{max}$ and \n$\\omega^\\mathrm{max} = 2\\, {\\overline m} (T_\\chi^2 + 2 m_\\chi T_\\chi) \/ (({\\overline m} + m_\\chi)^2 + 2 {\\overline m} T_\\chi)$. The results for the \nrescaled inelastic cross-section (blue) are shown in Fig.~\\ref{fig:mchi_1GeV} for a DM mass $m_\\chi = 1\\,\\mathrm{GeV}$ incident \non a $^{16}\\mathrm{O}$ nucleus. In this figure, we also compare to the coherent elastic contribution (green) and highlight the \nbalance between the relative contributions to the total (integrated) cross-section $\\sigma^\\mathrm{tot}_{\\chi N}$. In particular, we \nsee that above kinetic energies $T_\\chi \\gtrsim 0.2\\,\\mathrm{GeV}$, the inelastic contribution dominates, clearly \nmotivating the necessity of its inclusion. This is consistent with the picture previously encountered in Fig.~\\ref{fig:attenuation_ff}, \nwhere we could see the impact of inelastic scattering on the energy loss. More concretely, \nthe result lies in an intermediate regime between the $G(Q^2) = 1$ and $G(Q^2) \\neq 1$ cases, the former\/latter leading \nto conservative\/overly optimistic limits, respectively. 
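For the contact-interaction example of Fig.~\\ref{fig:mchi_1GeV}, the assembly of the total energy-loss cross section in Eq.~(\\ref{eq:rescaling}) can be sketched as follows; this is a minimal illustration only, in which the nuclear form factor {\\tt G2} and the interpolated \\texttt{GiBUU}-based ratios {\\tt I\\_tables} are assumed to be supplied externally, and none of the function names correspond to the actual {\\sf DarkSUSY} implementation:\n\\begin{verbatim}\nmbar = 0.9389   # average nucleon mass (m_n + m_p)/2 in GeV\n\ndef omega_max(T_chi, m_chi, m_target):\n    # maximal elastic energy loss on a target of mass m_target\n    s = (m_target + m_chi)**2 + 2.0 * m_target * T_chi\n    return 2.0 * m_target * (T_chi**2 + 2.0 * m_chi * T_chi) / s\n\ndef dsigma_nucleon(omega, T_chi, m_chi, sigma_SI):\n    # point-like nucleon reference, d sigma_SI / d omega = sigma_SI / omega_max\n    wmax = omega_max(T_chi, m_chi, mbar)\n    return sigma_SI / wmax if omega <= wmax else 0.0\n\ndef dsigma_total(omega, T_chi, m_chi, sigma_SI, m_N, A, G2, I_tables):\n    # coherent elastic term plus rescaled inelastic contributions\n    mu_N = m_N * m_chi / (m_N + m_chi)\n    mu_p = mbar * m_chi / (mbar + m_chi)\n    C2 = A**2 * (mu_N / mu_p)**2\n    wmax_N = omega_max(T_chi, m_chi, m_N)\n    el = C2 * sigma_SI / wmax_N * G2(2.0 * m_N * omega) if omega <= wmax_N else 0.0\n    inel = sum(dsigma_nucleon(omega, T_chi, m_chi, sigma_SI) * I(T_chi, omega)\n               for I in I_tables)\n    return el + inel\n\\end{verbatim}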
\nIn the next section we will derive the relevant CRDM limits in the \n$\\sigma_{\\mathrm{SI}}-m_\\chi$ plane for a number of models to make this point quantitatively.\n\n\nLet us conclude this section by briefly returning to the implicit assumption of isospin-conserving DM interactions that\nwe made above, with \n$\\sigma_{\\mathrm{SI}}=\\sigma^p_{\\mathrm{SI}}=\\sigma^n_{\\mathrm{SI}}$.\nInterestingly, neutral-current induced\ninelastic scatterings between neutrinos and nucleons hardly distinguish between protons and \nneutrons~\\cite{Formaggio:2012cpf}, such that the factor $I_{\\chi,i}\\approx I_{\\nu,i}$ indeed becomes, by construction,\nlargely independent of the nucleon nature. Naively, one would thus conclude that\nisospin-violating DM couplings can easily be incorporated in our treatment of inelastic scattering by replacing\n$\\sigma_{\\mathrm{SI}} \\to (1 \/ A) \\times (Z \\sigma^p_{\\mathrm{SI}} + (A - Z) \\sigma^n_{\\mathrm{SI}})$ in Eq.~(\\ref{eq:rescaling}).\nWhen doing so, however, it is important to keep in mind that the nucleon cross sections should be evaluated at \nenergies that are relevant for inelastic scattering, not in the highly non-relativistic limit.\nAt these high energies, isospin symmetry is typically largely restored because the nucleon couplings are no longer \nexclusively determined by the valence quarks, and instead receive corrections from a large number of sea quarks \n(and, in principle, gluons).\nAs pointed out above, the example of neutrino scattering illustrates this effect very clearly: even though isospin is almost\nmaximally violated at low energies, the effective neutrino couplings to neutrons and protons agree within\n$\\sim5$\\,\\% at energies around 0.1\\,GeV, cf.~Eqs.~(\\ref{eq:nuel_simp}) and (\\ref{eq:nuel_simpNR}). In practice, however,\na possible complication often arises in that the nucleon couplings $g_n$ and $g_p$ are only provided in the highly\nnon-relativistic limit. \nIn that case, an educated guess for $\\sigma_{\\rm SI}$ in the second term of Eq.~(\\ref{eq:rescaling})\nis to anyway take the leading order (Born) expression -- but to adopt (effective)\nvalues for {\\it both} nucleon couplings that correspond to\nthe maximum of $\\left|g_p\\right|$ and $\\left|g_n\\right|$ in the non-relativistic limit. \nThis induces a model-dependent uncertainty \nin the normalization of the inelastic contribution that can in principle only be avoided by fully implementing the concrete\ninteraction model in a code like \\texttt{GiBUU}. On the other hand, the neutrino example illustrates that this error should\ngenerally not be expected to be larger than a factor of $\\sim$\\,2, implying that for most applications such a \nmore sophisticated treatment is not warranted.\n\n\n\\section{Contact interactions and beyond}\n\\label{sec:m2}\n\nIn sections \\ref{sec:form_factors} and \\ref{sec:inel} we have discussed in detail the \n$Q^2$-dependence that arises due to both form factor suppression and inelastic\nscattering, as well as the impact this has on the production and attenuation of the CRDM flux.\nThis does not yet take into account, however, the possible angular and energy dependence of the\nelastic scattering cross section itself. In fact, for \\mbox{(sub-)GeV} DM, a significant dependence of this type is \nactually expected in view of null searches for new light particles at colliders. 
For example, it has been \ndemonstrated in a recent global analysis~\\cite{GAMBIT:2021rlp} that it is impossible to satisfy all relevant \nconstraints simultaneously (even well above GeV DM masses) \nand at the same time maintain the validity of an effective field theory description at LHC energies.\n\n\nOf course, this necessarily introduces a model-dependent element to the discussion, and in this section, the\naim will be to analyse the most generic situations that can appear when considering models beyond simple contact \ninteractions. Concretely, in section~\\ref{sec:scalar} we will study the case of a light scalar mediator, a light vector\nmediator in section~\\ref{sec:vector}, and the scenario where DM particles have a finite extent\nin section~\\ref{sec:puffy}. In all these cases, we will re-interpret the published Xenon-1T limits and assess \nwhether there is a remaining unconstrained window of large scattering cross sections for GeV-scale DM. \nJust before this, however, in section~\\ref{sec:const} we will briefly revisit the (physically less motivated) case of \na constant cross-section, which can be viewed as the highly non-relativistic limit of a contact interaction. \nThis will allow us to illustrate how the resulting CRDM constraints compare with \nestablished bounds from both surface and astrophysical experiments, as well as provide a more direct comparison with the \nexisting literature.\n\n\n\n\\subsection{Constant cross section}\n\\label{sec:const}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.99\\textwidth]{figures_draft\/figure_four_wl}\n\\caption{%\n{\\it Left panel.} Limits on a constant spin-independent DM-nucleon scattering\ncross section as a function of the DM mass, \nbased on a re-interpretation of Xenon-1T limits on non-relativistic DM~\\cite{XENON:2018voc}\nfor the CRDM component studied in this work (solid lines).\nDash-dotted lines show the excluded region that results when assuming a constant cross\nsection in the attenuation part (as in Ref.~\\cite{Bringmann:2018cvk}).\nDashed lines show the effects of adding form factors in the attenuation part, but no \ninelastic scattering, resulting in limits similar to those derived in Ref.~\\cite{Xia:2021vbz}.\nFor the latter case, for comparison, we also show the effect of artificially cutting the incoming CRDM flux \nat the indicated energies.\\\\ \n{\\it Right panel.} Updated CRDM limits (coinciding with the solid lines from the left panel) in comparison \nto limits from the Lyman-$\\alpha$ forest~\\cite{Rogers:2021byl}, the Milky Way satellite \npopulation~\\cite{Maamari:2020aqz}, gas clouds in the Galactic Centre \nregion~\\cite{Bhoonah:2018gjb}, the XQC experiment~\\cite{McCammon:2002gb,Mahdawi:2018euy}, and \na recently analysed storage dewar experiment~\\cite{Neufeld:2019xes,Xu:2021lmg}.\nWe also show upper limits on the cross section as published by the CRESST \ncollaboration~\\cite{CRESST:2017ues} (solid green lines), based on a surface run of their \nexperiment, along with the maximal cross section where \nattenuation does not prevent DM from leaving a signal in the detector~\\cite{Emken:2018run}.\nAlternative limits are indicated by green dashed~\\cite{Mahdawi:2018euy} \nand dash-dotted lines~\\cite{Xu:2020qjk}, based on the assumption of a thermalization efficiency \nof $\\epsilon_{\\rm th}=2$\\,\\% and $\\epsilon_{\\rm th}=1$\\,\\%, respectively, which is \nsignificantly worse than the one adopted in the CRESST 
analysis.\n}\n\\label{fig:constraints_constant}\n\\end{center}\n\\end{figure}\n\nFor the discussion of a constant cross section, we will again consider the case of spin-independent scattering with isospin \nconserving nucleon couplings, as described by Eq.~(\\ref{eq:siconst}). In the left panel of Fig.~\\ref{fig:constraints_constant}, we \nshow our improved constraints from a re-interpretation of the Xenon-1T limits in this case.\nBroadly, these updated and refined CRDM limits cover the mass range up to \n$m_\\chi \\lesssim 10 \\,\\mathrm{GeV}$ for cross-sections \n$10^{-31}\\,\\mathrm{cm}^2 \\lesssim \\sigma_{\\mathrm{SI}} \\lesssim 2 \\times 10^{-28}\\,\\mathrm{cm}^2$.\n\nFor comparison, we also indicate (with dash-dotted lines) the limits that result when neglecting both form-factor\ndependence of the cross section and inelastic scatterings in the attenuation part. As expected, this leads to a shape of the \nexcluded region very similar to that originally derived in Ref.~\\cite{Bringmann:2018cvk}, where the same simplifying\nassumptions were made. As a result of our improved treatment of CR fluxes and form factors, \nhowever, the limits indicated with dash-dotted lines are overall slightly more stringent than what is reported in that analysis.\nWe find that for very light DM, with $m_\\chi\\lesssim10$\\,MeV, this simplistic treatment actually leads to rather\nrealistic limits, the reason being that for highly relativistic particles the typical momentum transfer is always so large\nthat efficient inelastic scattering becomes relevant. For heavier DM masses, on the other hand, this treatment clearly \noverestimates the stopping power because it neglects the form factor suppression relevant for semi-relativistic DM\nscattering on nuclei.\n\nDashed lines furthermore show the effect of adding the form factor suppression during the attenuation in the soil, as done in \nRef.~\\cite{Xia:2021vbz}, but still not including inelastic scattering. Clearly, this vastly underestimates the actual\nattenuation taking place and therefore appears to exclude very large cross sections.\\footnote{%\nCompared to Ref.~\\cite{Xia:2021vbz}, we also find that the excluded region extends to somewhat larger DM masses,\nmostly as a result of our updated treatment of elastic form factors.\nOn the other hand, we recall that our attenuation prescription is based on the analytical energy loss treatment outlined in \nsection~\\ref{sec:crdm}, rather than a full Monte Carlo simulation. This likely overestimates the maximally excluded DM mass,\nbut only by less than a factor of 2~\\cite{Xia:2021vbz}.\n} \nIn order to gain a better intuitive understanding for the shape and strength of our final limits, finally, we also indicate the \neffect of neglecting inelastic scattering and instead artificially cutting the CRDM flux (prior to entering the soil) above \nsome given energy. The resulting upper limit on the cross section that can be probed in this fiducial setup strongly suggests that \ninelastic scattering events very efficiently stop the incident CRDM flux in the overburden as soon as they become\nrelevant compared to elastic scattering events. 
From Fig.~\\ref{fig:constraints_constant}, and well in accordance with the\nexpectations from section \\ref{sec:inel}, this happens at CRDM energies $T_\\chi\\gtrsim0.2$\\,GeV.\n\nIn the right panel of Fig.~\\ref{fig:constraints_constant} we show our improved constraints \nfrom a re-interpretation of the Xenon-1T limits in comparison with complementary limits from direct\nprobes of the DM-nucleon scattering cross section.\nAt small DM masses the dominant\nconstraint results from analysing the distribution of large-scale structures as traced by the Lyman-$\\alpha$ \nforest. This is based on the fact that protons scattering too strongly off DM \nwould accelerate the latter and thereby suppress the matter power spectrum at sub-Mpc scales. \nSuch limits have recently been significantly tightened~\\cite{Rogers:2021byl}, utilizing\nstate-of-the-art cosmological hydrodynamical simulations of the intergalactic medium at redshifts \n$2\\lesssim z\\lesssim6$. Similar bounds from the CMB (not shown here) are generally weaker by up to three \norders of magnitude~\\cite{Rogers:2021byl, Planck:2015bpv, Xu:2018efh}, while\nthe Milky Way satellite population~\\cite{Maamari:2020aqz} -- as inferred from the Dark Energy Survey and \nPanSTARRS-1~\\cite{DES:2019vzn} -- places bounds that are roughly one order of magnitude weaker.\nBeyond cosmological bounds, cold gas clouds near the Galactic Center provide an interesting complementary testbed, \nin particular at high DM masses, where halo DM particles scattering too efficiently on the much {\\it colder} baryon population \nwould heat up the latter~\\cite{Bhoonah:2018wmw}. Here we show updated \nconstraints~\\cite{Bhoonah:2018gjb} based on \nthe cloud G357.8-4.7-55, noting that these constraints might be improved by more than one order\nof magnitude if G1.4-1.8+87 is indeed as cold as $T\\leq22$\\,K (as reported in \nRefs.~\\cite{McClure-Griffiths:2013awa,DiTeodoro:2018ybg} but disputed in Ref.~\\cite{Farrar:2019qrv}).\nWe also display the limits~\\cite{Mahdawi:2018euy} that result from the ten minutes' flight of the X-ray Calorimetry \nRocket (XQC)~\\cite{McCammon:2002gb}, based on the observation that ambient DM particles scattering\noff the silicon nuclei in the quantum calorimeter would deposit (part of) their energy in the \nprocess~\\cite{Wandelt:2000ad,Zaharijas:2004jv,Erickcek:2007jv}. \nIn deriving these XQC limits, one must take into account that the recoil energy of a silicon nucleus \npotentially thermalizes much less efficiently in the calorimeter than the $e^\\pm$ pairs produced from \nan incoming X-ray photon, such that a nuclear recoil energy $T_N$ will leave a signal equivalent \nto a photon with a reduced `thermal' recoil energy $T_T=\\epsilon_{\\rm th} T_N$. Concretely, the limits shown in the \nplot are based on the very conservative assumption of a thermalization efficiency factor of $\\epsilon_{\\rm th} = 0.02$.\\footnote{%\nWhen the scattering is mediated by a Yukawa-like interaction, a perturbative description of the scattering\nprocess may no longer be adequate. In that case the constraints shown here, in particular for XQC, receive \ncorrections due to non-perturbative effects leading to resonances or anti-resonances in the scattering cross \nsection~\\cite{Xu:2020qjk}. 
Here, we will not consider this possibility further, noting that a variation of the relatively \nuncertain value of $\\epsilon_{\\rm th}$ anyway has a larger impact on the XQC limits~\\cite{Mahdawi:2018euy}.\n}\n\nFurthermore, in order to directly probe sub-GeV DM with very large cross sections, the CRESST collaboration has performed\na dedicated surface run of their experiment~\\cite{CRESST:2017ues}, deliberately avoiding the shielding \nof the Gran Sasso rock used in the standard run~\\cite{CRESST:2015txj}. The result of this search is the exclusion region \nindicated by the solid green line in Fig.~\\ref{fig:constraints_constant}. Here, upper bounds on the cross section correspond to \nthe published limits, obtained under the assumption that any attenuation in the overburden can be neglected. \nModelling the effect of attenuation with detailed numerical simulations also results in \nthe exclusion region limited from above~\\cite{Emken:2018run}, coming from the fact that one must have a sufficiently large flux of \nDM particles at the detector location.\nIn a series of papers, Farrar~\\textit{et al.}~have claimed \nthat the CRESST thermalization efficiency adopted in the official analysis is too optimistic~\\cite{Mahdawi:2018euy,Wadekar:2019mpc,Xu:2020qjk,Xu:2021lmg}, challenging the general ability of the experiment to probe sub-GeV DM.\nWe indicate the resulting alternatives to the published CRESST limits in the same figure, albeit noting that the underlying assumption \nof an efficiency as low as $\\epsilon_{\\rm th}\\sim1$\\,\\% is not supported by data or simulations.\nFor example, no indication for such a dramatic loss of efficiency at low energies is observed for neutrons from an AmBe \nneutron calibration source~\\cite{florian}.\n\nTo summarise, Fig.~\\ref{fig:constraints_constant} illustrates the fact that the existence of the CRDM component provides an \nimportant probe of strongly interacting light DM. In particular, below $m_\\chi\\lesssim100$\\,MeV, it restricts parameter space that \nis otherwise either unconstrained or only testable with \ncosmological probes (which -- at least to some degree -- are \nsubject to modelling caveats regarding the Lyman-$\\alpha$ forest and the non-linear evolution of density perturbations \nat small scales; see, e.g., Refs.~\\cite{Hui:2016ltb,Irsic:2017ixq}). The CRDM component also leads to\nhighly relevant complementary constraints up to DM masses of a few GeV, especially when noting that \nthese constraints are independent of the thermalization efficiency discussion above.\n\n\n\n\\subsection{Scalar mediators}\n\\label{sec:scalar}\n\nAs our first example beyond a constant scattering cross section we consider the case where \na new light scalar particle $\\phi$ mediates the interaction between DM and nucleons. \nWe thus consider the interaction Lagrangian \n\\begin{equation}\n\\mathcal{L}_{\\rm int}= - g_\\chi \\phi\\overline\\chi\\chi - g_p \\phi \\overline p p- g_n \\phi \\overline n n\\,,\n\\end{equation}\nand assume, for simplicity, isospin conservation ($g_p = g_n $).\nAt the level of the effective nuclear interaction Lagrangian, the dominant interaction terms with scalar ($N_0$) and fermionic ($N_{1\/2}$) \n{\\it nuclei} are thus given by\\footnote{%\nWhile the dominant cosmic-ray nuclei are either scalar or spin $1\/2$ particles, some heavier nuclei in the overburden\nhave higher spins. 
 For simplicity we treat those nuclei as scalars when determining their contribution to the energy loss,\nas described by Eq.~(\\ref{eq:eloss}), noting that this induces a negligible error in the estimated elastic scattering cross section,\nof the order of $Q^2\/m_N^2\\ll1$. Moreover, nuclei with higher spins make up only about 2\\% of the total mass in the overburden.\n}\n\\begin{equation}\n\\label{eq:leff_scalar}\n\\mathcal{L}_{\\rm int}= -g_N\\left( 2m_N N_0N_0+\\overline N_{1\/2}N_{1\/2}\\right)\\,.\n\\end{equation}\nHere, the dimensionful coupling to scalar nuclei has been normalized such that both terms in the above expression result\nin the same scattering cross section in the highly non-relativistic limit. In addition, the coupling to individual nucleons is coherently \nenhanced across the nucleus, resulting in an effective coupling to both scalar and fermionic nuclei given by\n\\begin{equation}\n\\label{eq:gN_coh}\ng_N^2 =A^2 \\, g_p^2 \\times G_N^2(Q^2)\\,,\n\\end{equation}\nwhere $G_N$ is the same form factor as in the case of a `constant' cross section.\nFor the resulting elastic scattering cross section for DM incident on nuclei at rest we find\n\\begin{equation}\n\\label{diffsig_full_scalar}\n\\frac{d\\sigma_{\\chi N}}{d T_N}=\\frac{\\mathcal{C}^2\\sigma_{\\rm SI}^\\mathrm{NR}}{T_N^\\mathrm{max}}\n\\frac{m_\\phi^4}{(Q^2+m_\\phi^2)^2}\n\\frac{m_N^2\\left(Q^2+4m_\\chi^2 \\right)}{4 s\\,{\\mu_{\\chi N}^2}}\n\\times\\left\\{\n\\begin{array}{ll}\n1& ~~\\mathrm{for~scalar~}N\\\\\n1+\\frac{Q^2}{4m_N^2} & ~~\\mathrm{for~fermionic~}N\n\\end{array}\n\\right\\}\n\\times G_N^2(Q^2)\\,,\n\\end{equation}\nwhere $\\mu_{\\chi N}$ ($\\mu_{\\chi p}$) denotes the reduced mass of the DM\/nucleus (DM\/nucleon) system and \n\\begin{equation}\n\\label{eq:sig0_scalar}\n\\sigma_{\\rm SI}^\\mathrm{NR} = \\frac{g_\\chi^2 g_p^2 \\mu_{\\chi p}^2}{\\pi m_\\phi^4}\n\\end{equation}\nis the spin-independent scattering cross section {\\it per nucleon} in the ultra non-relativistic limit.\nFor reference, the kinematic quantities $T_N^\\mathrm{max}$, $s$ and $Q^2$ are given by \nEqs.~(\\ref{eq:tmax}), (\\ref{eq:sdef}) and (\\ref{eq:q2}), respectively. For the production part of the process, \nwhere CR nuclei collide with DM at rest, one\nsimply has to exchange $T_N\\leftrightarrow T_\\chi$ and $m_\\chi\\leftrightarrow m_N$ in \nthese expressions for kinematic variables -- but not in the rest of Eq.~(\\ref{diffsig_full_scalar}) \n-- in order to obtain ${d\\sigma_{\\chi N}}\/{d T_\\chi}$.\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.99\\textwidth]{figures_draft\/figure_five_wl}\n\\caption{{\\it Left panel.} Solid lines show the CRDM flux before attenuation for a constant interaction\ncross section, as in Fig.~\\ref{fig:flux_ff}, for DM masses $m_\\chi=10$\\,MeV and $m_\\chi=1$\\,GeV. \nFor comparison we also indicate the corresponding CRDM flux for a {\\it scalar} mediator, cf.~Eq.~(\\ref{diffsig_full_scalar}), \nwith mass $m_\\phi=100$\\,MeV (dash-dotted), $m_\\phi=10$\\,MeV \n(dashed) and $m_\\phi=1$\\,MeV (dotted), for a cross section (in the non-relativistic limit) of \n$\\sigma_{\\rm SI}^{\\rm NR}=10^{-30}\\,\\mathrm{cm}^2$. \n{\\it Right panel.} \nMinimal kinetic energy $T_\\chi$ that a DM particle must have, prior to attenuation, in order \nto trigger a signal in the Xenon-1T experiment. Line styles and colors match those of the left\npanel. 
In particular, solid lines show the case of a constant spin-independent scattering cross \nsection and are identical to those displayed in Fig.~\\ref{fig:attenuation_ff}.\n}\n\\label{fig:results_scalar}\n\\end{center}\n\\end{figure}\n\nIn the left panel of Fig.~\\ref{fig:results_scalar} we show the resulting CRDM fluxes for this\nmodel. For small kinetic energies these fluxes are, as expected, identical to those shown in Fig.~\\ref{fig:flux_ff} \nfor the case of a constant cross section. This is the regime where $Q^2=2m_\\chi T_\\chi$ is smaller\nthan the masses of both the mediator and CR nuclei, such that Eq.~(\\ref{diffsig_full_scalar}) reduces to \nEq.~(\\ref{eq:siconst}). For $Q^2\\gtrsim m_\\phi^2$, on the other hand, the presence of a light mediator \nclearly suppresses the fluxes. Note that the matrix element also contains a factor of $(Q^2+4m_\\chi^2)$, \nwhich additionally leads\nto a flux {\\it enhancement} for fully relativistic DM particles, $T_\\chi\\gtrsim 2 m_\\chi$. \nIn the figure, this latter effect is clearly visible for the case of $m_\\chi=10$\\,MeV and a heavy mediator.\nIn general, the appearance of such model-dependent features demonstrates the need to use the full matrix \nelement for the relativistic cross section. This is in contrast to the non-relativistic case, where a model-independent rescaling \nof the cross section by a factor of $(1+Q^2\/m_\\phi^2)^{-2}$ is usually sufficient to model the effect of a light mediator\n(see, e.g., Refs.~\\cite{Chang:2009yt,Fornengo:2011sz,Kaplinghat:2013yxa}).\n\nIn the right panel of Fig.~\\ref{fig:results_scalar}, we explore the minimal CRDM energy $T_\\chi$ that\nis needed to induce a detectable nuclear recoil. Compared to the situation of a constant scattering cross section (depicted by the \nsolid lines for easy comparison), the attenuation is as expected rather strongly suppressed when light scalar mediators are \npresent (with the exception of the\ncase with $m_\\chi=10$\\,MeV and $m_\\phi=100$\\,MeV, where the cross section is enhanced \ndue to the $(Q^2+4m_\\chi^2)$ factor in the squared matrix element). In order to understand the qualitative behaviour of \n$T_\\chi^{\\rm min} (z=0)$ better, we recall from the discussion of Fig.~\\ref{fig:attenuation_ff}\nthat there are two generic scaling regimes for solutions of the energy loss equation. Firstly, for cross-sections with no \n-- or only a mild -- dependence on the momentum transfer, $T_{\\chi}^{\\rm min}(z=0)$ grows exponentially \nwith increasing $\\sigma_{\\rm SI}^{\\rm NR}$. Secondly, in the presence of an effective cutoff in the cross section (like when form \nfactors or light mediators are introduced), $T_{\\chi}^{\\rm min}(z=0) \\propto \\sqrt{\\sigma_{\\rm SI}^{\\rm NR}}$ for large energies\n$T_\\chi$. These different regimes are clearly visible in the figure. \nFor the green dot-dashed curve ($m_\\chi=1$\\,GeV, $m_\\phi=100$\\,MeV), for example,\none observes as expected an initial steep rise at the smallest DM energies -- until the form factor and mediator suppression \nof the cross section cause a scaling with $\\sqrt{\\sigma_{\\rm SI}^{\\rm NR}}$ for kinetic energies above a few MeV. \nAt roughly $T_\\chi\\gtrsim0.1$\\,GeV, inelastic scattering kicks in, leading again to an exponential suppression of the flux. 
\nFor even higher energies, finally, the scattering cross section falls off so rapidly that the required initial DM energy once\nagain only grows as $\\sqrt{\\sigma_{\\rm SI}^{\\rm NR}}$.\n\nTurning our attention to the resulting CRDM limits, it is worth stressing here that $\\sigma_{\\rm SI}^\\mathrm{NR}$, as introduced in Eq.~(\\ref{eq:sig0_scalar}),\nis a somewhat artificial object that only describes the cross section for physical processes\nrestricted to $Q^2\\lesssim m_\\phi^2$.\nIn a direct detection experiment like Xenon-1T this is necessarily violated for $m_\\phi\\lesssim \\sqrt{2m_N T_N^{\\rm thr}}\\sim35$\\,MeV, \ngiven that $T_N^{\\rm thr}=4.9$\\,keV is the minimal recoil \nenergy needed to generate a signal. This makes a straightforward comparison \nto the $\\sigma_{\\rm SI}$ appearing in the `constant cross section' case discussed in \nsection \\ref{sec:const} challenging. The most meaningful comparison is instead obtained by defining a {\\it reference cross section}\n\\begin{equation}\n\\label{eq:Qref}\n \\tilde \\sigma_{\\rm Xe,SI}^p \\equiv \\sigma_{\\rm SI}^\\mathrm{NR}\\times \\frac{m_\\phi^4}{(Q_{\\rm Xe,ref}^2+m_\\phi^2)^2}\n\\frac{Q^2_{\\rm Xe,ref}+4m_\\chi^2}{4m_\\chi^2}\\,,\n\\end{equation}\nwhere $Q_{\\rm Xe,ref}\\sim35$\\,MeV. It follows from Eq.~(\\ref{diffsig_full_scalar}) and Eq.~(\\ref{eq:siconst}), \nand the fact that $s\\approx (m_\\chi+m_N)^2$ for the energies of interest here, that $ \\tilde \\sigma_{\\rm Xe,SI}^p$ \nshould be interpreted as the effective CRDM cross section per nucleon that is dominantly seen in the Xenon-1T \nanalysis window.\nIt is thus this quantity, not the $\\sigma_{\\rm SI}^\\mathrm{NR}$ from Eq.~(\\ref{eq:sig0_scalar}), that should\nbe compared to the published Xenon-1T limits on the DM-nucleon cross section.\n\nThis also allows us to address the question of how the limits on the DM-nucleon coupling coming from the CRDM component\ncompare to the complementary constraints introduced in section \\ref{sec:const} (cf.~the right\npanel of Fig.~\\ref{fig:constraints_constant}). In order to do so, one first needs to\nrealize that all of those limits are derived under the assumption of non-relativistic\nDM and a constant cross section. In reality, however, they probe very different physical environments and typical \nmomentum transfers. In order to allow for a direct comparison, therefore, they also need to be re-scaled to a common \nreference cross section.\nAssuming that the DM energies in Eq.~(\\ref{diffsig_full_scalar}) are non-relativistic, a reported limit on the DM-nucleon \ncross-section $\\sigma_{\\rm SI}^p$ from an experiment probing typical momentum transfers of the order $Q^2_{\\rm ref}$ would \ncorrespond to a cross section of\n\\begin{equation}\n\\label{eq:rescale_scalar}\n \\tilde \\sigma_{\\rm Xe,SI}^p = \\sigma_{\\rm SI}^p\\times \n \\left(\\frac{Q^2_{\\rm ref}+m_\\phi^2}{Q^2_{\\rm Xe,ref}+m_\\phi^2}\\right)^2\n\\frac{Q^2_{\\rm Xe,ref}+4m_\\chi^2}{Q^2_{\\rm ref}+4m_\\chi^2}\n\\end{equation}\nin the Xenon-1T detector. As an example, consider the CRESST surface run~\\cite{CRESST:2017ues}, where a threshold energy of $\\sim20$\\,eV\nfor the sapphire \ndetector would imply $Q^2_{\\rm ref}\\sim (0.98\\,\\mathrm{MeV})^2\/\\epsilon_{\\rm th}$. 
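\nAs a rough illustration of where such numbers come from, the short \\texttt{Python} sketch below evaluates the reference momentum transfer $Q_{\\rm ref}\\simeq\\sqrt{2m_N T_N}$ for the two cases just quoted and implements the rescaling of Eq.~(\\ref{eq:rescale_scalar}). We stress that this is purely illustrative and not part of our actual analysis pipeline; in particular, taking aluminium recoils for the sapphire target, and identifying the nuclear recoil behind the CRESST threshold with $T_N^{\\rm thr}\/\\epsilon_{\\rm th}$, are simplifying assumptions made here only for definiteness.\n\\begin{verbatim}\nimport numpy as np\n\n# Approximate nuclear masses in MeV (illustrative values only)\nm_Xe, m_Al = 122.3e3, 25.1e3\n\ndef Q_ref(m_N, T_N):\n    # reference momentum transfer for an elastic nuclear recoil: Q^2 = 2 m_N T_N\n    return np.sqrt(2.0 * m_N * T_N)\n\nprint(Q_ref(m_Xe, 4.9e-3))   # Xenon-1T threshold of 4.9 keV      -> ~35 MeV\nprint(Q_ref(m_Al, 19.7e-6))  # CRESST surface run for eps_th = 1  -> ~1 MeV\n                             # (scales as 1\/sqrt(eps_th) for eps_th < 1)\n\ndef rescale_to_xenon(sigma_p, Q_exp, m_chi, m_phi, Q_Xe=35.0):\n    # translate a limit sigma_p, quoted at a typical momentum transfer Q_exp [MeV],\n    # into the reference cross section of Eq. (eq:Qref), following Eq. (eq:rescale_scalar)\n    return (sigma_p * ((Q_exp**2 + m_phi**2) \/ (Q_Xe**2 + m_phi**2))**2\n            * (Q_Xe**2 + 4 * m_chi**2) \/ (Q_exp**2 + 4 * m_chi**2))\n\\end{verbatim}\nFor example, \\texttt{rescale\\_to\\_xenon(1e-30, 1.0, 1e3, 10.0)} would translate a hypothetical limit of \n$10^{-30}\\,{\\rm cm}^2$, quoted at $Q_{\\rm ref}\\simeq1$\\,MeV, into the corresponding reference cross section at \n$Q_{\\rm Xe,ref}$ for $m_\\chi=1$\\,GeV and $m_\\phi=10$\\,MeV, suppressing it by more than two orders of magnitude.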
Similarly, a thermal recoil energy of $29$\\,eV\nin XQC corresponds to $Q^2_{\\rm ref}\\sim (8.7\\,\\mathrm{MeV})^2$ for the nuclear recoil on Si nuclei\n(assuming $\\epsilon_{\\rm th} = 0.02$ as for the unscaled limits).\nTurning to cosmological limits, a baryon velocity of $v_b^{\\rm rms}\\sim33$\\,km\/s at the times relevant for the emission\nof Lyman-$\\alpha$ photons~\\cite{Silk:1967kq} implies typical momentum transfers from the Helium nuclei to DM of \n$Q_{\\rm ref}^2\\sim4 \\mu_{\\chi{\\rm He}}^2\\times10^{-8}$. This means that, for the range of DM and mediator masses considered \nhere, the cross section at these times becomes roughly constant and we can approximate $Q_{\\rm ref}^2\\approx0$ \nin Eq.~(\\ref{eq:rescale_scalar}). The same applies to the constraints stemming from the MW satellite abundance,\nwhich are sensitive to even lower redshifts and thus smaller momentum transfers~\\cite{Nadler:2019zrb,Maamari:2020aqz}.\n\nIn Fig.~\\ref{fig:limits_scalar} we show a subset of these correspondingly rescaled constraints\\footnote{%\nUpper bounds on the excluded cross section, due to attenuation effects, cannot simply be rescaled as\nin Eq.~(\\ref{eq:rescale_scalar}). \nFor the sake of Fig.~\\ref{fig:limits_scalar}, we instead adopt a rather simplistic \napproach~\\cite{Davis:2017noy,Kouvaris:2014lpa,Emken:2017erx,Emken:2018run} to estimate these \nlimits by requiring that the most energetic halo DM particles, with an assumed velocity $v_{\\rm max}$,\ncan trigger nuclear recoils above the CRESST threshold of 19.7\\,eV\/$\\epsilon_{\\rm th}$ after attenuation in the \nEarth's atmosphere. For the average density and distribution of elements in the atmosphere, we follow Ref.~\\cite{USatm}.\nBy treating $v_{\\rm max}$ and the effective height of the atmosphere, $h_a$, \nas free parameters, we can then rather accurately fit the results of more detailed \ncalculations~\\cite{Emken:2018run,Mahdawi:2018euy} for the case of a constant cross section\n-- with numerical values in reasonable agreement with the physical expectation in such a heuristic approach. \nFinally, we adopt those values of $v_{\\rm max}$ and $h_a$ to derive the corresponding limits for the case of a scalar mediator,\nas displayed in Fig.~\\ref{fig:limits_scalar}.\n}\n -- for mediator masses \n$m_\\phi=1$\\,MeV, 10\\,MeV, 100\\,MeV and 1\\,GeV -- along with the full CRDM constraints derived here.\nWe also indicate, for comparison, with dotted black lines where non-perturbative couplings would be needed in this model to \nrealize the stated cross section. This line is only visible for the case of $m_\\phi=1$\\,GeV,\nwhich demonstrates that it is generically challenging to realize large cross sections without invoking light mediators.\nThe presence of an abundant species with a mass below a few MeV, furthermore, would affect how light elements are\nproduced during big bang nucleosynthesis (BBN). For a 1\\,MeV particle with one degree of freedom, like $\\phi$, \nthis can be formulated as a constraint of $\\tau>0.43$\\,s~\\cite{Depta:2020zbh} on the lifetime of such a particle.\nPhysically, this constraint derives from freeze-in production of $\\phi$ via the inverse decay process. 
\nSince $\\phi\\to\\gamma\\gamma$ (apart from $\\phi\\to\\bar \\nu\\nu$) is the only kinematically possible SM decay channel, \nthe translation of this bound to a constraint on the SM coupling $g_p$ is somewhat model-dependent.\nFor concreteness we consider the Higgs portal model, where the requirement $\\tau>1$\\,s at $m_\\phi=1$\\,MeV translates into \nan upper bound on the squared mixing angle,\n$\\sin^2\\theta=(8.62\\times10^2g_p)^2<3.8\\times10^{-4}$~\\cite{Krnjaic:2015mbs}. The area above the dashed\nline in the top left panel of Fig.~\\ref{fig:limits_scalar} requires either a {\\it larger} value of $g_p$ than what is given by this\nbound, or a non-perturbative coupling $g_\\chi^2>4\\pi$. This confirms the generic expectation that for very light particles \nBBN constraints are more stringent than those stemming from the CRDM \ncomponent~\\cite{Krnjaic:2019dzc,Bondarenko:2019vrb}.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{figures_draft\/figure_six_wl.pdf}\n\\caption{%\nLimits on the DM-nucleon scattering cross section evaluated at a reference momentum transfer of \n$Q_{\\rm Xe,ref}=35$\\,MeV, as a function of the DM mass $m_\\chi$. From top left to bottom right,\nthe panels show the case of a {\\it scalar mediator} with mass $m_\\phi=1$\\,MeV, 10\\,MeV, 100\\,MeV and 1\\,GeV.\nSolid purple lines show the updated CRDM limits studied in this work. We further\nshow limits from the Lyman-$\\alpha$ forest~\\cite{Rogers:2021byl}, the XQC \nexperiment~\\cite{McCammon:2002gb,Mahdawi:2018euy}, the CRESST surface \nrun~\\cite{CRESST:2017ues, Emken:2018run} and an alternative analysis of the CRESST \nlimits~\\cite{Mahdawi:2018euy}. All these limits are rescaled to match the situation of a light mediator,\nas explained in the text. The parameter region above the dotted black line in the bottom right panel\nrequires non-perturbative couplings, while the area above the dotted line in the top left panel is excluded\nby BBN.\n}\n\\label{fig:limits_scalar}\n\\end{center}\n\\end{figure}\n\n\nOur results demonstrate that in the presence of light mediators the largest DM mass that can be constrained \ndue to CR upscattering is reduced from about 10\\,GeV, cf.~Fig.~\\ref{fig:constraints_constant},\nto just above 1\\,GeV (for $m_\\phi\\sim1$\\,MeV). This is a direct consequence of the suppressed CRDM production\nrate discussed above. On the other hand, the reduction of the cross section also implies a smaller attenuation\neffect, thus closing parameter space at larger cross sections. More importantly, complementary constraints\nfrom cosmology and dedicated surface experiments become {\\it more stringent} in the presence of light mediators, \nonce they are translated to a common reference cross section. To put this in context, let us first recall that in the constant \ncross section case, Fig.~\\ref{fig:constraints_constant} tells us that cross sections \n$\\sigma_{\\rm SI}\\gtrsim 2\\cdot10^{-31}\\,{\\rm cm}^2$ are safely excluded across the entire DM mass range \n(or $\\sigma_{\\rm SI}\\gtrsim 6\\cdot10^{-31}\\,{\\rm cm}^2$ when assuming that the thermalization efficiency of \nCRESST is indeed as low as 2\\,\\%). From Fig.~\\ref{fig:limits_scalar} we infer that these limits can be somewhat \nweakened for sub-GeV DM, when considering light mediators in the mass range \n$10\\,{\\rm MeV}\\lesssim m_\\phi\\lesssim 100\\,{\\rm MeV}$ (as we will see further down, the situation of a vector mediator\nis not appreciably different from that of the scalar mediator shown here). 
Concretely, the upper bound on the \ncross section now becomes $\\tilde \\sigma_{\\rm SI}\\lesssim 3\\cdot10^{-31}\\,{\\rm cm}^2$, independently of the DM\n{\\it and} mediator mass. For a 2\\,\\% thermalization efficiency of CRESST~\\cite{Mahdawi:2018euy} and a narrow \nrange of mediator masses, $10\\,{\\rm MeV}\\lesssim m_\\phi\\ll100\\,{\\rm MeV}$,\na small window opens up above the maximal cross section that can be probed with CRESST. \nThe reason is the gap between Lyman-$\\alpha$ bounds and the weakened CRESST limits from Ref.~\\cite{Mahdawi:2018euy}\nthat is visible in the figure, for $m_\\phi\\gtrsim10\\,{\\rm MeV}$, and which is closed by the CRDM limits only for mediator\nmasses of $m_\\phi\\gtrsim 30$\\,MeV. Nominally, for $m_\\chi\\sim2$\\,GeV and $m_\\phi\\sim 30$\\,MeV, \nthis would allow for cross sections as large as $\\tilde \\sigma_{\\rm SI}\\sim 4\\cdot10^{-29}\\,{\\rm cm}^2$.\nIn either case, the conclusion remains that CRDM leads to highly complementary limits, and that this \nrelativistic component of the DM flux is in fact crucial for excluding the possibility of very large DM-nucleon\ninteractions.\n\n\n\\subsection{Vector mediators}\n\\label{sec:vector}\n\nWe next consider the general case of a massive vector mediator $V$, with interactions given by\n\\begin{equation}\n\\mathcal{L}= V_\\mu \\left(g_\\chi \\overline{\\chi}\\gamma^\\mu \\chi + g_{p}\\overline{p}\\gamma^\\mu p + g_{n}\\overline{n}\\gamma^\\mu n\\right)\\,.\n\\end{equation}\nWe will again assume $g_n=g_p$ for simplicity, noting that smaller values of the ratio $g_n\/g_p$ can lead to\nsignificantly smaller cross sections (see, e.g., Refs.~\\cite{Frandsen:2011cg,Kaplinghat:2013yxa}); in our context this would\nmostly imply that the attenuation in the overburden becomes less relevant, leading to more stringent constraints.\nIn analogy to Eq.~(\\ref{eq:leff_scalar}), this implies the following dominant interaction terms with scalar and fermionic nuclei, respectively:\n\\begin{equation}\n\\label{eq:leff_vector}\n\\mathcal{L}_{\\rm int}= -g_N V_\\mu\\left( i N_0^*{\\mathop{\\partial^\\mu}^{\\leftrightarrow}} N_0+\\overline N_{1\/2}\\gamma^\\mu N_{1\/2}\\right),\n\\end{equation}\nwhere the effective mediator coupling to nuclei, $g_N$, is again given by the coherent enhancement stated in Eq.~(\\ref{eq:gN_coh}).\nFor the elastic scattering cross section on nuclei we find\n\\begin{eqnarray}\n\\label{diffsig_full_vector}\n\\frac{d\\sigma_{\\chi N}}{d T_N}&=&\\frac{\\mathcal{C}^2 \\sigma_{\\rm SI}^\\mathrm{NR}}{T_N^\\mathrm{max}}\n\\frac{m_A^4}{(Q^2+m_A^2)^2}\n\\times G_N^2(Q^2)\\\\\n&&\n\\times\\frac{1}{4s \\mu_{\\chi N}^2}\n\\left\\{\n\\begin{array}{ll}\nm_\\chi^2Q^2-Q^2s+(s-m_N^2-m_\\chi^2)^2& ~~\\mathrm{for~scalar~}N\\\\\n\\frac12 Q^4 -Q^2s+(s-m_N^2-m_\\chi^2)^2& ~~\\mathrm{for~fermionic~}N\n\\end{array}\n\\right..\\nonumber\n\\end{eqnarray}\nHere, the cross section in the ultra-nonrelativistic limit,\n\\begin{equation}\n\\label{eq:sig0_vector}\n\\sigma_{\\rm SI}^\\mathrm{NR} = \\frac{g_\\chi^2 g_p^2 \\mu_{\\chi p}^2}{\\pi m_A^4}\\,,\n\\end{equation}\ni.e.~for $Q^2\\to0$ and $s\\to(m_N+m_\\chi)^2$, agrees exactly with the result obtained for the scalar case, as expected.\nFor large energies and momentum transfers, on the other hand, the behaviour\nis different.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.99\\textwidth]{figures_draft\/figure_seven_wl}\n\\caption{%\n{\\it Left panel.} \nMinimal kinetic energy $T_\\chi$ that a DM particle must have, prior to attenuation, in order \nto trigger 
a signal in the Xenon-1T experiment for DM nucleus interactions via a {\\it vector mediator},\nas a function of the spin-independent DM-nucleon scattering cross section in the highly non-relativistic limit, \n$\\sigma_{\\rm SI}^\\mathrm{NR}$. Yellow (green) lines indicate a DM mass\n$m_\\chi=10$\\,MeV ($m_\\chi=1$\\,GeV), and different line styles correspond to mediator masses \n$m_A=1,10,100$\\,MeV as indicated.\nSolid lines show the case of a constant spin-independent scattering cross \nsection and are identical to those displayed in Fig.~\\ref{fig:attenuation_ff}.\n{\\it Right panel.} \nConstraints on $\\sigma_{\\rm SI}^\\mathrm{NR}$ as a function of the DM mass $m_\\chi$. Solid purple lines \nrefer to the case of a constant cross section, as in Fig.~\\ref{fig:constraints_constant}, while other line \nstyles show the case where the interaction is mediated by a light scalar (red) or vector (green) particle \nwith mass $m_{\\rm med}=10$\\,MeV and $1$\\,GeV, respectively.}\n\\label{fig:limits_vector}\n\\end{center}\n\\end{figure}\n\nThe resulting CRDM fluxes are nonetheless so similar to the scalar case shown in the left panel of Fig.~\\ref{fig:results_scalar} \nthat we refrain from plotting them separately. Differences do exist, however, for the stopping power in the overburden.\nIn the left panel of Fig.~\\ref{fig:limits_vector} we therefore show the minimal initial kinetic energy needed by a CRDM\nparticle to induce detectable nuclear recoils in Xenon-1T. Compared to the scalar case, cf.~the right panel of \nFig.~\\ref{fig:results_scalar}, the attenuation is more efficient for highly relativistic DM particles due to the $s$-dependence \nof the terms in the second line of Eq.~(\\ref{diffsig_full_vector}). As before, the effect of these model-dependent terms\nfrom the scattering amplitude \nis most visible for highly relativistic particles, with small $m_\\chi$, and large mediator masses, where \nthe suppression due to the factor $(1+Q^2\/m_A^2)^{-2}$ is less significant.\n\nIn the right panel of Fig.~\\ref{fig:limits_vector} we compare the final exclusion regions for the \nsituations considered so far, i.e.~for a contact interaction, scalar mediators and vector mediators, respectively. \nFor the sake of comparison in one single figure, \nwe plot here the cross section in the ultra-nonrelativistic limit. For an interpretation of \nthese limits in comparison to complementary constraints on DM-nucleon interactions \nwe thus refer to the discussion of Fig.~\\ref{fig:limits_scalar},\nnoting that the rescaling prescriptions for vector and scalar mediators are qualitatively the same.\nThe first thing to take away from Fig.~\\ref{fig:limits_vector} is that, as expected, the exclusion regions \nfor heavy mediators resemble those obtained for the constant cross section case. The figure\nfurther demonstrates that the only significant difference between scalar and vector mediators appears \nat smaller mediator masses, where the former are somewhat less efficiently stopped in the overburden. \nIt is worth noting, however, that this region of parameter space where the vector and scalar cases differ substantially\nis nonetheless excluded by Lyman-$\\alpha$ bounds. 
\nThe general discussion and conclusions from the scalar mediator case \nexplored in the previous subsection thus also apply to interactions mediated by vector particles.\n\n\n\n\n\\subsection{Finite-size dark matter}\n\\label{sec:puffy}\n\nAs a final generic example of a $Q^2$-suppressed cross section let us consider the situation \nwhere the DM particle itself has a finite size that is larger than its Compton wavelength. \nThe corresponding scattering cross section then takes the same form as in the point-like case,\nmultiplied by another factor $\\left|G_\\chi(Q^2)\\right|^2$ that reflects the spatial extent of \n$\\chi$~\\cite{Feldstein:2009tr,Laha:2013gva,Chu:2018faw} (for concrete models see also, e.g., \nRefs.~\\cite{Nussinov:1985xr,Chivukula:1989qb,Cline:2013zca,Krnjaic:2014xza,Wise:2014ola,Hardy:2015boa,Coskuner:2018are,Contino:2018crt}).\nSpecifically, just as for nuclear form factors, we have \n\\begin{equation}\nG_\\chi(Q^2)=\\int d^3 x\\,e^{i\\mathbf{q}\\cdot\\mathbf{x}}\\rho_\\chi(\\mathbf{x})\\,,\n\\end{equation}\nwhere $\\rho_\\chi(\\mathbf{x})$ is the distribution of the effective charge density that the interaction couples to.\nFor simplicity we will choose a dipole form factor of the form\\footnote{%\nThe exact choice of the form factor does not significantly affect our results, as long as $G_\\chi(Q^2)